The efficiency of an unbiased estimator, T, of a parameter θ is defined as

: e(T) = \frac{1/\mathcal{I}(\theta)}{\operatorname{var}(T)}

where \mathcal{I}(\theta) is the Fisher information of the sample. Thus e(T) is the minimum possible variance for an unbiased estimator divided by its actual variance. The Cramér–Rao bound can be used to prove that e(T) ≤ 1.
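As a numerical illustration (a minimal sketch with arbitrary choices λ = 4 and n = 50), the efficiency of the sample mean of a Poisson(λ) sample can be estimated by simulation, using the fact that the Fisher information of such a sample is \mathcal{I}(\lambda) = n/\lambda:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 4.0, 50, 100_000

# Sample mean of each of `reps` Poisson samples of size n.
means = rng.poisson(lam, size=(reps, n)).mean(axis=1)

crlb = lam / n             # 1 / I(lambda) for n i.i.d. observations
print(crlb / means.var())  # efficiency: close to 1, the sample mean is efficient
</syntaxhighlight>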
==Efficient estimators==
An efficient estimator is an estimator that estimates the quantity of interest in some “best possible” manner. The notion of “best possible” relies upon the choice of a particular loss function, the function which quantifies the relative degree of undesirability of estimation errors of different magnitudes. The most common choice of loss function is quadratic, resulting in the mean squared error criterion of optimality. In general, the spread of an estimator around the parameter θ is a measure of estimator efficiency and performance. This performance can be calculated by finding the mean squared error. More formally, let T be an estimator for the parameter θ. The mean squared error of T is the value \operatorname{MSE}(T)=\operatorname E[(T-\theta)^2], which can be decomposed as the sum of its variance and squared bias:

: \begin{align} \operatorname{MSE}(T) & = \operatorname E[(T-\theta)^2]=\operatorname E[(T-\operatorname E[T]+\operatorname E[T]-\theta)^2] \\[5pt] & =\operatorname E[(T-\operatorname E[T])^2]+2\operatorname E[T-\operatorname E[T]](\operatorname E[T]-\theta)+(\operatorname E[T]-\theta)^2 \\[5pt] & =\operatorname{var}(T)+(\operatorname E[T]-\theta)^2 \end{align}

where the cross term vanishes because \operatorname E[T-\operatorname E[T]] = 0.
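The decomposition is easy to check numerically. The following Python sketch (with a deliberately biased estimator, 0.9 times the sample mean, chosen purely for illustration) confirms that the simulated MSE matches variance plus squared bias:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 5.0, 20, 200_000

# A deliberately biased estimator: shrink the sample mean toward zero.
x = rng.normal(theta, 2.0, size=(reps, n))
T = 0.9 * x.mean(axis=1)

mse      = np.mean((T - theta) ** 2)
variance = T.var()
bias_sq  = (T.mean() - theta) ** 2
print(mse, variance + bias_sq)  # the two values agree closely
</syntaxhighlight>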
An estimator T_1 performs better than an estimator T_2 if \operatorname{MSE}(T_1) < \operatorname{MSE}(T_2). For a more specific case, if T_1 and T_2 are two unbiased estimators for the same parameter θ, then the variance can be compared to determine performance. In this case, T_2 is more efficient than T_1 if the variance of T_2 is smaller than the variance of T_1, i.e. \operatorname{var}(T_1)>\operatorname{var}(T_2) for all values of θ. This relationship can be determined by simplifying the more general case above for mean squared error; since the expected value of an unbiased estimator is equal to the parameter value, \operatorname E[T]=\theta. Therefore, for an unbiased estimator, \operatorname{MSE}(T)=\operatorname{var}(T), as the (\operatorname E[T]-\theta)^2 term drops out for being equal to 0.

Historically, finite-sample efficiency was an early optimality criterion. However, this criterion has some limitations:
• Finite-sample efficient estimators are extremely rare. In fact, it was proved that efficient estimation is possible only in an exponential family, and only for the natural parameters of that family.
• This notion of efficiency is sometimes restricted to the class of unbiased estimators. (Often it is not.) Since there are no good theoretical reasons to require that estimators be unbiased, this restriction is inconvenient. In fact, if we use mean squared error as a selection criterion, many biased estimators will slightly outperform the “best” unbiased ones. For example, in multivariate statistics for dimension three or more, the mean-unbiased estimator, the sample mean, is inadmissible: regardless of the outcome, its performance is worse than that of, for example, the James–Stein estimator.
• Finite-sample efficiency is based on the variance as the criterion according to which estimators are judged. A more general approach is to use loss functions other than quadratic ones, in which case finite-sample efficiency can no longer be formulated.
As an example, among the models encountered in practice, efficient estimators exist for: the mean μ of the normal distribution (but not the variance σ²), the parameter λ of the Poisson distribution, and the probability p in the binomial or multinomial distribution.

Consider the model of a normal distribution with unknown mean but known variance: { Pθ = N(θ, σ²) | θ ∈ R }. The data consists of n independent and identically distributed observations from this model: X_1, \ldots, X_n. We estimate the parameter θ using the sample mean of all observations:

: T(X) = \frac1n \sum_{i=1}^n X_i\ .

This estimator has mean θ and variance σ²/n, which is equal to the reciprocal of the Fisher information from the sample. Thus, the sample mean is a finite-sample efficient estimator for the mean of the normal distribution.
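For reference, the Fisher information behind this claim follows directly from the normal log-density (a standard computation, sketched here for completeness):

: \log f(x;\theta) = -\tfrac{1}{2}\log(2\pi\sigma^2) - \frac{(x-\theta)^2}{2\sigma^2}, \qquad \frac{\partial^2}{\partial\theta^2}\log f(x;\theta) = -\frac{1}{\sigma^2},

so \mathcal{I}(\theta) = n/\sigma^2 for a sample of size n, and the Cramér–Rao bound 1/\mathcal{I}(\theta) = \sigma^2/n is attained by T(X).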
==Asymptotic efficiency==
Asymptotic efficiency requires consistency, an asymptotically normal distribution of the estimator, and an asymptotic variance-covariance matrix no worse than that of any other estimator.
==Example: Median==
Consider a sample of size N drawn from a normal distribution of mean \mu and unit variance, i.e., X_n \sim \mathcal{N}(\mu, 1). The sample mean, \overline{X}, of the sample X_1, X_2, \ldots, X_N is defined as

: \overline{X} = \frac{1}{N} \sum_{n=1}^{N} X_n \sim \mathcal{N}\left(\mu, \frac{1}{N}\right).

The variance of the mean, 1/N (the square of the standard error), is equal to the reciprocal of the Fisher information from the sample, and thus, by the Cramér–Rao inequality, the sample mean is efficient in the sense that its efficiency is unity (100%).

Now consider the sample median, \widetilde{X}. This is an unbiased and consistent estimator for \mu. For large N the sample median is approximately normally distributed with mean \mu and variance \pi/(2N):

: \widetilde{X} \sim \mathcal{N}\left(\mu, \frac{\pi}{2N}\right).

The efficiency of the median for large N is thus

: e\left(\widetilde{X}\right) = \left(\frac{1}{N}\right) \left(\frac{\pi}{2N}\right)^{-1} = 2/\pi \approx 0.64.

In other words, the relative variance of the median will be \pi/2 \approx 1.57, or 57% greater than the variance of the mean; the standard error of the median will be about 25% greater than that of the mean. Note that this is the asymptotic efficiency, that is, the efficiency in the limit as the sample size N tends to infinity. For finite values of N the efficiency is higher than this (for example, a sample size of 3 gives an efficiency of about 74%). The sample mean is thus more efficient than the sample median in this example. However, there may be measures by which the median performs better. For example, the median is far more robust to outliers, so that if the Gaussian model is questionable or approximate, there may be advantages to using the median (see Robust statistics).
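This asymptotic figure is straightforward to verify by simulation. The sketch below (illustrative values: N = 500 and 20,000 replications) estimates both variances directly:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
mu, N, reps = 0.0, 500, 20_000

x = rng.normal(mu, 1.0, size=(reps, N))
var_mean   = x.mean(axis=1).var()
var_median = np.median(x, axis=1).var()

print(N * var_mean)            # about 1.0, matching 1/N
print(N * var_median)          # about pi/2, roughly 1.57
print(var_mean / var_median)   # efficiency of the median: about 0.64
</syntaxhighlight>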
==Dominant estimators==
If T_1 and T_2 are estimators for the parameter \theta, then T_1 is said to dominate T_2 if:
• its mean squared error (MSE) is smaller for at least some value of \theta, and
• the MSE does not exceed that of T_2 for any value of \theta.

Formally, T_1 dominates T_2 if

: \operatorname{E}[(T_1 - \theta)^2] \leq \operatorname{E}[(T_2-\theta)^2]

holds for all \theta, with strict inequality holding somewhere.
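A concrete (illustrative) instance: for a normal sample, the variance estimator that divides the centered sum of squares by n + 1 is known to have uniformly smaller MSE than the unbiased estimator that divides by n − 1, and therefore dominates it. The following sketch checks this at several parameter values:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
n, reps = 10, 200_000

for sigma2 in (0.5, 1.0, 4.0):  # dominance must hold at every parameter value
    x  = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
    ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    mse_unbiased = np.mean((ss / (n - 1) - sigma2) ** 2)
    mse_shrunk   = np.mean((ss / (n + 1) - sigma2) ** 2)
    print(sigma2, mse_unbiased, mse_shrunk)  # shrunk MSE is smaller each time
</syntaxhighlight>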
==Relative efficiency==
The relative efficiency of two unbiased estimators is defined as

: e(T_1,T_2) = \frac{\operatorname{E}[(T_2-\theta)^2]}{\operatorname{E}[(T_1-\theta)^2]} = \frac{\operatorname{var}(T_2)}{\operatorname{var}(T_1)}.

Although e is in general a function of \theta, in many cases the dependence drops out; if this is so, e being greater than one would indicate that T_1 is preferable, regardless of the true value of \theta. An alternative to relative efficiency for comparing estimators is the Pitman closeness criterion. This replaces the comparison of mean squared errors with a comparison of how often one estimator produces estimates closer to the true value than another estimator.
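A minimal sketch of the Pitman criterion (using the sample mean and median of a normal sample as an illustrative pairing):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
mu, n, reps = 0.0, 25, 100_000

x = rng.normal(mu, 1.0, size=(reps, n))
d_mean   = np.abs(x.mean(axis=1) - mu)
d_median = np.abs(np.median(x, axis=1) - mu)

# Fraction of samples in which the mean lands closer to mu than the median;
# under normality this fraction exceeds 1/2.
print(np.mean(d_mean < d_median))
</syntaxhighlight>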
==Estimators of the mean of u.i.d. variables==
In estimating the mean of uncorrelated, identically distributed variables we can take advantage of the fact that the variance of the sum is the sum of the variances. In this case efficiency can be defined as the square of the coefficient of variation, i.e.,

: e \equiv \left(\frac{\sigma}{\mu}\right)^2.

Relative efficiency of two such estimators can thus be interpreted as the relative sample size of one required to achieve the certainty of the other. Proof:

: \frac{e_1}{e_2} = \frac{s_1^2}{s_2^2}.

Now because s_1^2 = n_1 \sigma^2 and s_2^2 = n_2 \sigma^2, we have \frac{e_1}{e_2} = \frac{n_1}{n_2}, so the relative efficiency expresses the relative sample size of the first estimator needed to match the variance of the second.
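As a numerical illustration of the sample-size interpretation (a sketch reusing the mean-versus-median comparison above): the median of roughly (\pi/2)N observations has about the same variance as the mean of N observations.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
N, reps = 200, 20_000
M = int(round(np.pi / 2 * N))  # about 314 observations for the median

var_mean   = rng.normal(size=(reps, N)).mean(axis=1).var()
var_median = np.median(rng.normal(size=(reps, M)), axis=1).var()
print(var_mean, var_median)    # roughly equal
</syntaxhighlight>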
==Robustness==
The efficiency of an estimator may change significantly if the distribution changes, often dropping. This is one of the motivations of robust statistics: an estimator such as the sample mean is an efficient estimator of the population mean of a normal distribution, for example, but can be an inefficient estimator of a mixture distribution of two normal distributions with the same mean and different variances. For example, if a distribution is a combination of 98% N(μ, σ) and 2% N(μ, 10σ), the presence of extreme values from the latter distribution (often "contaminating outliers") significantly reduces the efficiency of the sample mean as an estimator of μ. By contrast, the trimmed mean is less efficient for a normal distribution, but is more robust (i.e., less affected) under changes in the distribution, and thus may be more efficient for a mixture distribution. Similarly, the shape of a distribution, such as skewness or heavy tails, can significantly reduce the efficiency of estimators that assume a symmetric distribution or thin tails.
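This contaminated-normal comparison can be sketched directly (illustrative choices: μ = 0, σ = 1, and trimming 10 observations from each tail of a sample of 100):

<syntaxhighlight lang="python">
import numpy as np

def trimmed_mean(x, k):
    """Mean after discarding the k smallest and k largest values in each row."""
    s = np.sort(x, axis=1)
    return s[:, k:-k].mean(axis=1)

rng = np.random.default_rng(6)
n, reps, k = 100, 50_000, 10

# 98% N(0, 1) contaminated by 2% N(0, 10^2).
scale = np.where(rng.random((reps, n)) < 0.02, 10.0, 1.0)
x = rng.normal(0.0, 1.0, size=(reps, n)) * scale

print(x.mean(axis=1).var())       # inflated by the contamination (about 0.03)
print(trimmed_mean(x, k).var())   # much closer to the clean-data value 0.01
</syntaxhighlight>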
==Efficiency in statistics==
Efficiency in statistics is important because it allows the performance of various estimators to be compared. Although an unbiased estimator is usually favored over a biased one, a more efficient biased estimator can sometimes be more valuable than a less efficient unbiased estimator. For example, this can occur when the values of the biased estimator cluster around a number closer to the true value. Thus, the performance of estimators can be compared easily through their mean squared errors or variances.
==Uses of inefficient estimators==
While efficiency is a desirable quality of an estimator, it must be weighed against other considerations, and an estimator that is efficient for certain distributions may well be inefficient for other distributions. Most significantly, estimators that are efficient for clean data from a simple distribution, such as the normal distribution (which is symmetric, unimodal, and has thin tails), may not be robust to contamination by outliers, and may be inefficient for more complicated distributions. In robust statistics, more importance is placed on robustness and applicability to a wide variety of distributions, rather than efficiency on a single distribution. M-estimators are a general class of estimators motivated by these concerns. They can be designed to yield both robustness and high relative efficiency, though possibly lower efficiency than traditional estimators in some cases. They can, however, be very computationally complicated. A more traditional alternative is L-estimators, which are very simple statistics that are easy to compute and interpret, in many cases robust, and often sufficiently efficient for initial estimates. See applications of L-estimators for further discussion. Inefficient statistics in this sense are discussed in detail in The Atomic Nucleus by R. D. Evans, written before the advent of computers, when efficiently estimating even the arithmetic mean of a sorted series of measurements was laborious.

==Hypothesis tests==