==Background==
In statistical inference, there are several approaches to estimation theory, each of which leads more or less directly to particular estimators. For example, ideas from
Bayesian inference would lead directly to
Bayesian estimators. Similarly, the theory of classical statistical inference can sometimes lead to strong conclusions about what estimator should be used. However, the usefulness of these theories depends on having a fully prescribed
statistical model and may also depend on having a relevant loss function to determine the estimator. Thus a
Bayesian analysis might be undertaken, leading to a posterior distribution for the relevant parameters, but the use of a specific utility or loss function may be unclear. Ideas of invariance can then be applied to the task of summarising the posterior distribution. In other cases, statistical analyses are undertaken without a fully defined statistical model, or the classical theory of statistical inference cannot readily be applied because the family of models being considered is not amenable to such treatment.

In addition to these cases where general theory does not prescribe an estimator, the concept of invariance of an estimator can be applied when seeking estimators of alternative forms, either for the sake of simplicity of application of the estimator or so that the estimator is robust.

The concept of invariance is sometimes used on its own as a way of choosing between estimators, but this is not necessarily definitive. For example, a requirement of invariance may be incompatible with the requirement that the
estimator be mean-unbiased; on the other hand, the criterion of
median-unbiasedness is defined in terms of the estimator's
sampling distribution and so is invariant under many transformations.

One use of the concept of invariance is where a class or family of estimators is proposed and a particular formulation must be selected amongst these. One procedure is to impose relevant invariance properties and then to find the formulation within this class that has the best properties, leading to what is called the optimal invariant estimator.
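A minimal numerical sketch of the point about median-unbiasedness, under purely illustrative assumptions: a normal location model, the sample median as a median-unbiased estimator of the location θ, and the exponential function as a monotone transformation of the parameter.

```python
import numpy as np

# Illustrative assumptions: T is the sample median of a N(theta, 1) sample of
# odd size (median-unbiased for theta); g is a strictly increasing map (exp).
rng = np.random.default_rng(0)
theta, n, reps = 1.0, 11, 200_000
g = np.exp

samples = rng.normal(theta, 1.0, size=(reps, n))
T = np.median(samples, axis=1)

print(np.mean(T >= theta))         # about 0.5: T is median-unbiased for theta
print(np.mean(g(T) >= g(theta)))   # same value: median-unbiasedness survives g
print(np.mean(g(T)) - g(theta))    # clearly positive: mean-unbiasedness does not
```

The first two proportions agree because, for a strictly increasing g, the events T ≥ θ and g(T) ≥ g(θ) coincide, whereas the transformed estimator g(T) is no longer mean-unbiased for g(θ).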
==Some classes of invariant estimators==
There are several types of transformations that are usefully considered when dealing with invariant estimators. Each gives rise to a class of estimators which are invariant to that particular type of transformation.

• Shift invariance: Notionally, estimates of a location parameter should be invariant to simple shifts of the data values. If all data values are increased by a given amount, the estimate should change by the same amount. When considering estimation using a weighted average, this invariance requirement immediately implies that the weights should sum to one (a numerical sketch is given after this list). While the same result is often derived from a requirement for unbiasedness, the use of "invariance" does not require that a mean value exists and makes no use of any probability distribution at all.

• Scale invariance: Note that this topic concerns invariance with respect to a scale parameter of the estimation problem, not to be confused with the more general notion of scale invariance describing the behaviour of systems under changes of scale (as in physics).

• Parameter-transformation invariance: Here, the transformation applies to the parameters alone. The idea is that essentially the same inference should be made from data and a model involving a parameter θ as would be made from the same data if the model used a parameter φ, where φ is a one-to-one transformation of θ, φ = h(θ). According to this type of invariance, results from transformation-invariant estimators should also be related by φ = h(θ). Maximum likelihood estimators have this property when the transformation is monotonic (see the second sketch after this list). Though the asymptotic properties of the estimator might be invariant, the small-sample properties can be different, and a specific distribution needs to be derived.

• Permutation invariance: Where a set of data values can be represented by a statistical model in which they are outcomes from independent and identically distributed random variables, it is reasonable to impose the requirement that any estimator of any property of the common distribution should be permutation-invariant: specifically, the estimator, considered as a function of the set of data values, should not change if items of data are swapped within the dataset. The combination of permutation invariance and location invariance for estimating a location parameter from an independent and identically distributed dataset using a weighted average implies that the weights should be identical and sum to one (this is also included in the sketch below). Of course, estimators other than a weighted average may be preferable.
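A minimal numerical sketch of the two weighted-average claims above; the data values and candidate weight vectors are arbitrary illustrations, not taken from any particular source.

```python
import numpy as np

# For a weighted average a(x) = w . x, shift invariance a(x + c) = a(x) + c
# holds exactly when the weights sum to one, and permutation invariance
# additionally forces equal weights, reducing the estimator to the sample mean.
def weighted_average(x, w):
    return np.dot(w, x)

x = np.array([2.0, 5.0, 3.0, 8.0])
c = 10.0

w_bad = np.array([0.4, 0.3, 0.2, 0.3])    # weights sum to 1.2
w_shift = np.array([0.4, 0.3, 0.2, 0.1])  # weights sum to 1, but unequal
w_both = np.full(4, 0.25)                 # equal weights summing to 1

for w in (w_bad, w_shift, w_both):
    shift_ok = np.isclose(weighted_average(x + c, w), weighted_average(x, w) + c)
    # spot check of permutation invariance using one permutation (reversal)
    perm_ok = np.isclose(weighted_average(x[::-1], w), weighted_average(x, w))
    print(round(w.sum(), 2), shift_ok, perm_ok)
# Expected output: 1.2 False False / 1.0 True False / 1.0 True True
```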
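The parameter-transformation property of maximum likelihood can be sketched in the same spirit. The exponential model below is assumed purely for illustration, with the rate λ and the mean μ = 1/λ as the two parameterisations, and a simple grid search standing in for a proper optimiser.

```python
import numpy as np

# The maximum likelihood estimate transforms along with the parameter: for
# i.i.d. Exponential data, maximising the log-likelihood over the rate lambda
# and over the mean mu = 1/lambda gives estimates linked by mu_hat = 1/lambda_hat.
rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=500)   # true mean 2.0, true rate 0.5

def loglik_rate(lam):
    return len(x) * np.log(lam) - lam * x.sum()

def loglik_mean(mu):
    return loglik_rate(1.0 / mu)           # same model, reparameterised

grid = np.linspace(0.05, 10.0, 200_000)
lam_hat = grid[np.argmax(loglik_rate(grid))]
mu_hat = grid[np.argmax(loglik_mean(grid))]

print(lam_hat, 1.0 / mu_hat)               # agree up to grid resolution
print(mu_hat, x.mean())                    # MLE of the mean is the sample mean
```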
==Optimal invariant estimators==
Here we are given a set of measurements x which contains information about an unknown parameter \theta. The measurements x are modelled as a
vector random variable having a
probability density function f(x|\theta) which depends on a parameter vector \theta. The problem is to estimate \theta given x. The estimate, denoted by a, is a function of the measurements and belongs to a set A. The quality of the result is defined by a
loss function L=L(a,\theta) which determines a
risk function R=R(a,\theta)=E[L(a,\theta)|\theta]. The sets of possible values of x, \theta, and a are denoted by X, \Theta, and A, respectively.
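Under an invariant loss such as squared error in a location model, the risk of a shift-invariant estimator does not depend on \theta, which is what makes it meaningful to look for a single best ("optimal") invariant estimator within the class. A minimal Monte Carlo sketch of this constancy, assuming normal data and comparing two shift-invariant candidates (the sample mean and the sample median):

```python
import numpy as np

# Location model x_i ~ N(theta, 1) with squared-error loss L(a, theta) = (a - theta)^2.
# For shift-invariant estimators the risk R(a, theta) = E[L(a, theta) | theta]
# is (up to simulation error) the same for every theta, so each candidate can
# be summarised by a single risk number.
rng = np.random.default_rng(2)
n, reps = 9, 100_000

for theta in (-3.0, 0.0, 5.0):
    x = rng.normal(theta, 1.0, size=(reps, n))
    risk_mean = np.mean((x.mean(axis=1) - theta) ** 2)
    risk_median = np.mean((np.median(x, axis=1) - theta) ** 2)
    print(theta, round(risk_mean, 4), round(risk_median, 4))
# The estimated risks for each estimator barely change with theta.
```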
==In classification==
In
statistical classification, the rule which assigns a class to a new data-item can be considered to be a special type of estimator. A number of invariance-type considerations can be brought to bear in formulating
prior knowledge for pattern recognition.

==Mathematical setting==