Mediant
When the weights are c and d, the weighted arithmetic average of two fractions a/c and b/d is equal to their mediant (a+b)/(c+d).
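For instance, with a/c = 1/2 and b/d = 3/4 the weights are the denominators 2 and 4, and the weighted average (2·(1/2) + 4·(3/4))/(2 + 4) = 4/6 = 2/3 equals the mediant (1+3)/(2+4). A minimal check in Python (the helper code is ours, not part of the article):

```python
from fractions import Fraction

# a/c = 1/2 and b/d = 3/4; the weights are the denominators c = 2 and d = 4.
a, c, b, d = 1, 2, 3, 4
fracs = [Fraction(a, c), Fraction(b, d)]
weights = [c, d]

# Weighted arithmetic mean: sum(w_i * x_i) / sum(w_i).
weighted_avg = sum(w * f for w, f in zip(weights, fracs)) / sum(weights)
mediant = Fraction(a + b, c + d)
assert weighted_avg == mediant == Fraction(2, 3)
```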
Weighted sample variance
Typically when a mean is calculated it is important to know the variance and standard deviation about that mean. When a weighted mean \mu^* is used, the variance of the weighted sample is different from the variance of the unweighted sample. The biased weighted sample variance \hat \sigma^2_\mathrm{w} is defined similarly to the normal
biased sample variance \hat \sigma^2:
: \begin{align} \hat \sigma^2\ &= \frac{\sum\limits_{i=1}^N \left(x_i - \mu\right)^2} N \\ \hat \sigma^2_\mathrm{w} &= \frac{\sum\limits_{i=1}^N w_i \left(x_i - \mu^{*}\right)^2 }{\sum_{i=1}^N w_i} \end{align}
where \sum_{i=1}^N w_i = 1 for normalized weights. If the weights are
frequency weights (and thus are random variables), it can be shown that \hat \sigma^2_\mathrm{w} is the maximum likelihood estimator of \sigma^2 for
iid Gaussian observations. For small samples, it is customary to use an
unbiased estimator for the population variance. In normal unweighted samples, the
N in the denominator (corresponding to the sample size) is changed to
N − 1 (see
Bessel's correction). In the weighted setting, there are actually two different unbiased estimators, one for the case of
frequency weights and another for the case of
reliability weights.
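Before turning to these two cases, the biased estimator \hat \sigma^2_\mathrm{w} above is straightforward to compute directly; a minimal sketch (the helper name is ours):

```python
import numpy as np

def weighted_var_biased(x, w):
    """Biased weighted sample variance: sum(w_i (x_i - mu*)^2) / sum(w_i)."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    mu = np.sum(w * x) / np.sum(w)              # weighted mean mu*
    return np.sum(w * (x - mu) ** 2) / np.sum(w)

print(weighted_var_biased([2, 4, 5], [2, 1, 3]))  # 1.8055... = 10.8333 / 6
```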
Frequency weights
If the weights are frequency weights (where a weight equals the number of occurrences), then the unbiased estimator is:
: s^2\ = \frac {\sum\limits_{i=1}^N w_i \left(x_i - \mu^*\right)^2} {\sum_{i=1}^N w_i - 1}
This effectively applies Bessel's correction for frequency weights. For example, if values \{2, 2, 4, 5, 5, 5\} are drawn from the same distribution, then we can treat this set as an unweighted sample, or we can treat it as the weighted sample \{2, 4, 5\} with corresponding weights \{2, 1, 3\}, and we get the same result either way. If the frequency weights \{w_i\} have been normalized to sum to 1, the Bessel correction must still be computed from the total number of samples: writing N' for the sum of the original, unnormalized frequency weights, the correct expression becomes
:s^2\ = \frac {N'} {N' - 1}\sum_{i=1}^N w_i \left(x_i - \mu^*\right)^2
where the total number of samples is N' (not N). In any case, the information on the total number of samples is necessary in order to obtain an unbiased correction, even if w_i has a meaning other than frequency weight. The estimator can be unbiased only if the weights are neither standardized nor normalized, since these processes change the data's mean and variance and thus lead to a loss of the base rate (the population count, which is a requirement for Bessel's correction).
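A quick numerical check of the frequency-weight estimator, reusing the example above (a sketch; the helper name is ours):

```python
import numpy as np

def weighted_var_freq(x, w):
    """Unbiased variance for frequency weights: Bessel's correction uses sum(w)."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    mu = np.sum(w * x) / np.sum(w)              # weighted mean mu*
    return np.sum(w * (x - mu) ** 2) / (np.sum(w) - 1)

x, w = [2, 4, 5], [2, 1, 3]                     # weighted sample with frequencies
expanded = [2, 2, 4, 5, 5, 5]                   # the same data written out in full
assert np.isclose(weighted_var_freq(x, w), np.var(expanded, ddof=1))
```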
Reliability weights
If the weights are instead reliability weights (non-random values reflecting the sample's relative trustworthiness, often derived from sample variance), we can determine a correction factor to yield an unbiased estimator. Assuming each random variable is sampled from the same distribution with mean \mu and actual variance \sigma_{\text{actual}}^2, taking expectations we have:
: \begin{align} \operatorname{E} [\hat \sigma^2] &= \frac{ \sum\limits_{i=1}^N \operatorname{E} [(x_i - \mu)^2]} N \\ &= \operatorname{E} [(X - \operatorname{E}[X])^2] - \frac{1}{N} \operatorname{E} [(X - \operatorname{E}[X])^2] \\ &= \left( \frac{N - 1} N \right) \sigma_{\text{actual}}^2 \\ \operatorname{E} [\hat \sigma^2_\mathrm{w}] &= \frac{\sum\limits_{i=1}^N w_i \operatorname{E} [(x_i - \mu^*)^2] }{V_1} \\ &= \operatorname{E}[(X - \operatorname{E}[X])^2] - \frac{V_2}{V_1^2} \operatorname{E}[(X - \operatorname{E}[X])^2] \\ &= \left(1 - \frac{V_2 }{ V_1^2}\right) \sigma_{\text{actual}}^2 \end{align}
where V_1 = \sum_{i=1}^N w_i and V_2 = \sum_{i=1}^N w_i^2. Therefore, the bias in our estimator is \left(1 - \frac{V_2 }{ V_1^2}\right), analogous to the \left( \frac{N - 1} N \right) bias in the unweighted estimator (also notice that V_1^2 / V_2 = N_{eff} is the effective sample size). This means that to unbias our estimator we need to pre-divide by 1 - \left(V_2 / V_1^2\right), ensuring that the expected value of the estimated variance equals the actual variance of the sampling distribution. The final unbiased estimate of sample variance is:
: \begin{align} s^2_{\mathrm{w}}\ &= \frac{\hat \sigma^2_\mathrm{w}} {1 - (V_2 / V_1^2)} \\[4pt] &= \frac {\sum\limits_{i=1}^N w_i (x_i - \mu^*)^2} {V_1 - (V_2 / V_1)}, \end{align}
where \operatorname{E}[s^2_{\mathrm{w}}] = \sigma_{\text{actual}}^2. The degrees of freedom of this weighted, unbiased sample variance vary accordingly from N − 1 down to 0. The standard deviation is simply the square root of the variance above. As a side note, other approaches have been described to compute the weighted sample variance.
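A minimal sketch of the reliability-weight estimator s^2_\mathrm{w} (the helper name is ours); note that equal weights recover the ordinary unbiased sample variance:

```python
import numpy as np

def weighted_var_rel(x, w):
    """Unbiased variance for reliability weights: divide by V1 - V2/V1."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    v1, v2 = w.sum(), (w ** 2).sum()
    mu = (w * x).sum() / v1                     # weighted mean mu*
    return (w * (x - mu) ** 2).sum() / (v1 - v2 / v1)

x = [2.0, 4.0, 5.0]
print(weighted_var_rel(x, [0.2, 0.1, 0.7]))     # unequal reliability weights
# With equal weights, the correction reduces to Bessel's N/(N-1):
assert np.isclose(weighted_var_rel(x, [1/3, 1/3, 1/3]), np.var(x, ddof=1))
```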
Weighted sample covariance
In a weighted sample, each row vector \mathbf{x}_{i} (each set of single observations on each of the K random variables) is assigned a weight w_i \geq 0. Then the weighted mean vector \mathbf{\mu^*} is given by
: \mathbf{\mu^*}=\frac{\sum_{i=1}^N w_i \mathbf{x}_i}{\sum_{i=1}^N w_i}.
And the weighted covariance matrix is given by:
:\mathbf{C} = \frac {\sum_{i=1}^N w_i \left(\mathbf{x}_i - \mu^*\right)^T \left(\mathbf{x}_i - \mu^*\right)} {V_1}.
Similarly to weighted sample variance, there are two different unbiased estimators depending on the type of the weights.
Frequency weights
If the weights are frequency weights, the unbiased weighted estimate of the covariance matrix \textstyle \mathbf{C}, with Bessel's correction, is given by:
: \mathbf{C} = \frac {\sum_{i=1}^N w_i \left(\mathbf{x}_i - \mu^*\right)^T \left(\mathbf{x}_i - \mu^*\right)} {V_1 - 1}.
As in the scalar case, this estimator can be unbiased only if the weights are neither standardized nor normalized.
Reliability weights
If the weights are instead reliability weights, the unbiased weighted estimate of the covariance matrix is:
: \begin{align} \mathbf{C} &= \frac{\sum_{i=1}^N w_i}{\left(\sum_{i=1}^N w_i\right)^2-\sum_{i=1}^N w_i^2} \sum_{i=1}^N w_i \left(\mathbf{x}_i - \mu^*\right)^T \left(\mathbf{x}_i - \mu^*\right) \\ &= \frac {\sum_{i=1}^N w_i \left(\mathbf{x}_i - \mu^*\right)^T \left(\mathbf{x}_i - \mu^*\right)} {V_1 - (V_2 / V_1)}. \end{align}
The reasoning here is the same as in the previous section. If the weights are normalized, then V_1 = 1 and this reduces to:
: \mathbf{C}=\frac{\sum_{i=1}^N w_i \left(\mathbf{x}_i - \mu^*\right)^T \left(\mathbf{x}_i - \mu^*\right)}{1-V_2}.
If all weights are the same, i.e. w_{i} / V_1=1/N, then the weighted mean and covariance reduce to the unweighted sample mean and covariance above.
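A compact numerical sketch of the reliability-weight covariance estimator (the function name is ours); numpy's np.cov applies the same V_1 - (V_2/V_1) normalization when given aweights, which we use as a cross-check:

```python
import numpy as np

def weighted_cov_rel(X, w):
    """Unbiased covariance for reliability weights; rows of X are observations."""
    X, w = np.asarray(X, float), np.asarray(w, float)
    v1, v2 = w.sum(), (w ** 2).sum()
    mu = w @ X / v1                             # weighted mean vector mu*
    D = X - mu                                  # deviations from mu*
    return (w[:, None] * D).T @ D / (v1 - v2 / v1)

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0]])
w = np.array([0.5, 0.3, 0.2])
assert np.allclose(weighted_cov_rel(X, w), np.cov(X.T, aweights=w))
```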
Vector-valued estimates
The above generalizes easily to the case of taking the mean of vector-valued estimates. For example, estimates of position on a plane may have less certainty in one direction than another. As in the scalar case, the weighted mean of multiple estimates can provide a maximum likelihood estimate. We simply replace the variance \sigma^2 by the covariance matrix \mathbf{C} and the arithmetic inverse by the matrix inverse (both denoted in the same way, via superscripts); the weight matrix then reads:
: \mathbf{W}_i = \mathbf{C}_i^{-1}.
The weighted mean in this case is:
: \bar{\mathbf{x}} = \mathbf{C}_{\bar{\mathbf{x}}} \left(\sum_{i=1}^n \mathbf{W}_i \mathbf{x}_i\right)
(where the order of the matrix–vector product is not commutative), expressed in terms of the covariance of the weighted mean:
: \mathbf{C}_{\bar{\mathbf{x}}} = \left(\sum_{i=1}^n \mathbf{W}_i\right)^{-1}.
For example, consider the weighted mean of the point [1 0] with high variance in the second component and [0 1] with high variance in the first component. Then
: \mathbf{x}_1 := \begin{bmatrix}1 & 0\end{bmatrix}^\top, \qquad \mathbf{C}_1 := \begin{bmatrix}1 & 0\\ 0 & 100\end{bmatrix}
: \mathbf{x}_2 := \begin{bmatrix}0 & 1\end{bmatrix}^\top, \qquad \mathbf{C}_2 := \begin{bmatrix}100 & 0\\ 0 & 1\end{bmatrix}
then the weighted mean is:
: \begin{align} \bar{\mathbf{x}} & = \left(\mathbf{C}_1^{-1} + \mathbf{C}_2^{-1}\right)^{-1} \left(\mathbf{C}_1^{-1} \mathbf{x}_1 + \mathbf{C}_2^{-1} \mathbf{x}_2\right) \\[5pt] & =\begin{bmatrix} 0.9901 & 0\\ 0 & 0.9901\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix} = \begin{bmatrix}0.9901 \\ 0.9901\end{bmatrix} \end{align}
which makes sense: the [1 0] estimate is "compliant" in the second component and the [0 1] estimate is compliant in the first component, so the weighted mean is nearly [1 1].
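The numbers in this example are easy to reproduce; a minimal sketch:

```python
import numpy as np

x1, C1 = np.array([1.0, 0.0]), np.diag([1.0, 100.0])
x2, C2 = np.array([0.0, 1.0]), np.diag([100.0, 1.0])

W1, W2 = np.linalg.inv(C1), np.linalg.inv(C2)   # weight matrices W_i = C_i^{-1}
C_bar = np.linalg.inv(W1 + W2)                  # covariance of the weighted mean
x_bar = C_bar @ (W1 @ x1 + W2 @ x2)
print(x_bar)                                    # [0.99009901 0.99009901]
```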
Accounting for correlations
In the general case, suppose that \mathbf{X}=[x_1,\dots,x_n]^T, \mathbf{C} is the covariance matrix relating the quantities x_i, \bar{x} is the common mean to be estimated, and \mathbf{J} is a design matrix equal to a vector of ones [1, \dots, 1]^T (of length n). The Gauss–Markov theorem states that the estimate of the mean having minimum variance is given by:
:\sigma^2_\bar{x}=(\mathbf{J}^T \mathbf{W} \mathbf{J})^{-1},
and
:\bar{x} = \sigma^2_\bar{x} (\mathbf{J}^T \mathbf{W} \mathbf{X}),
where:
:\mathbf{W} = \mathbf{C}^{-1}.
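A minimal sketch of this minimum-variance estimate for correlated observations (the data and covariance matrix below are illustrative):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0])                   # observations with a common mean
C = np.array([[1.0, 0.5, 0.0],                  # illustrative covariance matrix:
              [0.5, 1.0, 0.5],                  # neighbouring observations are
              [0.0, 0.5, 1.0]])                 # positively correlated

W = np.linalg.inv(C)                            # W = C^{-1}
J = np.ones(len(X))                             # design matrix: a vector of ones
var_mean = 1.0 / (J @ W @ J)                    # variance of the estimated mean
x_bar = var_mean * (J @ W @ X)                  # minimum-variance estimate
print(x_bar, var_mean)
```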
Decreasing strength of interactions
Consider the time series of an independent variable x and a dependent variable y, with n observations sampled at discrete times t_i. In many common situations, the value of y at time t_i depends not only on x_i but also on the past values of x. Commonly, the strength of this dependence decreases as the separation of observations in time increases. To model this situation, one may replace the independent variable by its sliding mean z for a window size m:
:z_k=\sum_{i=1}^m w_i x_{k+1-i}.
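In 0-based array indexing, the sliding mean reads as follows (a minimal sketch; the helper name is ours):

```python
import numpy as np

def sliding_weighted_mean(x, w):
    """z_k = sum_{i=1..m} w_i * x_{k+1-i}: w[0] weights the newest sample."""
    m = len(w)
    return np.array([np.dot(w, x[k + 1 - m:k + 1][::-1])
                     for k in range(m - 1, len(x))])

x = np.arange(10.0)
print(sliding_weighted_mean(x, w=np.array([0.5, 0.3, 0.2])))
```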
Exponentially decreasing weights
In the scenario described in the previous section, most frequently the decrease in interaction strength obeys a negative exponential law. If the observations are sampled at equidistant times, then exponential decrease is equivalent to decrease by a constant fraction 0 < \Delta < 1 at each time step. Setting w=1-\Delta we can define m normalized weights by
: w_i=\frac {w^{i-1}}{V_1},
where V_1 is the sum of the unnormalized weights. In this case V_1 is simply
: V_1=\sum_{i=1}^m{w^{i-1}} = \frac {1-w^{m}}{1-w},
approaching V_1=1/(1-w) for large values of m. The damping constant w must correspond to the actual decrease of interaction strength. If this cannot be determined from theoretical considerations, then the following properties of exponentially decreasing weights are useful in making a suitable choice: at step (1-w)^{-1}, the weight approximately equals e^{-1}(1-w) \approx 0.37(1-w), the tail area approximately e^{-1}, and the head area approximately 1-e^{-1} \approx 0.63. The tail area at step n is \le e^{-n(1-w)}. Where primarily the closest n observations matter and the effect of the remaining observations can be ignored safely, then choose w such that the tail area is sufficiently small.
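A minimal sketch constructing these normalized weights and checking the tail-area bound (the helper name is ours; w = 0.9 is an illustrative choice):

```python
import numpy as np

def exp_weights(m, delta):
    """m normalized weights w_i proportional to w^(i-1), with w = 1 - delta."""
    w = 1.0 - delta
    raw = w ** np.arange(m)                     # w^0, w^1, ..., w^(m-1)
    return raw / raw.sum()                      # divide by V1 = (1 - w^m)/(1 - w)

wts = exp_weights(m=20, delta=0.1)              # damping constant w = 0.9
assert np.isclose(wts.sum(), 1.0)
n = 10
print(wts[n:].sum(), "<=", np.exp(-n * 0.1))    # tail area <= e^(-n(1-w))
```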
Weighted averages of functions
The concept of weighted average can be extended to functions. Weighted averages of functions play an important role in the systems of weighted differential and integral calculus.
Correcting for over- or under-dispersion
Weighted means are typically used to find the weighted mean of historical data, rather than theoretically generated data. In this case, there will be some error in the variance of each data point. Typically experimental errors may be underestimated because the experimenter does not take into account all sources of error in calculating the variance of each data point. In this event, the variance in the weighted mean must be corrected to account for the fact that \chi^2 is too large. The correction that must be made is
:\hat{\sigma}_{\bar{x}}^2 = \sigma_{\bar{x}}^2 \chi^2_\nu
where \chi^2_\nu is the reduced chi-squared:
:\chi^2_\nu = \frac{1}{(n-1)} \sum_{i=1}^n \frac{ (x_i - \bar{x} )^2}{ \sigma_i^2 }.
The square root \hat{\sigma}_{\bar{x}} can be called the standard error of the weighted mean (variance weights, scale corrected). When all data variances are equal, \sigma_i = \sigma_0, they cancel out in the weighted mean variance, \sigma_{\bar{x}}^2, which again reduces to the standard error of the mean (squared), \sigma_{\bar{x}}^2 = \sigma^2/n, formulated in terms of the sample standard deviation (squared),
:\sigma^2 = \frac {\sum_{i=1}^n (x_i - \bar{x} )^2} {n-1}.
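A minimal sketch of this scale correction for inverse-variance weights, where \sigma_{\bar{x}}^2 = 1/\sum_i \sigma_i^{-2} (the data are illustrative):

```python
import numpy as np

x = np.array([10.2, 10.9, 9.6, 10.4])           # measurements
s = np.array([0.3, 0.3, 0.4, 0.3])              # quoted standard errors sigma_i

w = 1.0 / s ** 2                                # inverse-variance weights
x_bar = np.sum(w * x) / np.sum(w)               # weighted mean
var_xbar = 1.0 / np.sum(w)                      # nominal variance of the mean
chi2_nu = np.sum((x - x_bar) ** 2 / s ** 2) / (len(x) - 1)
var_hat = var_xbar * chi2_nu                    # scale-corrected variance
print(x_bar, np.sqrt(var_hat))                  # mean and corrected standard error
```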