The Shapiro–Wilk test tests the null hypothesis that a sample x_1, \dots, x_n came from a normally distributed population. The test statistic is

W = \frac{\left(\sum\limits_{i=1}^n a_i x_{(i)}\right)^2}{\sum\limits_{i=1}^n \left(x_i - \overline{x}\right)^2},

where
• x_{(i)} (with parentheses enclosing the subscript index i) is the ith order statistic, i.e., the ith-smallest number in the sample (not to be confused with x_i);
• \overline{x} = \left(x_1 + \cdots + x_n\right)/n is the sample mean.

The coefficients a_i are given by

(a_1, \dots, a_n) = \frac{m^{\mathsf{T}} V^{-1}}{C},

where C is a vector norm,

C = \left\| V^{-1} m \right\| = \left(m^{\mathsf{T}} V^{-1} V^{-1} m\right)^{1/2},

the vector m = (m_1, \dots, m_n)^{\mathsf{T}} consists of the expected values of the order statistics of independent and identically distributed random variables sampled from the standard normal distribution, and V is the covariance matrix of those normal order statistics. There is no name for the distribution of W; the cutoff values for the statistic are calculated through
Monte Carlo simulations.

==Interpretation==
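In practice the test is run via an existing implementation rather than by computing the coefficients a_i by hand. A minimal sketch using SciPy's `scipy.stats.shapiro` (the sample data and the 0.05 significance level are illustrative choices, not prescribed by the test itself):

```python
# Sketch of running the Shapiro-Wilk test with SciPy; the data are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=2.0, size=50)  # drawn from a normal population

# shapiro returns the test statistic W and the p-value.
w_stat, p_value = stats.shapiro(sample)
print(f"W = {w_stat:.4f}, p = {p_value:.4f}")

# Small W (hence small p) is evidence against normality.
if p_value < 0.05:
    print("Reject H0: sample appears non-normal")
else:
    print("Fail to reject H0: no evidence against normality")
```

Values of W close to 1 are consistent with normality; the p-value locates the observed W within the null distribution obtained by simulation.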
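The Monte Carlo calculation of cutoff values mentioned above can be sketched directly: simulate many samples from a standard normal distribution, compute W for each, and take a low percentile of the resulting values as the critical value. This sketch reuses `scipy.stats.shapiro` to compute W; the sample size and number of replications are arbitrary illustrative choices:

```python
# Monte Carlo sketch of estimating a critical (cutoff) value for W.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 20       # sample size (illustrative)
reps = 2000  # number of simulated null samples (illustrative)

# Compute W for many samples drawn under the null hypothesis of normality.
w_values = np.array(
    [stats.shapiro(rng.standard_normal(n))[0] for _ in range(reps)]
)

# The 5th percentile of W under the null: observed samples with W below
# this cutoff are rejected at (approximately) the 5% level.
cutoff = np.quantile(w_values, 0.05)
print(f"Estimated 5% cutoff for W (n={n}): {cutoff:.3f}")
```

Published tables of critical values were produced the same way, with far larger simulation counts to reduce Monte Carlo error.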