Pearson's product-moment coefficient
The most familiar measure of dependence between two quantities is the Pearson product-moment correlation coefficient, commonly called 'Pearson's correlation coefficient' or simply 'the correlation coefficient', since it is the most widely used of the correlation coefficients. It is obtained by dividing the covariance of the two variables by the product of their standard deviations (equivalently, by the square root of the product of their variances).
Karl Pearson developed the coefficient from a similar idea introduced by
Francis Galton. The Pearson product-moment correlation coefficient measures how well a line of best fit describes a dataset of two variables: it compares the observed pairs of values against the values expected under an exact linear relationship, and its magnitude indicates how far the actual data fall from that line. Its sign indicates whether any linear relationship between the variables is positive or negative. The population correlation coefficient \rho_{X,Y} between two
random variables X and Y with
expected values \mu_X and \mu_Y and
standard deviations \sigma_X and \sigma_Y is defined as: \rho_{X,Y} = \operatorname{corr}(X,Y) = {\operatorname{cov}(X,Y) \over \sigma_X \sigma_Y} = {\operatorname{E}[(X-\mu_X)(Y-\mu_Y)] \over \sigma_X\sigma_Y}, \quad \text{if}\ \sigma_{X}\sigma_{Y}>0. where \operatorname{E} is the
expected value operator, \operatorname{cov} means
covariance, and \operatorname{corr} is a widely used alternative notation for the correlation coefficient. The Pearson correlation is defined only if both standard deviations are finite and positive. An alternative formula purely in terms of
moments is: \rho_{X,Y} = {\operatorname{E}(XY)-\operatorname{E}(X)\operatorname{E}(Y)\over \sqrt{\operatorname{E}(X^2)-\operatorname{E}(X)^2}\cdot \sqrt{\operatorname{E}(Y^2)-\operatorname{E}(Y)^2} }
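As a quick numerical check, the two population formulas above can be compared on simulated data. The following is an illustrative sketch, assuming NumPy is available; the linear model y = 2x + noise is an arbitrary choice, not part of the definition:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 2.0 * x + rng.normal(size=100_000)   # true rho = 2/sqrt(5) ~ 0.894

# Definition: cov(X, Y) / (sigma_X * sigma_Y)
rho_cov = np.cov(x, y, bias=True)[0, 1] / (np.std(x) * np.std(y))

# Moment form: (E[XY] - E[X]E[Y]) / sqrt((E[X^2] - E[X]^2)(E[Y^2] - E[Y]^2))
num = np.mean(x * y) - np.mean(x) * np.mean(y)
den = np.sqrt((np.mean(x**2) - np.mean(x)**2) * (np.mean(y**2) - np.mean(y)**2))
rho_moments = num / den

print(rho_cov, rho_moments, 2 / np.sqrt(5))   # the three values agree to sampling error
```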
Correlation and independence
It is a corollary of the
Cauchy–Schwarz inequality that the
absolute value of the Pearson correlation coefficient does not exceed 1. Therefore, the value of a correlation coefficient always lies between −1 and +1. The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), −1 in the case of a perfect inverse (decreasing) linear relationship (
anti-correlation), and some value in the
open interval (-1,1) in all other cases, indicating the degree of
linear dependence between the variables. The closer the coefficient is to zero, the weaker the linear relationship; the closer it is to either −1 or +1, the stronger the correlation between the variables. If the variables are
independent, Pearson's correlation coefficient is 0. However, because the correlation coefficient detects only linear dependencies between two variables, the converse is not necessarily true. A correlation coefficient of 0 does not imply that the variables are independent. \begin{align} X,Y \text{ independent} \quad & \Rightarrow \quad \rho_{X,Y} = 0 \quad (X,Y \text{ uncorrelated})\\ \rho_{X,Y} = 0 \quad (X,Y \text{ uncorrelated})\quad & \nRightarrow \quad X,Y \text{ independent} \end{align} For example, suppose the random variable X is symmetrically distributed about zero, and Y=X^2. Then Y is completely determined by X, so that X and Y are perfectly dependent, but their correlation is zero; they are
uncorrelated. However, in the special case when X and Y are
jointly normal, uncorrelatedness is equivalent to independence. Even though uncorrelatedness does not imply independence, independence can still be tested by other means: two random variables are independent if and only if their
mutual information is 0.
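To make the Y = X^2 example concrete, here is a minimal Python sketch, assuming NumPy and the specific choice of X uniform on {−1, 0, 1} (any symmetric distribution about zero would do): the covariance comes out to exactly zero, while the mutual information is strictly positive, confirming dependence.

```python
import numpy as np
from math import log2

# X uniform on {-1, 0, 1} (symmetric about zero), Y = X^2.
xs = np.array([-1, 0, 1])
px = np.array([1/3, 1/3, 1/3])
ys = xs**2

mu_x = np.sum(px * xs)                        # 0
mu_y = np.sum(px * ys)                        # 2/3
cov_xy = np.sum(px * (xs - mu_x) * (ys - mu_y))
print(cov_xy)                                 # 0.0 -> X and Y are uncorrelated

# Mutual information from the joint law P(X=x, Y=x^2) = P(X=x):
mi = 0.0
for x, p in zip(xs, px):
    p_y = np.sum(px[ys == x**2])              # marginal P(Y = x^2)
    mi += p * log2(1 / p_y)                   # p(x,y) log[p(x,y) / (p(x) p_Y(y))], with p(x,y) = p(x)
print(mi)                                     # ~0.918 bits > 0 -> X and Y are dependent
```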
Sample correlation coefficient
Given a series of n measurements of the pair (X_i,Y_i) indexed by i=1,\ldots,n, the
sample correlation coefficient can be used to estimate the population Pearson correlation \rho_{X,Y} between X and Y. The sample correlation coefficient is defined as : r_{xy} \quad \overset{\underset{\mathrm{def}}{}}{=} \quad \frac{\sum\limits_{i=1}^n (x_i-\bar{x})(y_i-\bar{y})}{(n-1)s_x s_y} =\frac{\sum\limits_{i=1}^n (x_i-\bar{x})(y_i-\bar{y})} {\sqrt{\sum\limits_{i=1}^n (x_i-\bar{x})^2 \sum\limits_{i=1}^n (y_i-\bar{y})^2}}, where \overline{x} and \overline{y} are the sample
means of X and Y, and s_x and s_y are the
corrected sample standard deviations of X and Y. Equivalent expressions for r_{xy} are : \begin{align} r_{xy} &=\frac{\sum x_iy_i-n \bar{x} \bar{y}}{n s'_x s'_y} \\[5pt] &=\frac{n\sum x_iy_i-\sum x_i\sum y_i}{\sqrt{n\sum x_i^2-(\sum x_i)^2}~\sqrt{n\sum y_i^2-(\sum y_i)^2}}. \end{align} where s'_x and s'_y are the
uncorrected sample standard deviations of X and Y. If x and y are results of measurements that contain measurement error, the realistic limits on the correlation coefficient are not −1 to +1 but a smaller range. For the case of a linear model with a single independent variable, the
coefficient of determination (R squared) is the square of r_{xy}, Pearson's product-moment coefficient.
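The defining formula and its single-pass equivalent can be verified numerically. The following is a small sketch assuming NumPy; the data values are made up for illustration:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 5.0, 8.0])
y = np.array([0.11, 0.13, 0.12, 0.15, 0.18])   # made-up sample data

n = len(x)
xbar, ybar = x.mean(), y.mean()

# Defining formula, with corrected (ddof=1) sample standard deviations
r = np.sum((x - xbar) * (y - ybar)) / ((n - 1) * x.std(ddof=1) * y.std(ddof=1))

# Single-pass equivalent form
r_alt = (n * np.sum(x * y) - x.sum() * y.sum()) / (
    np.sqrt(n * np.sum(x**2) - x.sum()**2) * np.sqrt(n * np.sum(y**2) - y.sum()**2)
)

print(r, r_alt, np.corrcoef(x, y)[0, 1])   # all three agree
```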
Example
Consider the joint probability distribution of X and Y given by: :\mathrm{P}(X=x,\,Y=y)= \begin{cases} \frac 1 3 & \quad \text{for } (x,y)\in\{(0,0),\,(1,-1),\,(1,1)\} \\ 0 & \quad \text{otherwise.} \end{cases} For this joint distribution, the
marginal distributions are: :\mathrm{P}(X=x)= \begin{cases} \frac 1 3 & \quad \text{for } x=0 \\ \frac 2 3 & \quad \text{for } x=1 \end{cases} :\mathrm{P}(Y=y)= \begin{cases} \frac 1 3 & \quad \text{for } y=-1 \\ \frac 1 3 & \quad \text{for } y=0 \\ \frac 1 3 & \quad \text{for } y=1 \end{cases} This yields the following expectations and variances: :\mu_X = \frac 2 3 :\mu_Y = 0 :\sigma_X^2 = \frac 2 9 :\sigma_Y^2 = \frac 2 3 Therefore: : \begin{align} \rho_{X,Y} & = \frac{1}{\sigma_X \sigma_Y} \mathrm{E}[(X-\mu_X)(Y-\mu_Y)] \\[5pt] & = \frac{1}{\sigma_X \sigma_Y} \sum_{x,y}{(x-\mu_X)(y-\mu_Y) \mathrm{P}(X=x,Y=y)} \\[5pt] & = \frac{3\sqrt{3}}{2}\left(\left(1-\frac 2 3\right)(-1-0)\frac{1}{3} + \left(0-\frac 2 3\right)(0-0)\frac{1}{3} + \left(1-\frac 2 3\right)(1-0)\frac{1}{3}\right) = 0. \end{align}
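The same bookkeeping can be reproduced in exact arithmetic. A short sketch using Python's fractions module; the dictionary encoding of the joint distribution is just one convenient representation:

```python
from fractions import Fraction as F

# The three cells of the joint distribution with nonzero probability
joint = {(0, 0): F(1, 3), (1, -1): F(1, 3), (1, 1): F(1, 3)}

mu_x = sum(p * x for (x, _), p in joint.items())                  # 2/3
mu_y = sum(p * y for (_, y), p in joint.items())                  # 0
var_x = sum(p * (x - mu_x) ** 2 for (x, _), p in joint.items())   # 2/9
var_y = sum(p * (y - mu_y) ** 2 for (_, y), p in joint.items())   # 2/3
cov = sum(p * (x - mu_x) * (y - mu_y) for (x, y), p in joint.items())

print(mu_x, mu_y, var_x, var_y, cov)   # 2/3 0 2/9 2/3 0 -> rho = 0
```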
Rank correlation coefficients
Rank correlation coefficients, such as
Spearman's rank correlation coefficient and
Kendall's rank correlation coefficient (τ) measure the extent to which, as one variable increases, the other variable tends to increase, without requiring that increase to be represented by a linear relationship. If, as one variable increases, the other
decreases, the rank correlation coefficients will be negative. It is common to regard these rank correlation coefficients as alternatives to Pearson's coefficient, used either to reduce the amount of calculation or to make the coefficient less sensitive to non-normality in distributions. However, this view has little mathematical basis, as rank correlation coefficients measure a different type of relationship than the
Pearson product-moment correlation coefficient, and are best seen as measures of a different type of association, rather than as an alternative measure of the population correlation coefficient. To illustrate the nature of rank correlation, and its difference from linear correlation, consider the following four pairs of numbers (x,y): :(0, 1), (10, 100), (101, 500), (102, 2000). As we go from each pair to the next pair x increases, and so does y. This relationship is perfect, in the sense that an increase in x is
always accompanied by an increase in y. This means that we have a perfect rank correlation, and both Spearman's and Kendall's correlation coefficients are 1, whereas in this example the Pearson product-moment correlation coefficient is 0.7544, indicating that the points are far from lying on a straight line. In the same way, if y always
decreases when x
increases, the rank correlation coefficients will be −1, while the Pearson product-moment correlation coefficient may or may not be close to −1, depending on how close the points are to a straight line. Although in the extreme cases of perfect rank correlation the two coefficients are both equal (being both +1 or both −1), this is not generally the case, and so the values of the two coefficients cannot be meaningfully compared. For example, for the three pairs (1, 1), (2, 3) and (3, 2), Spearman's coefficient is 1/2, while Kendall's coefficient is 1/3.
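The four-pair illustration above can be checked directly; a brief sketch assuming SciPy is installed:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

x = np.array([0, 10, 101, 102])
y = np.array([1, 100, 500, 2000])

print(pearsonr(x, y)[0])    # ~0.7544: the points are far from a straight line
print(spearmanr(x, y)[0])   # 1.0: y always increases with x
print(kendalltau(x, y)[0])  # 1.0: every pair of points is concordant
```

==Common misconceptions==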