
Phi coefficient

In statistics, the phi coefficient, also known as the mean square contingency coefficient or Yule coefficient of correlation and commonly denoted by φ or rφ, is a measure of association between two binary variables. In machine learning and bioinformatics, it is known as the Matthews correlation coefficient (MCC). In meteorology and elsewhere, it is referred to as the Doolittle Measure of Association or Doolittle Skill Score. Described by Udny Yule in 1912 and given the name phi by Karl Pearson in the 1930s, it is a special case of the Pearson correlation coefficient.

Definition
A Pearson correlation coefficient estimated for two binary variables will return the phi coefficient. Two binary variables are considered positively associated if most of the data falls along the diagonal cells; they are considered negatively associated if most of the data falls off the diagonal. Suppose we have a 2×2 contingency table for two random variables x and y in which n_{11}, n_{10}, n_{01}, n_{00} are non-negative counts of observations that sum to n, the total number of observations, and n_{1\bullet}, n_{0\bullet}, n_{\bullet1}, n_{\bullet0} denote the row and column totals. The phi coefficient that describes the association of x and y is

:\varphi = \frac{n_{11}n_{00}-n_{10}n_{01}}{\sqrt{n_{1\bullet}n_{0\bullet}n_{\bullet0}n_{\bullet1}}}.

Phi is related to the point-biserial correlation coefficient and Cohen's d and estimates the extent of the relationship between two variables (2×2). The phi coefficient can also be expressed using only n, n_{11}, n_{1\bullet}, and n_{\bullet1}, as

:\varphi = \frac{nn_{11}-n_{1\bullet}n_{\bullet1}}{\sqrt{n_{1\bullet}n_{\bullet1}(n-n_{1\bullet})(n-n_{\bullet1})}}.
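The definition above translates directly into code. A minimal sketch (the function name `phi_coefficient` is illustrative, not from any particular library):

```python
from math import sqrt

def phi_coefficient(n11, n10, n01, n00):
    """Phi coefficient for a 2x2 contingency table of counts."""
    # Row and column (marginal) totals.
    n1_dot, n0_dot = n11 + n10, n01 + n00   # row totals
    n_dot1, n_dot0 = n11 + n01, n10 + n00   # column totals
    denom = sqrt(n1_dot * n0_dot * n_dot1 * n_dot0)
    return (n11 * n00 - n10 * n01) / denom

# Perfect positive association: all counts on the diagonal.
print(phi_coefficient(5, 0, 0, 5))   # 1.0
# No association: cells proportional to the marginals.
print(phi_coefficient(4, 4, 4, 4))   # 0.0
```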
Maximum values
In general, the Pearson correlation coefficient ranges from −1 to +1, where ±1 indicates perfect agreement or disagreement, and 0 indicates no relationship. The range of the phi coefficient—a special case of the Pearson correlation coefficient—is more tightly bound when either of the binary variables is class-imbalanced.
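This tighter bound can be illustrated numerically: with fixed, unequal marginal totals, no assignment of cell counts reaches φ = 1. A brute-force sketch over all valid tables (the marginal values 90/10 and 50/50 are an arbitrary example, not from the text):

```python
from math import sqrt

def phi(n11, n10, n01, n00):
    """Phi coefficient; returns 0.0 for a degenerate table."""
    denom = sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

# Fix imbalanced marginals: row totals 90/10, column totals 50/50, n = 100.
# Choosing n11 determines the other three cells from the marginals.
best = max(
    phi(n11, 90 - n11, 50 - n11, n11 - 40)
    for n11 in range(40, 51)  # keeps all four cells non-negative
)
print(round(best, 3))  # 0.333 -- well below the unrestricted maximum of 1
```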
Machine learning
The Matthews correlation coefficient (MCC) is widely used in the fields of bioinformatics and machine learning to evaluate the quality of binary (two-class) classifications. It is named for biochemist Brian W. Matthews, who described the measure in a foundational 1975 paper. An equivalent quantity, the Doolittle Measure of Association or Doolittle Skill Score, was used by M. H. Doolittle in the 1880s to rate the accuracy of meteorologist John Park Finley's tornado predictions and other weather forecasts.

The coefficient accounts for true and false positives and negatives and is generally regarded as a balanced measure that can be used even if the classes are of very different sizes. The MCC is in essence a correlation coefficient between the observed and predicted binary classifications; it returns a value between −1 and +1. A coefficient of +1 represents perfect prediction, 0 a prediction no better than random, and −1 total disagreement between prediction and observation. However, if the MCC is neither −1, 0, nor +1, it is not a reliable indicator of how similar a predictor is to random guessing, because the MCC is dependent on the dataset. The MCC is closely related to the chi-square statistic for a 2×2 contingency table:

:|\text{MCC}| = \sqrt{\frac{\chi^2}{n}}

where n is the total number of observations.

While there is no perfect way of describing the confusion matrix of true and false positives and negatives by a single number, the Matthews correlation coefficient is generally regarded as one of the best such measures. Other measures, such as the proportion of correct predictions (also termed accuracy), are not useful when the two classes are of very different sizes. For example, assigning every object to the larger set achieves a high proportion of correct predictions, but is not generally a useful classification.
The MCC can be calculated directly from the confusion matrix using the formula:

:\text{MCC} = \frac{ \mathit{TP} \times \mathit{TN} - \mathit{FP} \times \mathit{FN} } {\sqrt{ (\mathit{TP} + \mathit{FP}) ( \mathit{TP} + \mathit{FN} ) ( \mathit{TN} + \mathit{FP} ) ( \mathit{TN} + \mathit{FN} ) } }

In this equation, TP is the number of true positives, TN the number of true negatives, FP the number of false positives and FN the number of false negatives. If exactly one of the four sums in the denominator is zero, the denominator can be arbitrarily set to one; this results in a Matthews correlation coefficient of zero, which can be shown to be the correct limiting value. If two or more of the sums are zero (e.g. both the labels and the model predictions are all positive or all negative), the limit does not exist.

The MCC can also be calculated with the formula:

:\text{MCC} = \sqrt{\mathit{PPV} \times \mathit{TPR} \times \mathit{TNR} \times \mathit{NPV}} - \sqrt{\mathit{FDR} \times \mathit{FNR} \times \mathit{FPR} \times \mathit{FOR}}

using the positive predictive value, the true positive rate, the true negative rate, the negative predictive value, the false discovery rate, the false negative rate, the false positive rate, and the false omission rate. The original formula as given by Matthews was:

:N = \mathit{TN} + \mathit{TP} + \mathit{FN} + \mathit{FP}
:S = \frac{\mathit{TP} + \mathit{FN}}{N}
:P = \frac{\mathit{TP} + \mathit{FP}}{N}
:\text{MCC} = \frac{\mathit{TP}/N - S \times P}{\sqrt{PS(1-S)(1-P)}}

Markedness and informedness correspond to different directions of information flow and generalize Youden's J statistic and the \delta p statistics, while their geometric mean generalizes the Matthews correlation coefficient to more than two classes.
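The confusion-matrix formula, including the zero-denominator convention described above, can be sketched as follows (the function name `mcc` is illustrative):

```python
from math import sqrt

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # If any of the four sums is zero, setting the denominator to one
    # yields a coefficient of zero, the correct limiting value when
    # exactly one sum vanishes.
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

print(mcc(45, 45, 5, 5))   # 0.8 -- strong, balanced predictor
print(mcc(90, 0, 10, 0))   # 0.0 -- degenerate: no actual negatives
```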
Example
Given a sample of 12 pictures, 8 of cats and 4 of dogs, where cats belong to class 1 and dogs belong to class 0,

:actual = [1,1,1,1,1,1,1,1,0,0,0,0],

assume that a classifier that distinguishes between cats and dogs is trained, and we take the 12 pictures and run them through the classifier. The classifier makes 9 accurate predictions and misses 3: 2 cats wrongly predicted as dogs (the first 2 predictions) and 1 dog wrongly predicted as a cat (the last prediction):

:prediction = [0,0,1,1,1,1,1,1,0,0,0,1]

With these two labelled sets (actual and prediction) we can create a confusion matrix that summarizes the results of testing the classifier:

                Predicted cat   Predicted dog
 Actual cat           6               2
 Actual dog           1               3

In this confusion matrix, of the 8 cat pictures, the system judged that 2 were dogs, and of the 4 dog pictures, it predicted that 1 was a cat. All correct predictions are located on the diagonal of the table, so it is easy to visually inspect the table for prediction errors, as they will be represented by values outside the diagonal. In abstract terms, the confusion matrix is as follows:

                Predicted P     Predicted N
 Actual P            TP              FN
 Actual N            FP              TN

where P = positive; N = negative; TP = true positive; FP = false positive; TN = true negative; FN = false negative. Plugging these numbers into the formula:

:\text{MCC} = \frac{6 \times 3 - 1 \times 2}{\sqrt{(6 + 1) \times (6 + 2) \times (3 + 1) \times (3 + 2)}} = \frac{16}{\sqrt{1120}} \approx 0.478
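The worked example can be reproduced directly from the two label lists. A sketch that tallies the confusion-matrix cells and applies the formula:

```python
from math import sqrt

actual     = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
prediction = [0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1]

# Tally the four confusion-matrix cells (class 1 = cat = positive).
tp = sum(a == 1 and p == 1 for a, p in zip(actual, prediction))  # 6
tn = sum(a == 0 and p == 0 for a, p in zip(actual, prediction))  # 3
fp = sum(a == 0 and p == 1 for a, p in zip(actual, prediction))  # 1
fn = sum(a == 1 and p == 0 for a, p in zip(actual, prediction))  # 2

mcc = (tp * tn - fp * fn) / sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
print(round(mcc, 3))  # 0.478
```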
Confusion matrix
Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:

                    Predicted positive   Predicted negative
 Actual positive          TP                    FN
 Actual negative          FP                    TN
Multiclass case
The Matthews correlation coefficient has been generalized to the multiclass case. The generalization, called the R_K statistic (for K different classes), was defined in terms of a K\times K confusion matrix C:

:\text{MCC} = \frac{\sum_{k}\sum_{l}\sum_{m} C_{kk}C_{lm} - C_{kl}C_{mk}}{ \sqrt{\sum_{k}\left(\sum_l C_{kl}\right)\left(\sum_{k' | k' \neq k}\sum_{l'} C_{k'l'}\right)} \sqrt{\sum_{k}\left(\sum_l C_{lk}\right)\left(\sum_{k' | k' \neq k}\sum_{l'} C_{l'k'}\right)} }

When there are more than two classes the MCC will no longer range between −1 and +1. Instead, the minimum value will lie between −1 and 0 depending on the true distribution; the maximum value is always +1. The formula can be more easily understood by defining intermediate variables:

• i is the actual value index
• j is the predicted value index
• K is the total number of classes
• t_k = \sum_i C_{ik}, the number of times class k truly occurred
• p_k = \sum_j C_{kj}, the number of times class k was predicted
• c = \sum_{k} C_{kk}, the total number of samples correctly predicted
• s = \sum_i \sum_j C_{ij}, the total number of samples

This allows the formula to be expressed as:

:\text{MCC} = \frac{cs - \vec{t} \cdot \vec{p}}{ \sqrt{s^2 - \vec{p} \cdot \vec{p}} \sqrt{s^2 - \vec{t} \cdot \vec{t}} }

Using the above formula to compute the MCC for the dog-and-cat example discussed above, treating the 2×2 confusion matrix as a multiclass case:

:\text{MCC}=\frac{(6+3)\times 12 \;-\; 5\times 4 \;-\; 7\times 8}{\sqrt{12^2-5^2-7^2}\,\sqrt{12^2-4^2-8^2}}=\frac{32}{\sqrt{4480}}\approx 0.478

An alternative generalization of the Matthews correlation coefficient to more than two classes was given by Powers.