===Scott's Pi===
A similar statistic, called pi, was proposed by Scott (1955). Cohen's kappa and Scott's pi differ in terms of how the expected chance agreement p_e is calculated.
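To make the difference concrete, the sketch below (illustrative helper names, not from this article) computes the chance-agreement term p_e both ways for two raters labelling the same items: Cohen's kappa multiplies each rater's own marginal proportions, while Scott's pi pools the two raters' assignments and squares the joint proportions.

<syntaxhighlight lang="python">
from collections import Counter

def expected_agreement(ratings_a, ratings_b):
    """Chance-agreement term p_e computed two ways for the same pair of raters."""
    n = len(ratings_a)
    counts_a = Counter(ratings_a)
    counts_b = Counter(ratings_b)
    categories = set(counts_a) | set(counts_b)

    # Cohen's kappa: product of each rater's own marginal proportions.
    pe_cohen = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)

    # Scott's pi: square of the pooled proportion, treating both raters
    # as draws from a single underlying distribution.
    pe_scott = sum(((counts_a[c] + counts_b[c]) / (2 * n)) ** 2 for c in categories)
    return pe_cohen, pe_scott

def kappa_and_pi(ratings_a, ratings_b):
    """Return (Cohen's kappa, Scott's pi) for two raters over the same items."""
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    pe_cohen, pe_scott = expected_agreement(ratings_a, ratings_b)
    return (p_o - pe_cohen) / (1 - pe_cohen), (p_o - pe_scott) / (1 - pe_scott)
</syntaxhighlight>

Both statistics use the same observed agreement p_o; only the chance-correction term differs.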
===Fleiss' kappa===
Note that Cohen's kappa measures agreement between two raters only. For a similar measure of agreement (Fleiss' kappa) used when there are more than two raters, see Fleiss (1971). Fleiss' kappa, however, is a multi-rater generalization of Scott's pi statistic, not of Cohen's kappa. Kappa is also used to compare performance in
machine learning, but the directional version known as
Informedness or
Youden's J statistic is argued to be more appropriate for supervised learning.
===Weighted kappa===
The weighted kappa allows disagreements to be weighted differently and is especially useful when codes are ordered. The equation for weighted κ is:

: \kappa = 1 - \frac{\sum_{i=1}^{k} \sum_{j=1}^{k} w_{ij} x_{ij}}{\sum_{i=1}^{k} \sum_{j=1}^{k} w_{ij} m_{ij}}

where k is the number of codes and w_{ij}, x_{ij}, and m_{ij} are elements in the weight, observed, and expected matrices, respectively. When diagonal cells contain weights of 0 and all off-diagonal cells weights of 1, this formula produces the same value of kappa as the calculation given above.
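A minimal sketch of this calculation, assuming NumPy and that the expected matrix m_{ij} is formed from the marginals of the observed matrix in the usual way (the function names are illustrative, not from a particular library):

<syntaxhighlight lang="python">
import numpy as np

def expected_matrix(observed):
    """Chance-expected counts m_ij from the marginals of the observed matrix."""
    observed = np.asarray(observed, dtype=float)
    row_totals = observed.sum(axis=1, keepdims=True)
    col_totals = observed.sum(axis=0, keepdims=True)
    return row_totals * col_totals / observed.sum()

def weighted_kappa(observed, weights, expected=None):
    """Weighted kappa from k x k observed (x_ij) and weight (w_ij) matrices.

    Diagonal weights of 0 with off-diagonal weights of 1 reproduce unweighted
    kappa; a linear weight such as w_ij = |i - j| penalises larger
    disagreements between ordered codes more heavily.
    """
    observed = np.asarray(observed, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if expected is None:
        expected = expected_matrix(observed)
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Example: 3 ordered codes, linear disagreement weights w_ij = |i - j|.
x = np.array([[20,  5,  1],
              [ 4, 15,  6],
              [ 1,  7, 18]])
w = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
print(weighted_kappa(x, w))
</syntaxhighlight>

==See also==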