== Positive predictive value (PPV) ==
The positive predictive value (PPV), or precision, is defined as

:: \text{PPV} = \frac{\text{Number of true positives}}{\text{Number of true positives} + \text{Number of false positives}} = \frac{\text{Number of true positives}}{\text{Number of positive calls}}

where a "true positive" is the event that the test makes a positive prediction and the subject has a positive result under the gold standard, and a "false positive" is the event that the test makes a positive prediction and the subject has a negative result under the gold standard. The ideal value of the PPV, with a perfect test, is 1 (100%), and the worst possible value is zero.

The PPV can also be computed from sensitivity, specificity, and the prevalence of the condition (cf. Bayes' theorem):

:: \text{PPV} = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence} + (1 - \text{specificity}) \times (1 - \text{prevalence})}

The complement of the PPV is the false discovery rate (FDR):

:: \text{FDR} = 1 - \text{PPV} = \frac{\text{Number of false positives}}{\text{Number of true positives} + \text{Number of false positives}} = \frac{\text{Number of false positives}}{\text{Number of positive calls}}
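The two routes to the PPV above can be sketched in a few lines of Python; the function names are illustrative, not part of any standard library:

```python
# Minimal sketch of the PPV and FDR formulas; names are illustrative.

def ppv_from_counts(tp, fp):
    """PPV (precision) from raw counts of true and false positives."""
    return tp / (tp + fp)

def ppv_from_rates(sensitivity, specificity, prevalence):
    """PPV via Bayes' theorem from sensitivity, specificity, and prevalence."""
    num = sensitivity * prevalence
    den = num + (1 - specificity) * (1 - prevalence)
    return num / den

# Example: 90 true positives out of 100 positive calls.
ppv = ppv_from_counts(90, 10)   # 0.9
fdr = 1 - ppv                   # 0.1, the false discovery rate

# The two routes agree: with sensitivity 0.9, specificity 0.9, and
# prevalence 0.5, PPV = 0.45 / (0.45 + 0.05) = 0.9.
assert abs(ppv_from_rates(0.9, 0.9, 0.5) - ppv) < 1e-12
```

The count-based form is what one computes from a confusion matrix; the rate-based form is useful when only sensitivity, specificity, and a prevalence estimate are available.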
== Negative predictive value (NPV) ==
The negative predictive value is defined as

:: \text{NPV} = \frac{\text{Number of true negatives}}{\text{Number of true negatives}+\text{Number of false negatives}} = \frac{\text{Number of true negatives}}{\text{Number of negative calls}}

where a "true negative" is the event that the test makes a negative prediction and the subject has a negative result under the gold standard, and a "false negative" is the event that the test makes a negative prediction and the subject has a positive result under the gold standard. With a perfect test, one which returns no false negatives, the NPV is 1 (100%); with a test which returns no true negatives, the NPV is zero.

The NPV can also be computed from sensitivity, specificity, and prevalence:

:: \text{NPV} = \frac{\text{specificity} \times (1-\text{prevalence})}{\text{specificity} \times (1-\text{prevalence}) + (1-\text{sensitivity}) \times \text{prevalence}}

Equivalently, in terms of the counts of true negatives (TN) and false negatives (FN):

:: \text{NPV} = \frac{TN}{TN + FN}

The complement of the NPV is the false omission rate (FOR):

:: \text{FOR} = 1 - \text{NPV} = \frac{\text{Number of false negatives}}{\text{Number of true negatives}+\text{Number of false negatives}} = \frac{\text{Number of false negatives}}{\text{Number of negative calls}}

Although sometimes used synonymously, a negative predictive value generally refers to what is established by control groups, while a negative post-test probability refers to a probability for an individual. Still, if the individual's pre-test probability of the target condition is the same as the prevalence in the control group used to establish the negative predictive value, then the two are numerically equal.
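The NPV formulas above mirror the PPV ones and can be sketched the same way; the function names are illustrative:

```python
# Minimal sketch of the NPV and FOR formulas; names are illustrative.

def npv_from_counts(tn, fn):
    """NPV from raw counts of true and false negatives."""
    return tn / (tn + fn)

def npv_from_rates(sensitivity, specificity, prevalence):
    """NPV from sensitivity, specificity, and prevalence."""
    num = specificity * (1 - prevalence)
    den = num + (1 - sensitivity) * prevalence
    return num / den

# Example: 1800 true negatives out of 1830 negative calls.
npv = npv_from_counts(1800, 30)    # ~0.984
false_omission_rate = 1 - npv

# Consistency check: a population of 2000 with prevalence 0.5 and a test
# with sensitivity 0.9, specificity 0.9 yields TN = 900, FN = 100.
assert abs(npv_from_rates(0.9, 0.9, 0.5) - npv_from_counts(900, 100)) < 1e-12
```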
== Relationship ==
The following diagram illustrates how the positive predictive value, negative predictive value, sensitivity, and specificity are related. Note that the positive and negative predictive values can only be estimated using data from a cross-sectional study or other population-based study in which valid prevalence estimates may be obtained. In contrast, the sensitivity and specificity can be estimated from case-control studies.
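The dependence on prevalence is the reason predictive values cannot be taken from case-control data: holding sensitivity and specificity fixed, the PPV changes dramatically with prevalence. A minimal numeric sketch (the numbers are illustrative, not from the article's example):

```python
# With sensitivity and specificity held fixed, PPV varies with prevalence,
# which is why a valid prevalence estimate is required to compute it.

def ppv(sensitivity, specificity, prevalence):
    num = sensitivity * prevalence
    return num / (num + (1 - specificity) * (1 - prevalence))

# The same test (sensitivity = specificity = 0.95) in two populations:
high = ppv(0.95, 0.95, 0.50)   # 0.95, when half the population has the condition
low  = ppv(0.95, 0.95, 0.01)   # ~0.16, when the condition is rare
```

For the rare condition, most positive calls are false positives even though the test itself is quite accurate, because the healthy population vastly outnumbers the diseased one.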
== Worked example ==
Suppose the fecal occult blood (FOB) screen test is used in 2,030 people to look for bowel cancer. The small positive predictive value (PPV = 10%) indicates that many of the positive results from this testing procedure are false positives. Thus it will be necessary to follow up any positive result with a more reliable test to obtain a more accurate assessment as to whether cancer is present. Nevertheless, such a test may be useful if it is inexpensive and convenient. The strength of the FOB screen test is instead in its negative predictive value, which, when the test is negative for an individual, gives high confidence that the negative result is true.

== Problems ==