In medical diagnosis, test sensitivity is the ability of a test to correctly identify those with the disease (true positive rate), whereas test specificity is the ability of the test to correctly identify those without the disease (true negative rate). If 100 patients known to have a disease are tested and 43 test positive, then the test has 43% sensitivity. If 100 patients with no disease are tested and 96 return a completely negative result, then the test has 96% specificity. Sensitivity and specificity are prevalence-independent test characteristics, as their values are intrinsic to the test and do not depend on the disease prevalence in the population of interest.
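As a minimal illustration, the worked numbers above can be reproduced in Python (the function names here are ours, introduced only for this sketch):

```python
# Sensitivity and specificity from raw test counts.
# Counts follow the examples in the text: 100 diseased patients of whom
# 43 test positive, and 100 healthy patients of whom 96 test negative.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """True positive rate: fraction of diseased patients the test detects."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    """True negative rate: fraction of healthy patients the test clears."""
    return true_negatives / (true_negatives + false_positives)

print(sensitivity(43, 57))  # 0.43 -> 43% sensitivity
print(specificity(96, 4))   # 0.96 -> 96% specificity
```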
Positive and negative predictive values, but not sensitivity or specificity, are influenced by the prevalence of disease in the population being tested.
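To illustrate this prevalence dependence, the following sketch applies a hypothetical test (90% sensitivity and 96% specificity, assumed figures not taken from the text) to populations with different prevalences:

```python
# Predictive values derived from sensitivity, specificity, and prevalence.
def predictive_values(sens: float, spec: float, prevalence: float):
    tp = sens * prevalence              # diseased, correctly flagged
    fn = (1 - sens) * prevalence        # diseased, missed
    tn = spec * (1 - prevalence)        # healthy, correctly cleared
    fp = (1 - spec) * (1 - prevalence)  # healthy, falsely flagged
    ppv = tp / (tp + fp)  # probability of disease given a positive result
    npv = tn / (tn + fn)  # probability of no disease given a negative result
    return ppv, npv

# The same test looks very different at different prevalences: PPV rises
# with prevalence while NPV falls.
for prev in (0.01, 0.10, 0.50):
    ppv, npv = predictive_values(0.90, 0.96, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.3f}, NPV {npv:.3f}")
```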
== Misconceptions ==
It is often claimed that a highly specific test is effective at ruling in a disease when positive, while a highly sensitive test is deemed effective at ruling out a disease when negative. This has led to the widely used mnemonics SPPIN and SNNOUT, according to which a highly specific test, when positive, rules in disease (SP-P-IN), and a highly sensitive test, when negative, rules out disease (SN-N-OUT). Both rules of thumb are, however, inferentially misleading, as the diagnostic power of any test is determined by the prevalence of the condition being tested for, the test's sensitivity, and its specificity. The SNNOUT mnemonic has some validity when the prevalence of the condition in question is extremely low in the tested sample. The tradeoff between specificity and sensitivity is explored in
ROC analysis as a tradeoff between the true positive rate (TPR, or recall) and the false positive rate (FPR). Giving them equal weight optimizes informedness = specificity + sensitivity − 1 = TPR − FPR, the magnitude of which gives the probability of an informed decision between the two classes (> 0 represents appropriate use of information, 0 represents chance-level performance, < 0 represents perverse use of information).

== Sensitivity index ==
The sensitivity index d′ provides a related measure of detectability. When the signal and noise distributions are Gaussian with means and standard deviations \mu_S and \sigma_S, and \mu_N and \sigma_N, respectively,
d′ is defined as: {{block indent|1=d^\prime = \frac{\mu_S - \mu_N}{\sqrt{\frac{1}{2}\left(\sigma_S^2 + \sigma_N^2\right)}}}} An estimate of
d′ can also be found from measurements of the hit rate and false-alarm rate. It is calculated as:
{{block indent|1=d^\prime = Z(\text{hit rate}) - Z(\text{false-alarm rate}),}}
where the function Z(p), p ∈ [0, 1], is the inverse of the cumulative Gaussian distribution.
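Both ways of computing d′ can be sketched in Python; the standard library's `statistics.NormalDist.inv_cdf` plays the role of Z (the numeric hit and false-alarm rates below are hypothetical examples):

```python
import math
from statistics import NormalDist

def d_prime_gaussian(mu_s: float, mu_n: float,
                     sigma_s: float, sigma_n: float) -> float:
    """d' from Gaussian signal/noise parameters (the definition above)."""
    return (mu_s - mu_n) / math.sqrt(0.5 * (sigma_s**2 + sigma_n**2))

def d_prime_rates(hit_rate: float, false_alarm_rate: float) -> float:
    """Estimate d' from hit and false-alarm rates via Z, the inverse
    cumulative Gaussian distribution."""
    z = NormalDist().inv_cdf  # Z(p) for 0 < p < 1
    return z(hit_rate) - z(false_alarm_rate)

print(d_prime_gaussian(2.0, 0.0, 1.0, 1.0))  # 2.0: means two SDs apart
print(d_prime_rates(0.84, 0.16))             # about 2 for these example rates
```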
d′ is a dimensionless statistic. A higher d′ indicates that the signal can be more readily detected.

== Confusion matrix ==