==MAP testing==

In the case of making a decision between two hypotheses, H1 (absent) and H2 (present), in the event of a particular observation y, a classical approach is to choose H1 when p(H1|y) > p(H2|y) and H2 in the reverse case. In the event that the two a posteriori probabilities are equal, one might default to a single choice (either always choose H1 or always choose H2), or might randomly select either H1 or H2. The a priori probabilities of H1 and H2 can guide this choice, e.g. by always choosing the hypothesis with the higher a priori probability.

When taking this approach, usually what one knows are the conditional probabilities, p(y|H1) and p(y|H2), and the a priori probabilities p(H1) = \pi_1 and p(H2) = \pi_2. In this case, by Bayes' theorem,

p(H1|y) = \frac{p(y|H1) \cdot \pi_1}{p(y)} , \qquad p(H2|y) = \frac{p(y|H2) \cdot \pi_2}{p(y)} ,

where p(y) is the total probability of event y,

p(y) = p(y|H1) \cdot \pi_1 + p(y|H2) \cdot \pi_2 .
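As a concrete illustration of these formulas, here is a minimal numerical sketch in Python; the function name and all probability values are illustrative assumptions, not taken from the text:

```python
# Posteriors for two hypotheses via Bayes' theorem.
# All names and numbers below are illustrative, not from the text.

def posteriors(p_y_h1, p_y_h2, pi1, pi2):
    """Return (p(H1|y), p(H2|y)) from the likelihoods and priors."""
    p_y = p_y_h1 * pi1 + p_y_h2 * pi2   # total probability of y
    return p_y_h1 * pi1 / p_y, p_y_h2 * pi2 / p_y

post1, post2 = posteriors(p_y_h1=0.2, p_y_h2=0.6, pi1=0.7, pi2=0.3)
print(post1, post2)  # the two posteriors always sum to 1
```

Note that p(y) serves only as a normalizer here; it cancels out of the comparison between the two posteriors, which is what makes the likelihood-ratio form of the test below possible.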
H2 is chosen in case

\frac{p(y|H2) \cdot \pi_2}{p(y|H1) \cdot \pi_1 + p(y|H2) \cdot \pi_2} \ge \frac{p(y|H1) \cdot \pi_1}{p(y|H1) \cdot \pi_1 + p(y|H2) \cdot \pi_2} \Rightarrow \frac{p(y|H2)}{p(y|H1)} \ge \frac{\pi_1}{\pi_2}

and H1 otherwise. Often, the ratio \frac{\pi_1}{\pi_2} is called \tau_{MAP}, and \frac{p(y|H2)}{p(y|H1)} is called L(y), the likelihood ratio. Using this terminology, H2 is chosen in case L(y) \ge \tau_{MAP}. This is called MAP testing, where MAP stands for "maximum a posteriori". Taking this approach minimizes the expected number of errors one will make.
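The MAP rule thus reduces to comparing the likelihood ratio against the ratio of priors, which can be sketched as follows; the function name and the numbers are illustrative assumptions, not from the text:

```python
# MAP test as a likelihood-ratio test: decide H2 iff L(y) >= tau_MAP.
# Names and numbers are illustrative assumptions, not from the text.

def map_decide(p_y_h1, p_y_h2, pi1, pi2):
    """Return 'H2' if the likelihood ratio meets the MAP threshold, else 'H1'."""
    L = p_y_h2 / p_y_h1          # likelihood ratio L(y)
    tau_map = pi1 / pi2          # MAP threshold tau_MAP
    return "H2" if L >= tau_map else "H1"

# With equal priors the test reduces to picking the larger likelihood:
print(map_decide(p_y_h1=0.2, p_y_h2=0.6, pi1=0.5, pi2=0.5))  # -> H2
# A strong prior for H1 can override the likelihoods:
print(map_decide(p_y_h1=0.2, p_y_h2=0.6, pi1=0.9, pi2=0.1))  # -> H1
```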
==Bayes criterion==

In some cases, it is far more important to respond appropriately to H1 than it is to respond appropriately to H2. For example, if an alarm goes off, indicating H1 (an incoming bomber is carrying a nuclear weapon), it is much more important to shoot down the bomber if H1 is true than it is to avoid sending a fighter squadron to inspect a false alarm (i.e., H1 false and H2 true), assuming a large supply of fighter squadrons. The Bayes criterion is an approach suitable for such cases.

Here a utility is associated with each of four situations:

• U_{11}: One responds with behavior appropriate to H1 and H1 is true: fighters destroy the bomber, incurring fuel, maintenance, and weapons costs, and risking some being shot down;
• U_{12}: One responds with behavior appropriate to H1 and H2 is true: fighters are sent out, incurring fuel and maintenance costs, and the bomber's location remains unknown;
• U_{21}: One responds with behavior appropriate to H2 and H1 is true: the city is destroyed;
• U_{22}: One responds with behavior appropriate to H2 and H2 is true: fighters stay home, and the bomber's location remains unknown.

As is shown below, what is important are the differences, U_{11} - U_{21} and U_{22} - U_{12}. Similarly, there are four probabilities, P_{11}, P_{12}, etc., one for each of the cases (and these depend on one's decision strategy). The Bayes criterion approach is to maximize the expected utility:

E\{U\} = P_{11} \cdot U_{11} + P_{21} \cdot U_{21} + P_{12} \cdot U_{12} + P_{22} \cdot U_{22}

Writing the joint probabilities as

P_{11} = \pi_1 \cdot \int_{R_1} p(y|H1)\, dy , \qquad P_{12} = \pi_2 \cdot \int_{R_1} p(y|H2)\, dy ,

where \pi_1 and \pi_2 are the a priori probabilities, P(H1) and P(H2), and R_1 is the region of observation events, y, that are responded to as though H1 is true, it follows that P_{21} = \pi_1 - P_{11} and P_{22} = \pi_2 - P_{12}, so that

E\{U\} = \pi_1 \cdot U_{21} + \pi_2 \cdot U_{22} + P_{11} \cdot (U_{11} - U_{21}) - P_{12} \cdot (U_{22} - U_{12})

Since the first two terms do not depend on the decision strategy, one may equivalently maximize the sum

U' = P_{11} \cdot (U_{11} - U_{21}) - P_{12} \cdot (U_{22} - U_{12}) = \int_{R_1} \left \{ \pi_1 \cdot (U_{11} - U_{21}) \cdot p(y|H1) - \pi_2 \cdot (U_{22} - U_{12}) \cdot p(y|H2) \right \} \, dy

U' , and thus E\{U\}, is maximized by extending R_1 over the region where

\pi_1 \cdot (U_{11} - U_{21}) \cdot p(y|H1) - \pi_2 \cdot (U_{22} - U_{12}) \cdot p(y|H2) > 0

This is accomplished by deciding H2 in case

\pi_2 \cdot (U_{22} - U_{12}) \cdot p(y|H2) \ge \pi_1 \cdot (U_{11} - U_{21}) \cdot p(y|H1) \Rightarrow L(y) \equiv \frac{p(y|H2)}{p(y|H1)} \ge \frac{\pi_1 \cdot (U_{11} - U_{21})}{\pi_2 \cdot (U_{22} - U_{12})} \equiv \tau_B

and H1 otherwise, where L(y) is the so-defined likelihood ratio.
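The Bayes test has the same likelihood-ratio form as the MAP test, with the priors and utility differences folded into the threshold \tau_B; a minimal sketch, with illustrative (assumed) utilities for the bomber example, none of the numbers coming from the text:

```python
# Bayes-criterion test: decide H2 iff L(y) >= tau_B, where the threshold
# folds in the priors and the utility differences. All names and numbers
# below are illustrative assumptions, not values from the text.

def bayes_decide(p_y_h1, p_y_h2, pi1, pi2, u11, u12, u21, u22):
    """Return 'H2' if L(y) >= tau_B, else 'H1'."""
    L = p_y_h2 / p_y_h1                                # likelihood ratio L(y)
    tau_b = (pi1 * (u11 - u21)) / (pi2 * (u22 - u12))  # Bayes threshold tau_B
    return "H2" if L >= tau_b else "H1"

# A catastrophic U21 (city destroyed) makes U11 - U21 huge, so tau_B is
# large and the test decides H1 (send the fighters) only sparingly? No --
# note the asymmetry: here L(y) = 3 favors H2, yet tau_B = 990 forces H1,
# i.e. the fighters are sent out even on weak evidence of a bomber:
print(bayes_decide(p_y_h1=0.2, p_y_h2=0.6, pi1=0.5, pi2=0.5,
                   u11=-10.0, u12=-1.0, u21=-1000.0, u22=0.0))  # -> H1
```

With symmetric utilities (U_{11} - U_{21} = U_{22} - U_{12}) the threshold \tau_B collapses to \pi_1 / \pi_2 = \tau_{MAP}, recovering the MAP test as a special case.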