In a
random experiment, the probabilities of all outcomes in the
sample space must sum to 1; that is, some outcome must occur on every trial. For two events to be complements, they must be
collectively exhaustive, together filling the entire sample space. Therefore, the probability of an event's complement must be
unity minus the probability of the event. That is, for an event A,

:P(A^c) = 1 − P(A).

Equivalently, the probabilities of an event and its complement must always sum to 1. This does not, however, mean that
any two events whose probabilities total to 1 are each other's complements; complementary events must also fulfill the condition of
mutual exclusivity.
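Both conditions can be checked by direct enumeration. A minimal sketch in Python, using a fair six-sided die and the illustrative event "the roll is even" (this particular event is an assumption for the sketch, not taken from the text):

```python
from fractions import Fraction

# Sample space for one roll of a fair six-sided die.
sample_space = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Probability of an event (a subset of the sample space),
    assuming all outcomes are equally likely."""
    return Fraction(len(event & sample_space), len(sample_space))

A = {2, 4, 6}                    # the roll is even
A_complement = sample_space - A  # the roll is odd

# The complement rule: P(A^c) = 1 - P(A).
assert prob(A_complement) == 1 - prob(A)

# Complementary events are mutually exclusive ...
assert A & A_complement == set()
# ... and collectively exhaustive.
assert A | A_complement == sample_space

print(prob(A), prob(A_complement))  # 1/2 1/2
```

Note that the mutual-exclusivity check matters: the events {1, 2, 3} and {3, 4, 5, 6} also have probabilities summing to 1, but they overlap at the outcome 3, so they are not complements.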
==Example of the utility of this concept==
Suppose one throws an ordinary six-sided die eight times. What is the probability that one sees a "1" at least once?

It may be tempting to say that

:Pr(["1" on 1st trial] or ["1" on 2nd trial] or ... or ["1" on 8th trial])
:= Pr("1" on 1st trial) + Pr("1" on 2nd trial) + ... + Pr("1" on 8th trial)
:= 1/6 + 1/6 + ... + 1/6
:= 8/6
:= 1.3333...

This result cannot be right because a probability cannot exceed 1. The technique is wrong because the eight events whose probabilities were added are not mutually exclusive. One may resolve this overlap by the
principle of inclusion-exclusion, or, in this case, more simply by finding the probability of the complementary event and subtracting it from 1. Because the eight trials are independent, the probability that no "1" appears factors into a product:

:Pr(at least one "1")
:= 1 − Pr(no "1"s)
:= 1 − Pr([no "1" on 1st trial] and [no "1" on 2nd trial] and ... and [no "1" on 8th trial])
:= 1 − Pr(no "1" on 1st trial) × Pr(no "1" on 2nd trial) × ... × Pr(no "1" on 8th trial)
:= 1 − (5/6) × (5/6) × ... × (5/6)
:= 1 − (5/6)^8
:= 0.7674...

==See also==