Consider an experiment that can produce a number of results. The collection of all possible results is called the sample space of the experiment, sometimes denoted as \Omega. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a die can produce six possible results. One collection of possible results gives an odd number on the die. Thus, the subset {1,3,5} is an element of the power set of the sample space of dice rolls. These collections are called "events". In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, the event is said to have occurred.

A probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) is assigned a value of one. To qualify as a probability, the assignment of values must satisfy the requirement that for any collection of mutually exclusive events (events with no common results, such as the events {1,6}, {3}, and {2,4}), the probability that at least one of the events will occur is given by the sum of the probabilities of all the individual events.

The probability of an event A is written as P(A), p(A), or \text{Pr}(A). This mathematical definition of probability can extend to infinite sample spaces, and even uncountable sample spaces, using the concept of a measure.

The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring), often denoted as A', A^c, \overline{A}, A^\complement, \neg A, or {\sim}A; its probability is given by P(\text{not } A) = 1 - P(A). As an example, the chance of not rolling a six on a six-sided die is 1 - \tfrac{1}{6} = \tfrac{5}{6}. For a more comprehensive treatment, see Complementary event.

If two events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as P(A \cap B).
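These definitions can be illustrated with a minimal Python sketch, treating the die-roll sample space as a set, an event as a subset, and the probability of an event as the fraction of results it contains (the helper prob is a hypothetical name, assuming a fair die):

```python
from fractions import Fraction

# A minimal sketch: the die-roll sample space as a set, an event as a subset.
sample_space = {1, 2, 3, 4, 5, 6}

def prob(event):
    # Probability of an event under a uniform distribution on the sample space.
    return Fraction(len(event & sample_space), len(sample_space))

odd = {1, 3, 5}            # the event "the die falls on an odd number"
print(prob(odd))           # 1/2
print(prob(sample_space))  # the event of all possible results has probability 1
print(1 - prob({6}))       # complement: the chance of not rolling a six is 5/6
```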
===Independent events===
If two events, A and B, are independent then the joint probability is P(A \mbox{ and } B) = P(A \cap B) = P(A)P(B). For example, if two coins are flipped, then the chance of both being heads is \tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{4}.
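As a quick check of this product rule, the following sketch enumerates the four equally likely outcomes of two coin flips and verifies that the joint probability of two heads factors into the marginals:

```python
from fractions import Fraction
from itertools import product

# Enumerate the four equally likely outcomes of two coin flips.
flips = list(product("HT", repeat=2))   # [('H','H'), ('H','T'), ('T','H'), ('T','T')]

p_first_heads  = Fraction(sum(a == "H" for a, b in flips), len(flips))
p_second_heads = Fraction(sum(b == "H" for a, b in flips), len(flips))
p_both_heads   = Fraction(sum(a == b == "H" for a, b in flips), len(flips))

# Independence: the joint probability factors into the marginals.
assert p_both_heads == p_first_heads * p_second_heads   # 1/4 == 1/2 * 1/2
```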
===Mutually exclusive events===
If either event A or event B can occur but never both simultaneously, then they are called mutually exclusive events.

If two events are mutually exclusive, then the probability of both occurring is denoted as P(A \cap B) and P(A \mbox{ and } B) = P(A \cap B) = 0.

If two events are mutually exclusive, then the probability of either occurring is denoted as P(A \cup B) and P(A \mbox{ or } B) = P(A \cup B) = P(A) + P(B) - P(A \cap B) = P(A) + P(B) - 0 = P(A) + P(B).

For example, the chance of rolling a 1 or 2 on a six-sided die is P(1 \mbox{ or } 2) = P(1) + P(2) = \tfrac{1}{6} + \tfrac{1}{6} = \tfrac{1}{3}.
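A short sketch of the die example, again assuming a fair die, confirms that for the mutually exclusive events {1} and {2} the addition rule reduces to P(A) + P(B):

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}
A, B = {1}, {2}                      # mutually exclusive: never both at once

def prob(event):
    return Fraction(len(event), len(sample_space))

assert prob(A & B) == 0                    # P(A and B) = 0
assert prob(A | B) == prob(A) + prob(B)    # P(A or B) = 1/6 + 1/6 = 1/3
```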
===Not (necessarily) mutually exclusive events===
If the events are not (necessarily) mutually exclusive then

P\left(A \hbox{ or } B\right) = P(A \cup B) = P\left(A\right) + P\left(B\right) - P\left(A \mbox{ and } B\right).

Rewritten,

P\left( A\cup B\right) = P\left( A\right) + P\left( B\right) - P\left( A\cap B\right)

For example, when drawing a card from a deck of cards, the chance of getting a heart or a face card (J, Q, K) (or both) is \tfrac{13}{52} + \tfrac{12}{52} - \tfrac{3}{52} = \tfrac{11}{26}, since among the 52 cards of a deck, 13 are hearts, 12 are face cards, and 3 are both; here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards", but should only be counted once.

This can be expanded further for multiple not (necessarily) mutually exclusive events. For three events, this proceeds as follows:

\begin{aligned}P\left( A\cup B\cup C\right) =&P\left( \left( A\cup B\right) \cup C\right) \\ =&P\left( A\cup B\right) +P\left( C\right) -P\left( \left( A\cup B\right) \cap C\right) \\ =&P\left( A\right) +P\left( B\right) -P\left( A\cap B\right) +P\left( C\right) -P\left( \left( A\cap C\right) \cup \left( B\cap C\right) \right) \\ =&P\left( A\right) +P\left( B\right) +P\left( C\right) -P\left( A\cap B\right) -\left( P\left( A\cap C\right) +P\left( B\cap C\right) -P\left( \left( A\cap C\right) \cap \left( B\cap C\right) \right) \right) \\ =&P\left( A\right) +P\left( B\right) +P\left( C\right) -P\left( A\cap B\right) -P\left( A\cap C\right) -P\left( B\cap C\right) +P\left( A\cap B\cap C\right) \end{aligned}

noting that \left( A\cap C\right) \cap \left( B\cap C\right) = A\cap B\cap C. It can be seen, then, that this pattern can be repeated for any number of events.
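The card example above can be checked by enumeration; in the following sketch the rank and suit labels are illustrative choices used to build a 52-card deck and count hearts, face cards, and their overlap:

```python
from fractions import Fraction
from itertools import product

# Build a 52-card deck from illustrative rank and suit labels.
ranks = [str(n) for n in range(2, 11)] + ["J", "Q", "K", "A"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = set(product(ranks, suits))                       # 52 cards

hearts = {c for c in deck if c[1] == "hearts"}          # 13 cards
faces  = {c for c in deck if c[0] in {"J", "Q", "K"}}   # 12 cards (3 are hearts)

def prob(event):
    return Fraction(len(event), len(deck))

# Inclusion-exclusion: the 3 heart face cards are counted only once.
assert prob(hearts | faces) == prob(hearts) + prob(faces) - prob(hearts & faces)
print(prob(hearts | faces))                             # 11/26
```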
===Conditional probability===
Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written P(A \mid B), and is read "the probability of A, given B". It is defined by

P(A \mid B) = \frac{P(A \cap B)}{P(B)}

If P(B) = 0 then P(A \mid B) is formally undefined by this expression. In this case A and B are independent, since P(A \cap B) = P(A)P(B) = 0. However, it is possible to define a conditional probability for some zero-probability events, for example by using a σ-algebra of such events (such as those arising from a continuous random variable).

For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is 1/2; however, when taking a second ball, the probability of its being either a red ball or a blue ball depends on the ball previously taken. If a red ball was taken first, then the probability of picking a red ball again would be 1/3, since only 1 red and 2 blue balls would have remained. And if a blue ball was taken first, the probability of taking a red ball would be 2/3.
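The ball-drawing example can likewise be verified by enumerating ordered draws; in this sketch the ball labels r1, r2, b1, b2 are hypothetical stand-ins for the two red and two blue balls:

```python
from fractions import Fraction
from itertools import permutations

# Hypothetical labels for the two red and two blue balls.
balls = ["r1", "r2", "b1", "b2"]
draws = list(permutations(balls, 2))     # all 12 ordered two-ball draws

first_red  = {d for d in draws if d[0].startswith("r")}
second_red = {d for d in draws if d[1].startswith("r")}

p_first_red = Fraction(len(first_red), len(draws))
p_both_red  = Fraction(len(first_red & second_red), len(draws))

# P(second red | first red) = P(both red) / P(first red)
print(p_both_red / p_first_red)          # 1/3, as in the text
```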
===Inverse probability===
In probability theory and applications, Bayes' rule relates the odds of event A_1 to event A_2, before (prior to) and after (posterior to) conditioning on another event B. The odds on A_1 to A_2 are simply the ratio of the probabilities of the two events. When arbitrarily many events A are of interest, not just two, the rule can be rephrased as "posterior is proportional to prior times likelihood", P(A|B) \propto P(A) P(B|A), where the proportionality symbol means that the left-hand side is proportional to (i.e., equals a constant times) the right-hand side as A varies, for fixed or given B (Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace (1774) and to Cournot (1843); see Fienberg (2005).
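As a worked illustration of "posterior is proportional to prior times likelihood", the following sketch uses hypothetical numbers: two hypotheses A1 and A2 with equal priors and assumed likelihoods P(B|A_1) = 3/4 and P(B|A_2) = 1/4, with the unnormalised products rescaled so the posteriors sum to one:

```python
from fractions import Fraction

# Hypothetical numbers: equal priors and assumed likelihoods P(B | A_i).
prior      = {"A1": Fraction(1, 2), "A2": Fraction(1, 2)}
likelihood = {"A1": Fraction(3, 4), "A2": Fraction(1, 4)}

# Posterior is proportional to prior times likelihood ...
unnormalised = {h: prior[h] * likelihood[h] for h in prior}

# ... and the proportionality constant rescales the posteriors to sum to one.
total = sum(unnormalised.values())
posterior = {h: u / total for h, u in unnormalised.items()}

print(posterior)   # {'A1': Fraction(3, 4), 'A2': Fraction(1, 4)}
```

Equivalently, the posterior odds on A1 to A2 are the prior odds (1:1) times the likelihood ratio (3:1), giving 3:1.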
===Summary of probabilities===

Event       Probability
A           P(A) \in [0, 1]
not A       P(A^\complement) = 1 - P(A)
A or B      P(A \cup B) = P(A) + P(B) - P(A \cap B); = P(A) + P(B) if A and B are mutually exclusive
A and B     P(A \cap B) = P(A \mid B) P(B) = P(B \mid A) P(A); = P(A) P(B) if A and B are independent
A given B   P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{P(B \mid A) P(A)}{P(B)}

==Relation to randomness and probability in quantum mechanics==