Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $\mathcal{G} \subseteq \mathcal{F}$ be a sub-$\sigma$-field of $\mathcal{F}$. Given $A \in \mathcal{F}$, the Radon–Nikodym theorem implies that there is a $\mathcal{G}$-measurable random variable $P(A\mid\mathcal{G}) : \Omega \to \mathbb{R}$, called the conditional probability, such that
$$\int_G P(A\mid\mathcal{G})(\omega) \, dP(\omega) = P(A\cap G)$$
for every $G \in \mathcal{G}$; such a random variable is uniquely determined up to sets of probability zero. A conditional probability is called
regular if $P(\cdot\mid\mathcal{G})(\omega)$ is a probability measure on $(\Omega, \mathcal{F})$ for almost every $\omega \in \Omega$.

Special cases:
• For the trivial $\sigma$-field $\mathcal{G} = \{\emptyset, \Omega\}$, the conditional probability is the constant function $P(A \mid \{\emptyset, \Omega\}) = P(A)$.
• If $A \in \mathcal{G}$, then $P(A\mid\mathcal{G}) = \mathbf{1}_A$ almost surely, where $\mathbf{1}_A$ is the indicator function of $A$ (defined below).

Let $X : \Omega \to E$ be an $(E, \mathcal{E})$-valued random variable. For each $B \in \mathcal{E}$, define
$$\mu_{X \mid \mathcal{G}}(B \mid \mathcal{G}) = P(X^{-1}(B) \mid \mathcal{G}).$$
For any $\omega \in \Omega$, the function $\mu_{X \mid \mathcal{G}}(\cdot \mid \mathcal{G})(\omega) : \mathcal{E} \to \mathbb{R}$ is called the
conditional probability distribution of $X$ given $\mathcal{G}$. If it is a probability measure on $(E, \mathcal{E})$, then it is called regular.

For a real-valued random variable (with respect to the Borel $\sigma$-field $\mathcal{R}^1$ on $\mathbb{R}$), every conditional probability distribution is regular. In this case,
$$\operatorname{E}[X \mid \mathcal{G}] = \int_{-\infty}^\infty x \, \mu_{X \mid \mathcal{G}}(dx, \cdot)$$
almost surely.
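On a finite probability space these definitions can be checked by direct computation. The following sketch (the outcomes, measure, partition, and random variable are all invented for illustration) takes $\mathcal{G}$ to be generated by a finite partition, builds $\mu_{X \mid \mathcal{G}}$ from the definition above, and verifies the identity $\operatorname{E}[X \mid \mathcal{G}] = \int x \, \mu_{X \mid \mathcal{G}}(dx, \cdot)$ pointwise:

```python
# Illustrative finite sketch: G is the sigma-field generated by a partition,
# and mu_{X|G}(B)(w) = P(X^{-1}(B) | G)(w) is computed per partition block.
p = {w: 1/6 for w in range(6)}                 # uniform measure on 6 outcomes
partition = [{0, 1, 2}, {3, 4, 5}]             # generators of G
X = {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0, 4: 3.0, 5: 3.0}  # a random variable

def prob(S):
    return sum(p[w] for w in S)

def block_of(w):
    return next(b for b in partition if w in b)

def mu(x_value, w):
    # Conditional probability that X takes the value x_value, given G, at w.
    preimage = {v for v in p if X[v] == x_value}
    b = block_of(w)
    return prob(preimage & b) / prob(b)

def cond_exp_direct(w):
    # E[X | G](w): average of X over the block containing w.
    b = block_of(w)
    return sum(X[v] * p[v] for v in b) / prob(b)

def cond_exp_via_mu(w):
    # The same quantity obtained by integrating x against mu_{X|G}(dx, w).
    return sum(x * mu(x, w) for x in set(X.values()))

for w in p:
    assert abs(cond_exp_direct(w) - cond_exp_via_mu(w)) < 1e-12
```

Here the conditional distribution at $\omega$ depends only on which partition block contains $\omega$, which is exactly the sense in which it is $\mathcal{G}$-measurable.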
==Relation to conditional expectation==

For any event $A \in \mathcal{F}$, define the
indicator function
$$\mathbf{1}_A(\omega) = \begin{cases} 1 & \text{if } \omega \in A, \\ 0 & \text{if } \omega \notin A, \end{cases}$$
which is a random variable. Note that the expectation of this random variable is equal to the probability of $A$ itself:
$$\operatorname{E}(\mathbf{1}_A) = \operatorname{P}(A).$$
Given a $\sigma$-field $\mathcal{G} \subseteq \mathcal{F}$, the conditional probability $\operatorname{P}(A\mid\mathcal{G})$ is a version of the conditional expectation of the indicator function of $A$:
$$\operatorname{P}(A\mid\mathcal{G}) = \operatorname{E}(\mathbf{1}_A\mid\mathcal{G}).$$
The expectation of a random variable with respect to a regular conditional probability equals its conditional expectation.
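Both identities — $\operatorname{P}(A\mid\mathcal{G}) = \operatorname{E}(\mathbf{1}_A\mid\mathcal{G})$ and the defining integral property $\int_G \operatorname{P}(A\mid\mathcal{G}) \, dP = \operatorname{P}(A \cap G)$ — can be verified concretely when $\mathcal{G}$ is generated by a finite partition. The sketch below (all numbers are invented for illustration) uses the standard fact that a version of $\operatorname{P}(A\mid\mathcal{G})$ is then constant on each block, equal to $\operatorname{P}(A \cap \text{block})/\operatorname{P}(\text{block})$:

```python
from itertools import combinations

# Illustrative finite sketch: P(A|G) as a block-constant function, compared
# with E[1_A|G], and checked against the defining integral property.
p = {w: 1/6 for w in range(6)}            # uniform measure on 6 outcomes
partition = [{0, 1}, {2, 3}, {4, 5}]      # generators of G
A = {1, 2, 3}                             # an event in F

def prob(S):
    return sum(p[w] for w in S)

def block_of(w):
    return next(b for b in partition if w in b)

def cond_prob_A(w):
    # A version of P(A | G): P(A ∩ block)/P(block) on the block containing w.
    b = block_of(w)
    return prob(A & b) / prob(b)

def cond_exp_indicator(w):
    # E[1_A | G](w): average of the indicator of A over the block of w.
    b = block_of(w)
    return sum((1.0 if v in A else 0.0) * p[v] for v in b) / prob(b)

# E(1_A) = P(A), and P(A|G) = E(1_A|G) at every outcome here.
assert abs(sum((1.0 if w in A else 0.0) * p[w] for w in p) - prob(A)) < 1e-12
for w in p:
    assert abs(cond_prob_A(w) - cond_exp_indicator(w)) < 1e-12

# Defining property: for every G in sigma(partition) (a union of blocks),
# the integral of P(A|G) over G equals P(A ∩ G).
for r in range(len(partition) + 1):
    for blocks in combinations(partition, r):
        G = set().union(*blocks)
        lhs = sum(cond_prob_A(w) * p[w] for w in G)
        assert abs(lhs - prob(A & G)) < 1e-12
```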
==Interpretation of conditioning on a σ-field==

Consider the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and a sub-$\sigma$-field $\mathcal{A} \subset \mathcal{F}$. The sub-$\sigma$-field $\mathcal{A}$ can be loosely interpreted as containing a subset of the information in $\mathcal{F}$. For example, we might think of $\mathbb{P}(B \mid \mathcal{A})$ as the probability of the event $B$ given the information in $\mathcal{A}$. Recall also that an event $B$ is independent of a sub-$\sigma$-field $\mathcal{A}$ if $\mathbb{P}(B \mid A) = \mathbb{P}(B)$ for every $A \in \mathcal{A}$ with $\mathbb{P}(A) > 0$. However, it would be incorrect to conclude that such independence means the information in $\mathcal{A}$ tells us nothing about whether $B$ occurs, as the following counterexample shows.

Consider a probability space on the unit interval, $\Omega = [0, 1]$, equipped with Lebesgue measure on the Borel $\sigma$-field $\mathcal{F}$. Let $\mathcal{G}$ be the $\sigma$-field of all countable sets and all sets whose complement is countable. Each set in $\mathcal{G}$ then has measure 0 or 1, and is therefore independent of every event in $\mathcal{F}$. However, $\mathcal{G}$ also contains all the singleton events of $\mathcal{F}$ (the sets containing only a single $\omega \in \Omega$), so knowing which events in $\mathcal{G}$ occurred is equivalent to knowing exactly which $\omega \in \Omega$ occurred. Thus in one sense $\mathcal{G}$ contains no information about $\mathcal{F}$ (it is independent of it), while in another sense it contains all the information in $\mathcal{F}$.

==See also==