
Conditional expectation

In probability theory, the conditional expectation, conditional expected value, or conditional mean of a random variable is its expected value evaluated with respect to the conditional probability distribution. If the random variable can take on only a finite number of values, the "conditions" are that the variable can only take on a subset of those values. More formally, in the case when the random variable is defined over a discrete probability space, the "conditions" are a partition of this probability space.

Examples
Example 1: Dice rolling

Consider the roll of a fair die and let A = 1 if the number is even (i.e., 2, 4, or 6) and A = 0 otherwise. Furthermore, let B = 1 if the number is prime (i.e., 2, 3, or 5) and B = 0 otherwise. The unconditional expectation of A is E[A] = (0+1+0+1+0+1)/6 = 1/2, but the expectation of A conditional on B = 1 (i.e., conditional on the die roll being 2, 3, or 5) is E[A\mid B=1]=(1+0+0)/3=1/3, and the expectation of A conditional on B = 0 (i.e., conditional on the die roll being 1, 4, or 6) is E[A\mid B=0]=(0+1+1)/3=2/3. Likewise, the expectation of B conditional on A = 1 is E[B\mid A=1]= (1+0+0)/3=1/3, and the expectation of B conditional on A = 0 is E[B\mid A=0]=(0+1+1)/3=2/3.

Example 2: Rainfall data

Suppose we have daily rainfall data (mm of rain each day) collected by a weather station on every day of the ten-year (3652-day) period from January 1, 1990, to December 31, 1999. The unconditional expectation of rainfall for an unspecified day is the average of the rainfall amounts for those 3652 days. The conditional expectation of rainfall for an otherwise unspecified day known to be (conditional on being) in the month of March is the average of daily rainfall over all 310 days of the ten-year period that fall in March. Similarly, the conditional expectation of rainfall conditional on days dated March 2 is the average of the rainfall amounts that occurred on the ten days with that specific date.
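The dice calculations can be reproduced mechanically by averaging over the favourable outcomes. Below is a minimal sketch in Python (my own illustration, not part of the original text); because the die is fair, conditioning on an event reduces to averaging over the outcomes in that event.

```python
# Re-derive the dice-example values by enumerating the six equally likely outcomes.
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]
A = {w: int(w % 2 == 0) for w in outcomes}      # A = 1 if the roll is even
B = {w: int(w in (2, 3, 5)) for w in outcomes}  # B = 1 if the roll is prime

def cond_exp(f, event):
    """E[f | event] for a fair die: the average of f over the outcomes in the event."""
    return Fraction(sum(f[w] for w in event), len(event))

print(cond_exp(A, outcomes))                           # E[A]         = 1/2
print(cond_exp(A, [w for w in outcomes if B[w]]))      # E[A | B = 1] = 1/3
print(cond_exp(A, [w for w in outcomes if not B[w]]))  # E[A | B = 0] = 2/3
print(cond_exp(B, [w for w in outcomes if A[w]]))      # E[B | A = 1] = 1/3
print(cond_exp(B, [w for w in outcomes if not A[w]]))  # E[B | A = 0] = 2/3
```

Running it prints 1/2, 1/3, 2/3, 1/3, and 2/3, matching the values derived above.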
History
The related concept of conditional probability dates back at least to Laplace, who calculated conditional distributions. It was Andrey Kolmogorov who, in 1933, formalized it using the Radon–Nikodym theorem.
Definitions
Conditioning on an event

If A is an event in \mathcal{F} with nonzero probability, and X is a discrete random variable, the conditional expectation of X given A is

: \begin{aligned} \operatorname{E} (X \mid A) &= \sum_x x P(X = x \mid A) \\ & =\sum_x x \frac{P(\{X = x\} \cap A)}{P(A)} \end{aligned}

where the sum is taken over all possible outcomes of X. If P(A) = 0, the conditional expectation is undefined because of the division by zero.

Discrete random variables

If X and Y are discrete random variables, the conditional expectation of X given Y = y is

: \begin{aligned} \operatorname{E} (X \mid Y=y) &= \sum_x x P(X = x \mid Y = y) \\ &= \sum_x x \frac{P(X = x, Y = y)}{P(Y=y)} \end{aligned}

where P(X = x, Y = y) is the joint probability mass function of X and Y. The sum is taken over all possible outcomes of X. As above, the expression is undefined if P(Y=y) = 0.

Conditioning on a discrete random variable is the same as conditioning on the corresponding event:

: \operatorname{E} (X \mid Y=y) = \operatorname{E} (X \mid A)

where A is the set \{ Y = y \}.

Continuous random variables

Let X and Y be continuous random variables with joint density f_{X,Y}(x,y), marginal density f_{Y}(y) of Y, and conditional density \textstyle f_{X\mid Y}(x\mid y) = \frac{ f_{X,Y}(x,y) }{f_{Y}(y)} of X given the event Y=y. The conditional expectation of X given Y=y is

: \begin{aligned} \operatorname{E} (X \mid Y=y) &= \int_{-\infty}^\infty x f_{X\mid Y}(x\mid y) \, \mathrm{d}x \\ &= \frac{1}{f_Y(y)}\int_{-\infty}^\infty x f_{X,Y}(x,y) \, \mathrm{d}x. \end{aligned}

When the denominator is zero, the expression is undefined.

Conditioning on a continuous random variable is not the same as conditioning on the event \{ Y = y \} as it was in the discrete case. For a discussion, see Conditioning on an event of probability zero. Not respecting this distinction can lead to contradictory conclusions, as illustrated by the Borel–Kolmogorov paradox.

L^2 random variables

All random variables in this section are assumed to be in L^2, that is, square integrable. In its full generality, conditional expectation is developed without this assumption; see below under Conditional expectation with respect to a sub-σ-algebra. The L^2 theory is, however, considered more intuitive and admits important generalizations. In the context of L^2 random variables, conditional expectation is also called regression.

In what follows let (\Omega, \mathcal{F}, P) be a probability space, and X: \Omega \to \mathbb{R} in L^2 with mean \mu_X and variance \sigma_X^2. The expectation \mu_X minimizes the mean squared error:

: \min_{x \in \mathbb{R}} \operatorname{E}\left((X - x)^2\right) = \operatorname{E}\left((X - \mu_X)^2\right) = \sigma_X^2.

The conditional expectation of X is defined analogously, except that instead of a single number \mu_X, the result is a function e_X(y). Let Y: \Omega \to \mathbb{R}^n be a random vector. The conditional expectation e_X: \mathbb{R}^n \to \mathbb{R} is a measurable function such that

: \min_{g \text{ measurable}} \operatorname{E}\left((X - g(Y))^2\right) = \operatorname{E}\left((X - e_X(Y))^2\right).
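For instance, this minimization property can be illustrated with a small Monte Carlo sketch (my own example with made-up distributions, not from the text): if X = Y^2 + ε with ε independent of Y and mean zero, then e_X(y) = y^2, and its mean squared error is smaller than that of any other predictor g(Y).

```python
# Sketch: the conditional mean e_X(y) = y**2 minimizes E[(X - g(Y))**2].
# Assumptions (my own): Y ~ Uniform(0, 1), X = Y**2 + Normal(0, 0.1) noise.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
Y = rng.uniform(0.0, 1.0, n)
X = Y**2 + rng.normal(0.0, 0.1, n)   # noise independent of Y, mean zero

candidates = {
    "g(y) = y**2 (the conditional mean)": Y**2,
    "g(y) = y    (another function of Y)": Y,
    "g(y) = E[X] (the best constant)":    np.full(n, X.mean()),
}
for name, pred in candidates.items():
    print(f"{name}: MSE ~ {np.mean((X - pred) ** 2):.4f}")
# The first MSE is ~0.01 (the noise variance); the other two are strictly larger.
```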
Note that unlike \mu_X, the conditional expectation e_X is not generally unique: there may be multiple minimizers of the mean squared error.

Uniqueness

Example 1: Consider the case where Y is the constant random variable that is always 1. Then the mean squared error is minimized by any function of the form

: e_X(y) = \begin{cases} \mu_X & \text{if } y = 1, \\ \text{any number} & \text{otherwise.} \end{cases}

Example 2: Consider the case where Y is the 2-dimensional random vector (X, 2X). Then clearly

: \operatorname{E}(X \mid Y) = X

but in terms of functions it can be expressed as e_X(y_1, y_2) = 3y_1-y_2 or e'_X(y_1, y_2) = y_2 - y_1 or in infinitely many other ways. In the context of linear regression, this lack of uniqueness is called multicollinearity.

Conditional expectation is unique up to a set of measure zero in \mathbb{R}^n. The measure used is the pushforward measure induced by Y. In the first example, the pushforward measure is a Dirac distribution at 1. In the second it is concentrated on the "diagonal" \{ y : y_2 = 2 y_1 \}, so that any set not intersecting it has measure 0.

Existence

The existence of a minimizer for \min_g \operatorname{E}\left((X - g(Y))^2\right) is non-trivial. It can be shown that

: M := \{ g(Y) : g \text{ is measurable and } \operatorname{E}(g(Y)^2) < \infty \}

is a closed subspace of the Hilbert space L^2(\Omega). By the Hilbert projection theorem, the necessary and sufficient condition for e_X to be a minimizer is that for all f(Y) in M we have

: \langle X - e_X(Y), f(Y) \rangle = 0.

In words, this equation says that the residual X - e_X(Y) is orthogonal to the space M of all functions of Y. This orthogonality condition, applied to the indicator functions f(Y) = 1_{Y \in H}, is used below to extend conditional expectation to the case that X and Y are not necessarily in L^2.

Connections to regression

The conditional expectation is often approximated in applied mathematics and statistics because of the difficulties in calculating it analytically, and for interpolation. The Hilbert subspace

: M = \{ g(Y) : \operatorname{E}(g(Y)^2) < \infty \}

defined above is replaced with subsets thereof by restricting the functional form of g, rather than allowing any measurable function. Examples of this are decision tree regression, where g is required to be a simple function, and linear regression, where g is required to be affine.

These generalizations of conditional expectation come at the cost of many of its properties no longer holding. For example, let M be the space of all linear functions of Y and let \mathcal{E}_{M} denote this generalized conditional expectation/L^2 projection. If M does not contain the constant functions, the tower property \operatorname{E}(\mathcal{E}_M(X)) = \operatorname{E}(X) will not hold.

An important special case is when X and Y are jointly normally distributed. In this case it can be shown that the conditional expectation is equivalent to linear regression:

: e_X(Y) = \alpha_0 + \sum_i \alpha_i Y_i

for coefficients \{\alpha_i\}_{i = 0..n} described in Multivariate normal distribution#Conditional distributions.
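As an informal numerical check of this special case (my own sketch; the parameters are arbitrary), the code below simulates a jointly normal pair, computes the regression coefficients from sample moments, and compares the resulting line with empirical conditional means on narrow Y-bins.

```python
# Sketch: for jointly normal (X, Y), E[X | Y] is the regression line
# mu_X + (Cov(X, Y) / Var(Y)) * (Y - mu_Y).  Parameters below are made up.
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
Y = rng.normal(2.0, 1.0, n)
X = 1.0 + 0.7 * Y + rng.normal(0.0, 0.5, n)        # (X, Y) jointly normal

alpha1 = np.cov(X, Y, ddof=0)[0, 1] / np.var(Y)    # slope, ~0.7
alpha0 = X.mean() - alpha1 * Y.mean()              # intercept, ~1.0
print("regression line:", alpha0, alpha1)

# Compare the line with empirical conditional means on a few Y-bins.
for lo, hi in [(0.5, 1.0), (1.5, 2.0), (2.5, 3.0)]:
    mask = (Y >= lo) & (Y < hi)
    y_mid = Y[mask].mean()
    print(f"E[X | Y ~ {y_mid:.2f}]: empirical {X[mask].mean():.3f}, "
          f"linear {alpha0 + alpha1 * y_mid:.3f}")
```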
Conditional expectation with respect to a sub-σ-algebra

[Figure: Conditional expectation with respect to a σ-algebra. In this example the probability space (\Omega, \mathcal{F}, P) is the [0,1] interval with the Lebesgue measure. We define the following σ-algebras: \mathcal{A} = \mathcal{F}; \mathcal{B} is the σ-algebra generated by the intervals with end-points 0, 1/4, 1/2, 3/4, 1; and \mathcal{C} is the σ-algebra generated by the intervals with end-points 0, 1/2, 1. Here the conditional expectation is effectively the average over the minimal sets of the σ-algebra.]

Consider the following:

• (\Omega, \mathcal{F}, P) is a probability space.
• X\colon\Omega \to \mathbb{R}^n is a random variable on that probability space with finite expectation.
• \mathcal{H} \subseteq \mathcal{F} is a sub-σ-algebra of \mathcal{F}.

Since \mathcal{H} is a sub-σ-algebra of \mathcal{F}, the function X\colon\Omega \to \mathbb{R}^n is usually not \mathcal{H}-measurable; thus the existence of integrals of the form \int_H X \,dP|_\mathcal{H}, where H\in\mathcal{H} and P|_\mathcal{H} is the restriction of P to \mathcal{H}, cannot be stated in general. However, the local averages \int_H X\,dP can be recovered in (\Omega, \mathcal{H}, P|_\mathcal{H}) with the help of the conditional expectation.

A conditional expectation of X given \mathcal{H}, denoted \operatorname{E}(X\mid\mathcal{H}), is any \mathcal{H}-measurable function \Omega \to \mathbb{R}^n which satisfies

: \int_H\operatorname{E}(X \mid \mathcal{H})\,\mathrm{d}P = \int_H X \,\mathrm{d}P

for each H \in \mathcal{H}.

The law of the unconscious statistician is then

: \operatorname{E}[f(X)\mid\mathcal{H}] = \int f(x) \, \kappa_\mathcal{H}(-, \mathrm{d}x),

where \kappa_\mathcal{H} is a regular conditional distribution of X given \mathcal{H}. This shows that conditional expectations are, like their unconditional counterparts, integrations against a conditional measure.

General definition

In full generality, consider:

• A probability space (\Omega,\mathcal{A},P).
• A Banach space (E,\|\cdot\|_E).
• A Bochner integrable random variable X:\Omega\to E.
• A sub-σ-algebra \mathcal{H}\subseteq \mathcal{A}.

The conditional expectation of X given \mathcal{H} is the (up to a P-null set unique) integrable, E-valued, \mathcal{H}-measurable random variable \operatorname{E}(X \mid \mathcal{H}) satisfying

: \int_H \operatorname{E}(X \mid \mathcal{H}) \,\mathrm{d}P = \int_H X \,\mathrm{d}P

for all H \in \mathcal{H}. In this setting the conditional expectation is sometimes also denoted in operator notation as \operatorname{E}^\mathcal{H}X.
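On a finite probability space the defining property can be checked exhaustively. The sketch below assumes a 12-point uniform sample space and a sub-σ-algebra generated by a three-block partition (both choices are my own, for illustration); as in the figure above, the conditional expectation is then the average over the minimal sets of the σ-algebra, and the defining identity holds for every set in it.

```python
# Sketch: verify int_H E(X|H) dP = int_H X dP on a finite probability space.
# Assumptions (mine): Omega = {0, ..., 11} with the uniform measure, and the
# sub-sigma-algebra is generated by the partition {0..3}, {4..7}, {8..11}.
import itertools
import numpy as np

omega = np.arange(12)
P = np.full(12, 1 / 12)                          # uniform probability measure
X = omega.astype(float) ** 2                     # an example random variable
blocks = [omega[0:4], omega[4:8], omega[8:12]]   # partition generating H

# E(X | H) is constant on each block, equal to the conditional average there.
cond_exp = np.empty(12)
for b in blocks:
    cond_exp[b] = np.sum(X[b] * P[b]) / np.sum(P[b])

# Every H in the generated sigma-algebra is a union of blocks; check them all.
for r in range(len(blocks) + 1):
    for combo in itertools.combinations(blocks, r):
        H = np.concatenate(combo) if combo else np.array([], dtype=int)
        lhs = np.sum(cond_exp[H] * P[H])         # int_H E(X|H) dP
        rhs = np.sum(X[H] * P[H])                # int_H X dP
        assert np.isclose(lhs, rhs)
print("defining property verified on all", 2 ** len(blocks), "sets of the sigma-algebra")
```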
Basic properties
All the following formulas are to be understood in an almost sure sense.

• Pulling out independent factors:
  • If X is independent of \mathcal{H}, then E(X\mid\mathcal{H}) = E(X).
    Proof: let B \in \mathcal{H}. Then X is independent of 1_B, so
    : \int_B X\,dP = E(X1_B) = E(X)E(1_B) = E(X)P(B) = \int_B E(X)\,dP.
    Thus the definition of conditional expectation is satisfied by the constant random variable E(X), as desired. \square
  • If X is independent of \sigma(Y, \mathcal{H}), then E(XY\mid \mathcal{H}) = E(X) \, E(Y\mid\mathcal{H}). Note that this is not necessarily the case if X is only independent of \mathcal{H} and of Y.
  • If X,Y are independent, \mathcal{G},\mathcal{H} are independent, X is independent of \mathcal{H} and Y is independent of \mathcal{G}, then E(E(XY\mid\mathcal{G})\mid\mathcal{H}) = E(X) E(Y) = E(E(XY\mid\mathcal{H})\mid\mathcal{G}).
• Stability:
  • If X is \mathcal{H}-measurable, then E(X\mid\mathcal{H}) = X.
    Proof: for each H\in \mathcal{H} we have \int_H E(X\mid\mathcal{H}) \, dP = \int_H X \, dP, or equivalently
    : \int_H \big( E(X\mid\mathcal{H}) - X \big) \, dP = 0.
    Since this is true for each H \in \mathcal{H}, and both E(X\mid\mathcal{H}) and X are \mathcal{H}-measurable (the former by definition; the latter is the key assumption here), one can show that
    : \int_H \big| E(X\mid\mathcal{H}) - X \big| \, dP = 0,
    and this implies E(X\mid\mathcal{H}) = X almost everywhere. \square
  • In particular, for sub-σ-algebras \mathcal{H}_1\subset\mathcal{H}_2 \subset\mathcal{F} we have E(E(X\mid\mathcal{H}_1)\mid\mathcal{H}_2) = E(X\mid\mathcal{H}_1). (Note that this is different from the tower property below.)
  • If Z is a random variable, then \operatorname{E}(f(Z) \mid Z)=f(Z). In its simplest form, this says \operatorname{E}(Z \mid Z)=Z.
• Pulling out known factors:
  • If X is \mathcal{H}-measurable, then E(XY\mid\mathcal{H}) = X \, E(Y\mid\mathcal{H}).
    Proof: all random variables here are assumed without loss of generality to be non-negative; the general case can be treated with X = X^+ - X^-. Fix A \in \mathcal{H} and let X = 1_A. Then for any H \in \mathcal{H}
    : \int_H E(1_A Y \mid \mathcal{H}) \, dP = \int_H 1_A Y \, dP = \int_{A \cap H} Y \, dP = \int_{A\cap H} E(Y\mid\mathcal{H}) \, dP = \int_H 1_A E(Y \mid \mathcal{H}) \, dP,
    hence E(1_A Y \mid \mathcal{H}) = 1_A E(Y\mid\mathcal{H}) almost everywhere. Any simple function is a finite linear combination of indicator functions, so by linearity the above property holds for simple functions: if X_n is a simple function, then E(X_n Y \mid \mathcal{H}) = X_n \, E(Y\mid \mathcal{H}).
    Now let X be \mathcal{H}-measurable. Then there exists a sequence of simple functions \{ X_n \}_{n\geq 1} converging monotonically (here meaning X_n \leq X_{n+1}) and pointwise to X. Consequently, for Y \geq 0, the sequence \{ X_n Y \}_{n\geq 1} converges monotonically and pointwise to X Y.
    Also, since E(Y\mid\mathcal{H}) \geq 0, the sequence \{ X_n E(Y\mid\mathcal{H}) \}_{n\geq 1} converges monotonically and pointwise to X \, E(Y\mid\mathcal{H}). Combining the special case proved for simple functions, the definition of conditional expectation, and the monotone convergence theorem:
    : \int_H X \, E(Y\mid\mathcal{H}) \, dP = \int_H \lim_{n \to \infty} X_n \, E(Y\mid\mathcal{H}) \, dP = \lim_{n \to \infty} \int_H X_n E(Y\mid\mathcal{H}) \, dP = \lim_{n \to \infty} \int_H E(X_n Y\mid\mathcal{H}) \, dP = \lim_{n \to \infty} \int_H X_n Y \, dP = \int_H \lim_{n\to \infty} X_n Y \, dP = \int_H XY \, dP = \int_H E(XY\mid\mathcal{H}) \, dP.
    This holds for all H\in \mathcal{H}, whence X \, E(Y\mid\mathcal{H}) = E(XY\mid\mathcal{H}) almost everywhere. \square
  • If Z is a random variable, then \operatorname{E}(f(Z) Y \mid Z)=f(Z)\operatorname{E}(Y \mid Z).
• Law of total expectation: E(E(X \mid \mathcal{H})) = E(X).
• Tower property:
  • For sub-σ-algebras \mathcal{H}_1\subset\mathcal{H}_2 \subset\mathcal{F} we have E(E(X\mid\mathcal{H}_2)\mid\mathcal{H}_1) = E(X\mid\mathcal{H}_1).
  • A special case \mathcal{H}_1=\{\emptyset, \Omega\} recovers the law of total expectation: E(E(X\mid\mathcal{H}_2) ) = E(X).
  • Another special case is when Z is an \mathcal{H}-measurable random variable. Then \sigma(Z) \subset \mathcal{H} and thus E(E(X \mid \mathcal{H}) \mid Z) = E(X \mid Z).
  • Doob martingale property: the above with Z = E(X \mid \mathcal{H}) (which is \mathcal{H}-measurable), together with \operatorname{E}(Z \mid Z)=Z, gives E(X \mid E(X \mid \mathcal{H})) = E(X \mid \mathcal{H}).
  • For random variables X,Y we have E(E(X\mid Y)\mid f(Y)) = E(X\mid f(Y)).
  • For random variables X,Y,Z we have E(E(X\mid Y,Z)\mid Y) = E(X\mid Y).
• Linearity: we have E(X_1 + X_2 \mid \mathcal{H}) = E(X_1 \mid \mathcal{H}) + E(X_2 \mid \mathcal{H}) and E(a X \mid \mathcal{H}) = a\,E(X \mid \mathcal{H}) for a\in\mathbb{R}.
• Positivity: If X \ge 0 then E(X \mid \mathcal{H}) \ge 0.
• Monotonicity: If X_1 \le X_2 then E(X_1 \mid \mathcal{H}) \le E(X_2 \mid \mathcal{H}).
• Monotone convergence: If 0\leq X_n \uparrow X then E(X_n \mid \mathcal{H}) \uparrow E(X \mid \mathcal{H}).
• Dominated convergence: If X_n \to X and |X_n| \le Y with Y \in L^1, then E(X_n \mid \mathcal{H}) \to E(X \mid \mathcal{H}).
• Fatou's lemma: If \textstyle E(\inf_n X_n \mid \mathcal{H}) > -\infty then \textstyle E(\liminf_{n\to\infty} X_n \mid \mathcal{H}) \le \liminf_{n\to\infty} E(X_n \mid \mathcal{H}).
• Jensen's inequality: If f \colon \mathbb{R} \rightarrow \mathbb{R} is a convex function, then f(E(X\mid \mathcal{H})) \le E(f(X)\mid\mathcal{H}).
• Conditional variance: Using the conditional expectation we can define, by analogy with the definition of the variance as the mean square deviation from the average, the conditional variance.
  • Definition: \operatorname{Var}(X \mid \mathcal{H}) = \operatorname{E}\bigl( (X - \operatorname{E}(X \mid \mathcal{H}))^2 \mid \mathcal{H} \bigr)
  • Algebraic formula for the variance: \operatorname{Var}(X \mid \mathcal{H}) = \operatorname{E}(X^2 \mid \mathcal{H}) - \bigl(\operatorname{E}(X \mid \mathcal{H})\bigr)^2
  • Law of total variance: \operatorname{Var}(X) = \operatorname{E}(\operatorname{Var}(X \mid \mathcal{H})) + \operatorname{Var}(\operatorname{E}(X \mid \mathcal{H})). (A numerical sketch of this and of the law of total expectation appears after this list.)
• Martingale convergence: For a random variable X that has finite expectation, we have E(X\mid\mathcal{H}_n) \to E(X\mid\mathcal{H}) if either \mathcal{H}_1 \subset \mathcal{H}_2 \subset \dotsb is an increasing sequence of sub-σ-algebras and \textstyle \mathcal{H} = \sigma(\bigcup_{n=1}^\infty \mathcal{H}_n), or if \mathcal{H}_1 \supset \mathcal{H}_2 \supset \dotsb is a decreasing sequence of sub-σ-algebras and \textstyle \mathcal{H} = \bigcap_{n=1}^\infty \mathcal{H}_n.
• Conditional expectation as L^2-projection: If X,Y are in the Hilbert space of square-integrable real random variables (real random variables with finite second moment), then
  • for \mathcal{H}-measurable Y, we have E(Y(X - E(X\mid\mathcal{H}))) = 0; i.e., the conditional expectation E(X\mid\mathcal{H}) is, in the sense of the L^2(P) scalar product, the orthogonal projection of X onto the linear subspace of \mathcal{H}-measurable functions. (This allows one to define and prove the existence of the conditional expectation based on the Hilbert projection theorem.)
  • the mapping X \mapsto \operatorname{E}(X\mid\mathcal{H}) is self-adjoint: \operatorname E(X \operatorname E(Y \mid \mathcal{H})) = \operatorname E\left(\operatorname E(X \mid \mathcal{H}) \operatorname E(Y \mid \mathcal{H})\right) = \operatorname E(\operatorname E(X \mid \mathcal{H}) Y).
• Conditioning is a contractive projection of L^p spaces, L^p(\Omega, \mathcal{F}, P) \rightarrow L^p(\Omega, \mathcal{H}, P); i.e., \operatorname{E}\big(|\operatorname{E}(X \mid\mathcal{H})|^p \big) \le \operatorname{E}\big(|X|^p\big) for any p ≥ 1.
• Doob's conditional independence property: If X,Y are conditionally independent given Z, then P(X \in B\mid Y,Z) = P(X \in B\mid Z) (equivalently, E(1_{\{X \in B\}}\mid Y,Z) = E(1_{\{X \in B\}} \mid Z)).
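As a numerical illustration of the law of total expectation and the law of total variance listed above, the following sketch (my own example; the distribution of X given Z is arbitrary) conditions on a discrete random variable Z, so that E(X\mid Z) and \operatorname{Var}(X\mid Z) are simply the per-group sample moments.

```python
# Sketch: check E(E(X|Z)) = E(X) and Var(X) = E(Var(X|Z)) + Var(E(X|Z))
# by Monte Carlo.  Assumptions (mine): Z takes values 0, 1, 2 and the mean of
# X depends on the group, plus standard normal noise.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
Z = rng.integers(0, 3, n)                      # conditioning variable (sigma(Z))
X = np.where(Z == 0, 1.0, np.where(Z == 1, 4.0, 9.0)) + rng.normal(0.0, 1.0, n)

# E(X | Z) and Var(X | Z) as functions of Z: per-group sample moments.
group_mean = np.array([X[Z == k].mean() for k in range(3)])
group_var = np.array([X[Z == k].var() for k in range(3)])
E_X_given_Z = group_mean[Z]
Var_X_given_Z = group_var[Z]

print("law of total expectation:", E_X_given_Z.mean(), "~", X.mean())
print("law of total variance:   ",
      Var_X_given_Z.mean() + E_X_given_Z.var(), "~", X.var())
```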