As discussed above, there are several context-dependent ways of defining the expected value. The simplest and original definition deals with the case of finitely many possible outcomes, such as in the flip of a coin. With the theory of infinite series, this can be extended to the case of countably many possible outcomes. It is also very common to consider the distinct case of random variables dictated by (piecewise-)continuous
probability density functions, as these arise in many natural contexts. All of these specific definitions may be viewed as special cases of the general definition based upon the mathematical tools of
measure theory and
Lebesgue integration, which provide these different contexts with an axiomatic foundation and common language. Any definition of expected value may be extended to define an expected value of a multidimensional random variable, i.e. a
random vector X. It is defined component by component, as E[X]_i = E[X_i]. Similarly, one may define the expected value of a
random matrix X with components X_{ij} by E[X]_{ij} = E[X_{ij}].
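As an illustration of the component-wise definition, the following sketch (a hypothetical example using NumPy; the particular distributions are illustrative choices, not taken from the text) estimates E[X] of a three-dimensional random vector by averaging samples coordinate by coordinate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw 100_000 samples of a 3-dimensional random vector X whose components
# are independent: X_1 ~ Uniform(0, 1), X_2 ~ Exponential(1), X_3 ~ Normal(2, 1).
samples = np.column_stack([
    rng.uniform(0, 1, 100_000),
    rng.exponential(1.0, 100_000),
    rng.normal(2.0, 1.0, 100_000),
])

# E[X] is defined component by component, (E[X])_i = E[X_i],
# so the sample mean is likewise taken along each column.
estimate = samples.mean(axis=0)
print(estimate)   # approximately [0.5, 1.0, 2.0]
```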
Random variables with finitely many outcomes Consider a random variable X with a
finite list x_1,...,x_k of possible outcomes, each of which (respectively) has probability p_1,...,p_k of occurring. The expectation of X is defined as \operatorname{E}[X] =x_1p_1 + x_2p_2 + \cdots + x_kp_k. Since the probabilities must satisfy p_1 + ...+ p_k = 1, it is natural to interpret E[X] as a
weighted average of the x_i values, with weights given by their probabilities p_i. In the special case that all possible outcomes are
equiprobable (that is p_1 = ...=p_k), the weighted average is given by the standard
average. In the general case, the expected value takes into account the fact that some outcomes are more likely than others.
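As a minimal sketch of this definition (the outcome and probability values below are illustrative only), the expectation of a random variable with finitely many outcomes is just the probability-weighted sum:

```python
def expectation(outcomes, probabilities):
    """Weighted average: E[X] = x_1*p_1 + ... + x_k*p_k."""
    assert abs(sum(probabilities) - 1.0) < 1e-12, "probabilities must sum to 1"
    return sum(x * p for x, p in zip(outcomes, probabilities))

# Example: X takes the value 10 with probability 0.2 and 0 with probability 0.8.
print(expectation([10, 0], [0.2, 0.8]))  # 2.0
```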
Examples • Let X represent the outcome of a roll of a fair six-sided die. More specifically, X will be the number of
pips showing on the top face of the die after the toss. The possible values for X are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of \tfrac{1}{6}. The expectation of X is \operatorname{E}[X] = 1 \cdot \frac{1}{6} + 2 \cdot \frac{1}{6} + 3\cdot\frac{1}{6} + 4\cdot\frac{1}{6} + 5\cdot\frac{1}{6} + 6\cdot\frac{1}{6} = 3.5. If one rolls the die n times and computes the average (
arithmetic mean) of the results, then as n grows, the average will
almost surely converge to the expected value, a fact known as the
strong law of large numbers. • The
roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable X represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability \tfrac{1}{38} in American roulette), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be \operatorname{E}[\,\text{gain from }\$1\text{ bet}\,] = -\$1 \cdot \frac{37}{38} + \$35 \cdot \frac{1}{38} = -\$\frac{1}{19}. That is, the expected value to be won from a $1 bet is −$\tfrac{1}{19}. Thus, in 190 bets, the net loss will probably be about $10. Both examples are illustrated numerically in the sketch below.
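The following simulation sketch (hypothetical, using only the Python standard library) shows the running average of fair-die rolls settling near 3.5, and the exact expected gain of the roulette bet coming out to −$1/19:

```python
import random
from fractions import Fraction

random.seed(1)

# Fair six-sided die: the running average of n rolls approaches E[X] = 3.5
# as n grows (strong law of large numbers).
rolls = [random.randint(1, 6) for _ in range(100_000)]
print(sum(rolls) / len(rolls))   # close to 3.5

# American roulette "straight up" bet of $1: lose $1 with probability 37/38,
# win $35 with probability 1/38.
expected_gain = Fraction(-1) * Fraction(37, 38) + Fraction(35) * Fraction(1, 38)
print(expected_gain)             # -1/19, i.e. about -$0.053 per bet
```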
Random variables with countably infinitely many outcomes Informally, the expectation of a random variable with a
countably infinite set of possible outcomes is defined analogously as the weighted average of all possible outcomes, where the weights are given by the probabilities of realizing each given value. This is to say that \operatorname{E}[X] = \sum_{i=1}^\infty x_i\, p_i, where x_1,x_2,... are the possible outcomes of the random variable X and p_1,p_2,... are their corresponding probabilities. In many non-mathematical textbooks, this is presented as the full definition of expected values in this context. However, there are some subtleties with infinite summation, so the above formula is not suitable as a mathematical definition. In particular, the
Riemann series theorem of
mathematical analysis illustrates that the value of certain infinite sums involving positive and negative summands depends on the order in which the summands are given. Since the outcomes of a random variable have no naturally given order, this creates a difficulty in defining expected value precisely. For this reason, many mathematical textbooks only consider the case that the infinite sum given above
converges absolutely, which implies that the infinite sum is a finite number independent of the ordering of summands. In the alternative case that the infinite sum does not converge absolutely, one says the random variable
does not have finite expectation. Example Suppose x_i = i and p_i = \tfrac{c}{i\,\cdot\,2^i} for i = 1, 2, 3, \ldots, where c = \tfrac{1}{\ln 2} is the scaling factor which makes the probabilities sum to 1: \sum_{i=1}^\infty p_i = \sum_{i=1}^\infty \frac{c}{i\cdot 2^i} = c\, \sum_{i=1}^\infty \frac1i\!\ \left(\frac12\right)^i = c\!\ \ln2 = 1 by the
logarithm series for \ln\left(1 - \tfrac12\right) = -\ln2. Then we have \mathrm{E}[X] = \sum_{i=1}^\infty x_i p_i = \sum_{i=1}^\infty i\cdot\frac{c}{i\cdot 2^i} = c\, \sum_{i=1}^\infty \left(\frac12\right)^i = c\cdot 1 = \frac{1}{\ln 2} due to the
geometric series for 1 \big/ \big(1 - \tfrac12\big).
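A partial-sum computation (a sketch; the truncation point of 60 terms is an arbitrary choice) shows the series from this example approaching 1/\ln 2 \approx 1.4427:

```python
import math

c = 1 / math.log(2)

# p_i = c / (i * 2**i); the terms x_i * p_i = c / 2**i form a geometric series.
partial_sum = sum(i * c / (i * 2**i) for i in range(1, 60))

print(partial_sum)        # ~1.442695...
print(1 / math.log(2))    # the exact value of E[X]
```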
Random variables with density Now consider a random variable X which has a
probability density function given by a function f on the
real number line. This means that the probability of X taking on any value in a given
open interval is given by the
integral of f over that interval. The expectation of X is then given by the integral \operatorname{E}[X] = \int_{-\infty}^\infty x f(x)\, dx. A general and mathematically precise formulation of this definition uses
measure theory and
Lebesgue integration, and the corresponding theory of
absolutely continuous random variables is described in the next section. The density functions of many common distributions are
piecewise continuous, and as such the theory is often developed in this restricted setting. For such functions, it is sufficient to consider only the standard
Riemann integration. Sometimes
continuous random variables are defined as those corresponding to this special class of densities, although the term is used differently by various authors. Analogously to the countably-infinite case above, there are subtleties with this expression due to the infinite region of integration. Such subtleties can be seen concretely if the distribution of X is given by the
Cauchy distribution, so that f(x) = (x^2 + \pi^2)^{-1}. It is straightforward to compute in this case that \int_a^b xf(x)\,dx=\int_a^b \frac{x}{x^2+\pi^2}\,dx=\frac{1}{2}\ln\frac{b^2+\pi^2}{a^2+\pi^2}. The limit of this expression as a \to - \infty and b \to + \infty does not exist: if the limits are taken so that a = -b, then the limit is zero, while if the constraint 2a = -b is taken, then the limit is \ln(2). To avoid such ambiguities, in mathematical textbooks it is common to require that the given integral
converges absolutely, with E[X] left undefined otherwise. However, measure-theoretic notions as given below can be used to give a systematic definition of E[X] for more general random variables X.
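The dependence on how the limits are taken can be checked numerically. The sketch below evaluates the closed-form expression \tfrac12\ln\frac{b^2+\pi^2}{a^2+\pi^2} at increasingly large cut-offs (the values of R are arbitrary) for the two truncation schemes a = -b and 2a = -b:

```python
import math

def truncated_integral(a, b):
    # Closed form of the integral of x / (x**2 + pi**2) over [a, b],
    # i.e. the Cauchy density discussed above.
    return 0.5 * math.log((b**2 + math.pi**2) / (a**2 + math.pi**2))

for R in (1e3, 1e6, 1e9):
    symmetric = truncated_integral(-R, R)       # a = -b: tends to 0
    lopsided = truncated_integral(-R, 2 * R)    # 2a = -b: tends to ln 2
    print(f"R = {R:.0e}:  symmetric = {symmetric:.6f},  lopsided = {lopsided:.6f}")

print(math.log(2))   # ≈ 0.693147, the limit of the lopsided truncation
```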
Arbitrary real-valued random variables All definitions of the expected value may be expressed in the language of
measure theory. In general, if X is a real-valued
random variable defined on a
probability space (\Omega,\Sigma,P), then the expected value of X, denoted by E[X], is defined as the
Lebesgue integral \operatorname{E}[X] = \int_\Omega X\,d\operatorname{P}. Despite the more abstract setting, this definition is extremely similar in nature to the very simplest definition of expected values, given above, as certain weighted averages. This is because, in measure theory, the value of the Lebesgue integral of X is defined via weighted averages of
approximations of X which take on finitely many values. Moreover, if given a random variable with finitely or countably many possible values, the Lebesgue theory of expectation is identical to the summation formulas given above. However, the Lebesgue theory clarifies the scope of the theory of probability density functions. A random variable X is said to be
absolutely continuous if any of the following conditions are satisfied: • there is a nonnegative
measurable function f on the real line such that \operatorname{P}(X \in A) = \int_A f(x) \, dx, for any
Borel set A, where the integral is taken in the sense of Lebesgue. • the
cumulative distribution function of X is
absolutely continuous. • for any Borel set A of real numbers with
Lebesgue measure equal to zero, the probability of X being valued in A is also equal to zero. • for any positive number \varepsilon there is a positive number \delta such that: if A is a Borel set with Lebesgue measure less than \delta, then the probability of X being valued in A is less than \varepsilon. These conditions are all equivalent, although this is nontrivial to establish. In this definition, f is called the
probability density function of X (relative to Lebesgue measure). According to the change-of-variables formula for Lebesgue integration, combined with the
law of the unconscious statistician, it follows that \operatorname{E}[X] \equiv \int_\Omega X\,d\operatorname{P} = \int_{\R} x f(x)\, dx for any absolutely continuous random variable X. The above discussion of continuous random variables is thus a special case of the general Lebesgue theory, due to the fact that every piecewise-continuous function is measurable. The expected value of any real-valued random variable X can also be defined on the graph of its
cumulative distribution function F by an equality of areas. In fact, \operatorname{E}[X] = \mu for a real number \mu if and only if the two regions in the x-y-plane, described by x \le \mu, \;\, 0\le y \le F(x) \quad\text{or}\quad x \ge \mu, \;\, F(x) \le y \le 1 respectively, have the same finite area, i.e. if \int_{-\infty}^\mu F(x)\,dx = \int_\mu^\infty \big(1 - F(x)\big)\,dx and both
improper Riemann integrals converge. Finally, this is equivalent to the representation \operatorname{E}[X] = \int_0^\infty \bigl(1 - F(x)\bigr) \, dx - \int_{-\infty}^0 F(x) \, dx, also with convergent integrals.
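The identity \operatorname{E}[X] = \int_0^\infty \bigl(1 - F(x)\bigr)\,dx - \int_{-\infty}^0 F(x)\,dx can be checked numerically. The sketch below does so for a normal distribution with mean 0.7 (an arbitrary choice, not from the text), using crude Riemann sums over a truncated range:

```python
import math

mu, sigma = 0.7, 1.0

def F(x):
    """CDF of a Normal(mu, sigma^2) random variable."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Crude Riemann sums on [0, 20] and [-20, 0]; the normal tails beyond +/-20
# contribute a negligible amount.
dx = 1e-3
xs_pos = [i * dx for i in range(int(20 / dx))]
xs_neg = [-i * dx for i in range(1, int(20 / dx))]

upper = sum((1.0 - F(x)) * dx for x in xs_pos)   # integral of 1 - F over [0, inf)
lower = sum(F(x) * dx for x in xs_neg)           # integral of F over (-inf, 0]
print(upper - lower)                              # ≈ 0.7 = E[X]
```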
Example Let the daily
precipitation (unit: \textstyle \mathrm{L}/\mathrm{m}^2 = \mathrm{mm}) at a location be simply modeled as a real-valued random variable X for which the following holds: \mathrm{P}(X > x) = \alpha\!\ \mathrm{e}^{-\lambda x}\; \text{ if } x \ge 0, with two positive constants \alpha and \lambda. The cumulative distribution function F\colon\, \R\to\R of X is thus obtained as F(x) = \begin{cases} 0 &\text{for } x < 0, \\ 1 - \alpha\!\ \mathrm{e}^{-\lambda x} &\text{for } x \ge 0. \end{cases} Its only point of
discontinuity is x = 0, with jump height 1 - \alpha. Therefore, the random variable X is neither discrete nor does it have a density. The
latter representation of \mathrm{E}[X] as a difference of two improper Riemann integrals leads to \mathrm{E}[X] = \int_0^\infty \alpha\!\ \mathrm{e}^{-\lambda x}\, dx = \lim_{b\to\infty} \left[-\frac{\alpha}{\lambda}\,\mathrm{e}^{-\lambda x} \right]_0^b = \frac{\alpha}{\lambda}\,. For instance, the rough values \alpha = \tfrac12 and \lambda = \tfrac{1}{4\!\ \mathrm{mm}} result in the expected value \mathrm{E}[X] = 2\,\mathrm{mm}.
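For these rough values, the improper integral can also be checked with a crude numerical sum (the step size and cut-off below are arbitrary choices):

```python
import math

alpha, lam = 0.5, 0.25   # lambda = 1/(4 mm), as in the example

# E[X] = integral over [0, inf) of alpha * exp(-lam * x) dx = alpha / lam.
dx, cutoff = 1e-3, 100.0  # the integrand is negligible beyond the cut-off
numeric = sum(alpha * math.exp(-lam * i * dx) * dx for i in range(int(cutoff / dx)))

print(numeric)        # ≈ 2.0 (mm)
print(alpha / lam)    # exact value alpha/lambda = 2.0 mm
```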
Infinite expected values Expected values as defined above are automatically finite numbers. However, in many cases it is fundamental to be able to consider expected values of \pm \infty. This is intuitive, for example, in the case of the
St. Petersburg paradox, in which one considers a random variable with possible outcomes x_i = 2^i, with associated probabilities p_i = 2^{-i}, for i ranging over all positive integers. According to the summation formula in the case of random variables with countably many outcomes, one has \operatorname{E}[X]= \sum_{i=1}^\infty x_i\,p_i = 2\cdot \frac{1}{2}+4\cdot\frac{1}{4} + 8\cdot\frac{1}{8}+ 16\cdot\frac{1}{16}+ \cdots = 1 + 1 + 1 + 1 + \cdots. It is natural to say that the expected value equals + \infty. There is a rigorous mathematical theory underlying such ideas, which is often taken as part of the definition of the Lebesgue integral. The first fundamental observation is that, whichever of the above definitions are followed, any
nonnegative random variable whatsoever can be given an unambiguous expected value; whenever absolute convergence fails, then the expected value can be defined as + \infty. The second fundamental observation is that any random variable can be written as the difference of two nonnegative random variables. Given a random variable X, one defines the
positive and negative parts by X^+ = \max(X,0) and X^- = \max(-X,0). These are nonnegative random variables, and it can be directly checked that X = X^+ - X^-. Since E[X^+] and E[X^-] are both then defined as either nonnegative numbers or +\infty, it is then natural to define: \operatorname{E}[X] = \begin{cases} \operatorname{E}[X^+] - \operatorname{E}[X^-] & \text{if } \operatorname{E}[X^+] < \infty \text{ and } \operatorname{E}[X^-] < \infty,\\ +\infty & \text{if } \operatorname{E}[X^+] = \infty \text{ and } \operatorname{E}[X^-] < \infty,\\ -\infty & \text{if } \operatorname{E}[X^+] < \infty \text{ and } \operatorname{E}[X^-] = \infty,\\ \text{undefined} & \text{if } \operatorname{E}[X^+] = \infty \text{ and } \operatorname{E}[X^-] = \infty. \end{cases} According to this definition, E[X] exists and is finite if and only if E[X^+] and E[X^-] are both finite. Due to the formula \left| X \right| = X^+ + X^-, this is the case if and only if E[\left| X \right|] is finite, and this is equivalent to the absolute convergence conditions in the definitions above. As such, the present considerations do not define finite expected values in any cases not previously considered; they are only useful for infinite expectations. • In the case of the St. Petersburg paradox, one has X^- = 0 and so E[X] = + \infty as desired. • Suppose the random variable X takes values 1,-2,3,-4,... with respective probabilities 6\pi^{-2},6(2\pi)^{-2},6(3\pi)^{-2},6(4\pi)^{-2},.... Then it follows that X^+ takes value 2k-1 with probability 6((2k-1)\pi)^{-2} for each positive integer k, and takes value 0 with remaining probability. Similarly, X^- takes value 2k with probability 6(2k\pi)^{-2} for each positive integer k and takes value 0 with remaining probability. Using the definition for non-negative random variables, one can show that both E[X^+] = \infty and E[X^-] = \infty (see
Harmonic series); a numerical sketch of both divergent partial sums follows this list. Hence, in this case the expectation of X is undefined. • Similarly, the Cauchy distribution, as discussed above, has undefined expectation.
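For the second example above, the divergence of both E[X^+] and E[X^-] can be made visible with partial sums; in the sketch below (truncation points arbitrary), the contributions reduce to 6/(\pi^2(2k-1)) and 6/(\pi^2\cdot 2k), which grow like the harmonic series:

```python
import math

def partial_sums(n_terms):
    # E[X+] truncated: sum over k of (2k-1) * 6 / ((2k-1) * pi)**2 = 6 / (pi**2 * (2k-1))
    # E[X-] truncated: sum over k of  2k    * 6 / ( 2k    * pi)**2 = 6 / (pi**2 * 2k)
    pos = sum(6.0 / (math.pi**2 * (2 * k - 1)) for k in range(1, n_terms + 1))
    neg = sum(6.0 / (math.pi**2 * 2 * k) for k in range(1, n_terms + 1))
    return pos, neg

for n in (10**2, 10**4, 10**6):
    pos, neg = partial_sums(n)
    print(f"n = {n:>7}:  E[X+] partial = {pos:.3f},  E[X-] partial = {neg:.3f}")

# Both partial sums keep growing without bound (like the harmonic series),
# so E[X+] = E[X-] = infinity and E[X] is undefined.
```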
Tail-sum formula In the case of a non-negative integer-valued random variable X, the expected value can also be expressed in terms of its
tail probabilities (sometimes called the
tail-sum formula): \operatorname{E}[X] = \sum_{k=0}^{\infty} \Pr(X > k). A more general version holds for any non-negative random variable (discrete or continuous): \operatorname{E}[X] = \int_{0}^{\infty} \Pr(X > t)\, dt, where the integrand is the
survival function of X.
Expected values of common distributions