Mean, variance, moments, and median
The mean or
expected value of an exponentially distributed random variable
X with rate parameter
λ is given by \operatorname{E}[X] = \frac{1}{\lambda}. In light of the examples given
below, this makes sense; a person who receives an average of two telephone calls per hour can expect that the time between consecutive calls will be 0.5 hour, or 30 minutes. The
variance of
X is given by \operatorname{Var}[X] = \frac{1}{\lambda^2}, so the
standard deviation is equal to the mean. The
moments of
X, for n\in\N, are given by \operatorname{E}\left[X^n\right] = \frac{n!}{\lambda^n}. The
central moments of
X, for n\in\N, are given by \mu_n = \frac{!n}{\lambda^n} = \frac{n!}{\lambda^n}\sum^n_{k=0}\frac{(-1)^k}{k!}, where !n is the
subfactorial of
n. The
median of
X is given by \operatorname{m}[X] = \frac{\ln(2)}{\lambda}, where \ln refers to the
natural logarithm. Thus the
absolute difference between the mean and median is \left|\operatorname{E}\left[X\right] - \operatorname{m}\left[X\right]\right| = \frac{1 - \ln(2)}{\lambda} in accordance with the
median-mean inequality.
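These closed forms are easy to check numerically. The following sketch (the rate λ = 2 is an arbitrary illustrative choice) compares Monte Carlo estimates from Python's standard library against the formulas above:

```python
import math
import random
import statistics

lam = 2.0  # rate parameter λ (arbitrary illustrative choice)

# Closed-form values from the text
mean = 1 / lam                # E[X] = 1/λ
var = 1 / lam ** 2            # Var[X] = 1/λ²
median = math.log(2) / lam    # m[X] = ln(2)/λ
third_moment = math.factorial(3) / lam ** 3  # E[X³] = 3!/λ³

# Monte Carlo estimates using the standard library's exponential sampler
random.seed(0)
samples = [random.expovariate(lam) for _ in range(200_000)]

print(statistics.mean(samples), mean)        # both ≈ 0.5
print(statistics.variance(samples), var)     # both ≈ 0.25
print(statistics.median(samples), median)    # both ≈ ln(2)/2
print(statistics.fmean(x ** 3 for x in samples), third_moment)  # both ≈ 0.75
print(mean - median, (1 - math.log(2)) / lam)  # mean - median gap = (1 - ln 2)/λ
```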
Memorylessness property of exponential random variable An exponentially distributed random variable
T obeys the relation \Pr \left (T > s + t \mid T > s \right ) = \Pr(T > t), \qquad \forall s, t \ge 0. This can be seen by considering the
complementary cumulative distribution function: \begin{align} \Pr\left(T > s + t \mid T > s\right) &= \frac{\Pr\left(T > s + t \cap T > s\right)}{\Pr\left(T > s\right)} \\[4pt] &= \frac{\Pr\left(T > s + t \right)}{\Pr\left(T > s\right)} \\[4pt] &= \frac{e^{-\lambda(s + t)}}{e^{-\lambda s}} \\[4pt] &= e^{-\lambda t} \\[4pt] &= \Pr(T > t). \end{align} When
T is interpreted as the waiting time for an event to occur relative to some initial time, this relation implies that, if
T is conditioned on a failure to observe the event over some initial period of time
s, the distribution of the remaining waiting time is the same as the original unconditional distribution. For example, if an event has not occurred after 30 seconds, the
conditional probability that the event will take at least 10 more seconds to occur is equal to the unconditional probability of observing the event more than 10 seconds after the initial time. The exponential distribution and the
geometric distribution are
the only memoryless probability distributions (the former continuous, the latter discrete). The exponential distribution is consequently also the only continuous probability distribution that has a constant
failure rate.
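The memorylessness relation can be verified by simulation. A minimal sketch, where λ, s, and t are arbitrary illustrative values:

```python
import math
import random

lam, s, t = 1.5, 0.4, 0.7  # arbitrary illustrative rate and time offsets

random.seed(1)
samples = [random.expovariate(lam) for _ in range(500_000)]

# Empirical conditional probability Pr(T > s + t | T > s)
survivors = [x for x in samples if x > s]
cond = sum(x > s + t for x in survivors) / len(survivors)

# Unconditional probability Pr(T > t), and its closed form e^(-λt)
uncond = sum(x > t for x in samples) / len(samples)

print(cond, uncond, math.exp(-lam * t))  # all three approximately equal
```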
Quantiles The
quantile function (inverse cumulative distribution function) for Exp(
λ) is F^{-1}(p;\lambda) = \frac{-\ln(1-p)}{\lambda},\qquad 0 \le p < 1. The
quartiles are therefore:
• first quartile: ln(4/3)/λ
• median: ln(2)/λ
• third quartile: ln(4)/λ
As a consequence, the
interquartile range is ln(3)/λ.
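The quartile values follow directly from the quantile function; a small sketch (λ = 1 for simplicity, so the quartiles are just the logarithms themselves):

```python
import math

def exp_quantile(p, lam):
    """Inverse CDF of Exp(λ): F⁻¹(p; λ) = -ln(1-p)/λ, for 0 <= p < 1."""
    return -math.log(1 - p) / lam

lam = 1.0  # illustrative rate

q1 = exp_quantile(0.25, lam)  # first quartile  = ln(4/3)/λ
q2 = exp_quantile(0.50, lam)  # median          = ln(2)/λ
q3 = exp_quantile(0.75, lam)  # third quartile  = ln(4)/λ

print(q1, q2, q3)
print(q3 - q1, math.log(3) / lam)  # interquartile range = ln(3)/λ
```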
Conditional Value at Risk (Expected Shortfall) The conditional value at risk (CVaR), also known as the
expected shortfall or superquantile for Exp(
λ) is derived as follows: \begin{align} \bar{q}_\alpha (X) &= \frac{1}{1-\alpha} \int_{\alpha}^{1} q_p (X) dp \\ &= \frac{1}{(1-\alpha)} \int_{\alpha}^{1} \frac{-\ln (1 - p )}{\lambda} dp \\ &= \frac{-1}{\lambda(1-\alpha)} \int_{1-\alpha}^{0} -\ln (y ) dy \\ &= \frac{-1}{\lambda(1-\alpha)} \int_{0}^{1 - \alpha} \ln (y ) dy \\ &= \frac{-1}{\lambda(1-\alpha)} [ ( 1-\alpha) \ln(1-\alpha) - (1-\alpha) ] \\ &= \frac{ - \ln(1-\alpha) + 1 } { \lambda} \\ \end{align}
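The closed form can be cross-checked by averaging the quantile function numerically over p ∈ [α, 1); a sketch with illustrative values of λ and α:

```python
import math

lam, alpha = 2.0, 0.95  # illustrative rate and probability level

# Closed form from the derivation above: (1 - ln(1-α))/λ
cvar_closed = (1 - math.log(1 - alpha)) / lam

# Midpoint-rule average of the quantile function q_p = -ln(1-p)/λ over [α, 1).
# The integrand has a logarithmic singularity at p = 1, but it is integrable.
n = 200_000
h = (1 - alpha) / n
total = sum(-math.log(1 - (alpha + (k + 0.5) * h)) / lam for k in range(n))
cvar_numeric = total * h / (1 - alpha)

print(cvar_closed, cvar_numeric)  # should agree to several decimal places
```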
Buffered Probability of Exceedance (bPOE) The buffered probability of exceedance is one minus the probability level at which the CVaR equals the threshold x. It is derived as follows: setting \bar{q}_\alpha(X) = \frac{-\ln(1-\alpha) + 1}{\lambda} = x and solving for 1 - \alpha yields, for x \ge \tfrac{1}{\lambda}, \bar{p}_x(X) = 1 - \alpha = e^{1 - \lambda x}.
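Solving the CVaR expression \bar{q}_\alpha(X) = (1 - \ln(1-\alpha))/\lambda = x for 1 - \alpha gives the bPOE as e^{1-\lambda x} (for x \ge 1/\lambda). A minimal consistency check with illustrative values:

```python
import math

lam, x = 2.0, 1.5  # illustrative rate and threshold, chosen so that x >= 1/λ

# bPOE: the probability 1-α at which the CVaR equals the threshold x
bpoe = math.exp(1 - lam * x)

# Consistency check: the CVaR at level α = 1 - bPOE should recover x
alpha = 1 - bpoe
cvar = (1 - math.log(1 - alpha)) / lam
print(bpoe, cvar, x)  # cvar equals x
```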
Distribution of the minimum of exponential random variables Let
X1, ...,
Xn be
independent exponentially distributed random variables with rate parameters
λ1, ...,
λn. Then \min\left\{X_1, \dotsc, X_n \right\} is also exponentially distributed, with parameter \lambda = \lambda_1 + \dotsb + \lambda_n. This can be seen by considering the
complementary cumulative distribution function: \begin{align} &\Pr\left(\min\{X_1, \dotsc, X_n\} > x\right) \\ ={} &\Pr\left(X_1 > x, \dotsc, X_n > x\right) \\ ={} &\prod_{i=1}^n \Pr\left(X_i > x\right) \\ ={} &\prod_{i=1}^n \exp\left(-x\lambda_i\right) = \exp\left(-x\sum_{i=1}^n \lambda_i\right). \end{align} The index of the variable which achieves the minimum is distributed according to the categorical distribution \Pr\left(X_k = \min\{X_1, \dotsc, X_n\}\right) = \frac{\lambda_k}{\lambda_1 + \dotsb + \lambda_n}. A proof can be seen by letting I = \operatorname{argmin}_{i \in \{1, \dotsb, n\}}\{X_1, \dotsc, X_n\}. Then, \begin{align} \Pr (I = k) &= \int_{0}^{\infty} \Pr(X_k = x) \Pr(\forall_{i\neq k}X_{i} > x ) \,dx \\ &= \int_{0}^{\infty} \lambda_k e^{- \lambda_k x} \left(\prod_{i=1, i\neq k}^{n} e^{- \lambda_i x}\right) dx \\ &= \lambda_k \int_{0}^{\infty} e^{- \left(\lambda_1 + \dotsb +\lambda_n\right) x} dx \\ &= \frac{\lambda_k}{\lambda_1 + \dotsb + \lambda_n}. \end{align} Note that \max\{X_1, \dotsc, X_n\} is not exponentially distributed, if
n \ge 2, since its CDF \prod_{i=1}^n \left(1 - e^{-\lambda_i x}\right) is not of the form 1 - e^{-\lambda x}.
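Both facts, the rate of the minimum and the categorical distribution of the minimising index, can be verified by simulation (the three rates below are arbitrary illustrative choices):

```python
import random

lams = [0.5, 1.0, 2.5]  # arbitrary illustrative rate parameters
total_rate = sum(lams)  # the minimum should be Exp(λ₁ + λ₂ + λ₃)

random.seed(2)
trials = 300_000
mins, argmins = [], []
for _ in range(trials):
    draws = [random.expovariate(lam) for lam in lams]
    m = min(draws)
    mins.append(m)
    argmins.append(draws.index(m))

# Mean of the minimum ≈ 1/(λ₁ + λ₂ + λ₃)
print(sum(mins) / trials, 1 / total_rate)

# Pr(X_k is the minimum) ≈ λ_k / (λ₁ + λ₂ + λ₃)
for k, lam in enumerate(lams):
    print(argmins.count(k) / trials, lam / total_rate)
```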
Joint moments of i.i.d. exponential order statistics Let X_1, \dotsc, X_n be n
independent and identically distributed exponential random variables with rate parameter
λ. Let X_{(1)}, \dotsc, X_{(n)} denote the corresponding
order statistics. For i < j, the joint moment \operatorname E\left[X_{(i)} X_{(j)}\right] of the order statistics X_{(i)} and X_{(j)} is given by \begin{align} \operatorname E\left[X_{(i)} X_{(j)}\right] &= \operatorname E\left[X_{(i)}^2\right] + \operatorname E\left[X_{(i)}\right] \sum_{k=i}^{j-1}\frac{1}{(n - k)\lambda} \\ &= \sum_{k=0}^{i-1}\frac{1}{((n - k)\lambda)^2} + \left(\sum_{k=0}^{i-1}\frac{1}{(n - k)\lambda}\right)^2 + \sum_{k=0}^{i-1}\frac{1}{(n - k)\lambda} \sum_{k=i}^{j-1}\frac{1}{(n - k)\lambda}. \end{align} This can be seen by invoking the
law of total expectation and the memoryless property: \begin{align} \operatorname E\left[X_{(i)} X_{(j)}\right] &= \int_0^\infty \operatorname E\left[X_{(i)} X_{(j)} \mid X_{(i)}=x\right] f_{X_{(i)}}(x) \, dx \\ &= \int_{x=0}^\infty x \operatorname E\left[X_{(j)} \mid X_{(i)} = x\right] f_{X_{(i)}}(x) \, dx \\ &= \int_{x=0}^\infty x \left[ x + \sum_{k=i}^{j-1}\frac{1}{(n - k)\lambda} \right] f_{X_{(i)}}(x) \, dx &&\left(\text{by the memoryless property}\right) \\ &= \operatorname E\left[X_{(i)}^2\right] + \operatorname E\left[X_{(i)}\right] \sum_{k=i}^{j-1}\frac{1}{(n - k)\lambda}. \end{align} The first equation follows from the
law of total expectation. The second and third equations use the fact that, conditional on X_{(i)} = x, the remaining n - i variables all exceed x; by the memoryless property, X_{(j)} - x is then distributed as the (j - i)-th order statistic of n - i fresh \operatorname{Exp}(\lambda) variables, whose expectation is \sum_{k=i}^{j-1}\frac{1}{(n - k)\lambda} = \operatorname E\left[X_{(j)}\right] - \operatorname E\left[X_{(i)}\right].
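The joint-moment identity \operatorname E[X_{(i)}X_{(j)}] = \operatorname E[X_{(i)}^2] + \operatorname E[X_{(i)}]\left(\operatorname E[X_{(j)}] - \operatorname E[X_{(i)}]\right) for i < j can be checked by Monte Carlo; a sketch with illustrative choices of n, i, j, and λ:

```python
import random

# Monte Carlo check of E[X_(i) X_(j)] = E[X_(i)²] + E[X_(i)](E[X_(j)] - E[X_(i)])
# for exponential order statistics with i < j; all values below are illustrative.
lam, n, i, j = 1.0, 5, 2, 4  # rate, sample size, and 1-based order-statistic indices

def mean_os(r):
    """E[X_(r)] = Σ_{k=0}^{r-1} 1/((n-k)λ)"""
    return sum(1 / ((n - k) * lam) for k in range(r))

def var_os(r):
    """Var[X_(r)] = Σ_{k=0}^{r-1} 1/((n-k)λ)²"""
    return sum(1 / ((n - k) * lam) ** 2 for k in range(r))

ei, ej = mean_os(i), mean_os(j)
closed = var_os(i) + ei ** 2 + ei * (ej - ei)  # E[X_(i)²] + E[X_(i)](E[X_(j)] - E[X_(i)])

random.seed(3)
trials = 200_000
acc = 0.0
for _ in range(trials):
    xs = sorted(random.expovariate(lam) for _ in range(n))
    acc += xs[i - 1] * xs[j - 1]

print(closed, acc / trials)  # should agree to about two decimal places
```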
Sum of two independent exponential random variables The probability density function (PDF) of a sum of two independent random variables is the
convolution of their individual PDFs. If X_1 and X_2 are independent exponential random variables with respective rate parameters \lambda_1 and \lambda_2, then the probability density of Z=X_1+X_2 is given by \begin{align} f_Z(z) &= \int_{-\infty}^\infty f_{X_1}(x_1) f_{X_2}(z - x_1)\,dx_1\\ &= \int_0^z \lambda_1 e^{-\lambda_1 x_1} \lambda_2 e^{-\lambda_2(z - x_1)} \, dx_1 \\ &= \lambda_1 \lambda_2 e^{-\lambda_2 z} \int_0^z e^{(\lambda_2 - \lambda_1)x_1}\,dx_1 \\ &= \begin{cases} \dfrac{\lambda_1 \lambda_2}{\lambda_2-\lambda_1} \left(e^{-\lambda_1 z} - e^{-\lambda_2 z}\right) & \text{ if } \lambda_1 \neq \lambda_2 \\[4 pt] \lambda^2 z e^{-\lambda z} & \text{ if } \lambda_1 = \lambda_2 = \lambda. \end{cases} \end{align} The entropy of this distribution is available in closed form: assuming \lambda_1 > \lambda_2 (without loss of generality), then \begin{align} H(Z) &= 1 + \gamma + \ln \left( \frac{\lambda_1 - \lambda_2}{\lambda_1 \lambda_2} \right) + \psi \left( \frac{\lambda_1}{\lambda_1 - \lambda_2} \right) , \end{align} where \gamma is the
Euler-Mascheroni constant, and \psi(\cdot) is the
digamma function. In the case of equal rate parameters, the result is an
Erlang distribution with shape 2 and parameter \lambda, which in turn is a special case of
gamma distribution. The sum of n independent Exp(
λ) exponential random variables is Gamma(n,
λ) distributed.
==Related distributions==