Mean and variance The mean of the gamma distribution is given by the product of its shape and scale parameters: \mu = \alpha\theta = \alpha/\beta The variance is: \sigma^2 = \alpha \theta^2 = \alpha/\beta^2 The square root of the inverse shape parameter gives the coefficient of variation: \sigma/\mu = \alpha^{-1/2} = 1/\sqrt{\alpha}
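As a minimal numerical check of these formulas, assuming SciPy's shape–scale parameterization (shape a = \alpha, scale = \theta):

```python
# Check the mean, variance, and coefficient of variation of Gamma(alpha, theta).
import numpy as np
from scipy import stats

alpha, theta = 3.0, 2.0
mean, var = stats.gamma(a=alpha, scale=theta).stats(moments="mv")

assert np.isclose(mean, alpha * theta)                      # mu = alpha * theta
assert np.isclose(var, alpha * theta**2)                    # sigma^2 = alpha * theta^2
assert np.isclose(np.sqrt(var) / mean, 1 / np.sqrt(alpha))  # CV = 1 / sqrt(alpha)
```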
Skewness The skewness of the gamma distribution depends only on its shape parameter, \alpha, and is equal to 2/\sqrt{\alpha}.
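A similar minimal sketch illustrates that the skewness is indeed unaffected by the scale parameter:

```python
# The skewness of Gamma(alpha, theta) is 2/sqrt(alpha), independent of theta.
import numpy as np
from scipy import stats

alpha = 9.0
for theta in (0.5, 1.0, 4.0):
    skew = stats.gamma(a=alpha, scale=theta).stats(moments="s")
    assert np.isclose(skew, 2 / np.sqrt(alpha))
```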
Higher moments The r-th raw moment is given by: \mathrm{E}[X^r] = \theta^r \frac{\Gamma(\alpha+r)}{\Gamma(\alpha)} = \theta^r \alpha^{\overline{r}}, with \alpha^{\overline{r}} the rising factorial.
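A minimal sketch checking this formula against SciPy's numerically computed raw moments (using gammaln for a numerically stable gamma-function ratio):

```python
# Check E[X^r] = theta^r * Gamma(alpha + r) / Gamma(alpha) for small integer r.
import numpy as np
from scipy import stats
from scipy.special import gammaln

alpha, theta = 2.5, 1.5
dist = stats.gamma(a=alpha, scale=theta)

for r in range(1, 5):
    formula = theta**r * np.exp(gammaln(alpha + r) - gammaln(alpha))
    assert np.isclose(dist.moment(r), formula)
```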
Cumulants The r-th cumulant is given by: \kappa_r = \theta^r \alpha\,(r-1)! = \theta^r \alpha\,\Gamma(r).
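As a sketch, the first three cumulants from this formula recover the mean, the variance, and the third central moment (and hence the skewness 2/\sqrt{\alpha} above):

```python
# kappa_r = theta^r * alpha * (r-1)!; the first three cumulants equal the
# mean, the variance, and the third central moment, respectively.
import numpy as np
from scipy import stats
from scipy.special import factorial

alpha, theta = 4.0, 0.5
kappa = [theta**r * alpha * factorial(r - 1) for r in (1, 2, 3)]

mean, var, skew = stats.gamma(a=alpha, scale=theta).stats(moments="mvs")
assert np.isclose(kappa[0], mean)
assert np.isclose(kappa[1], var)
assert np.isclose(kappa[2], skew * var**1.5)   # third central moment
```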
Median approximations and bounds Unlike the mode and the mean, which have readily calculable formulas based on the parameters, the median does not have a closed-form equation. The median for this distribution is the value \nu such that \frac{1}{\Gamma(\alpha) \theta^\alpha} \int_0^{\nu} x^{\alpha - 1} e^{-x/\theta} dx = \frac{1}{2}. A rigorous treatment of the problem of determining an asymptotic expansion and bounds for the median of the gamma distribution was handled first by Chen and Rubin, who proved that (for \theta = 1) \alpha - \tfrac{1}{3} < \nu(\alpha) < \alpha, where \mu(\alpha) = \alpha is the mean and \nu(\alpha) is the median of the \text{Gamma}(\alpha,1) distribution. For other values of the scale parameter, the mean scales to \mu = \alpha\theta, and the median bounds and approximations would be similarly scaled by \theta. K. P. Choi found the first five terms in a
Laurent series asymptotic approximation of the median by comparing the median to
Ramanujan's \theta function. Berg and Pedersen found more terms: \begin{align} \nu(\alpha) = \alpha & - \frac{1}{3} + \frac{8}{405} \alpha^{-1} + \frac{184}{25\,515} \alpha^{-2} + \frac{2248}{3\,444\,525} \alpha^{-3} \\[1ex] & - \frac{19\,006\,408}{15\,345\,358\,875} \alpha^{-4} - \mathcal{O}{\left(\alpha^{-5}\right)} + \cdots \end{align} [Figure: upper (solid) and lower (dashed) bounds to the median of a gamma distribution, and the gaps between them. The green, yellow, and cyan regions represent the gap before the Lyon 2021 paper; the green and yellow narrow that gap with the lower bounds that Lyon proved, and Lyon's bounds proved in 2023 further narrow the yellow. Mostly within the yellow, closed-form rational-function-interpolated conjectured bounds are plotted along with the numerically calculated median (dotted). Tighter interpolated bounds exist but are not plotted, as they would not be resolved at this scale.] Partial sums of these series are good approximations for high enough \alpha; they are not plotted in the figure, which is focused on the low-\alpha region that is less well approximated. Berg and Pedersen also proved many properties of the median, showing that it is a convex function of \alpha, and that the asymptotic behavior near \alpha = 0 is \nu(\alpha) \approx e^{-\gamma}2^{-1/\alpha} (where \gamma is the Euler–Mascheroni constant), and that for all \alpha > 0 the median is bounded below by \alpha\,2^{-1/\alpha}. Lyon (2021) derived an upper bound relying on the Berg and Pedersen result that the slope of \nu(\alpha) is everywhere less than 1: \nu(\alpha) \le \alpha - 1 + \log 2 for \alpha \ge 1 (with equality at \alpha = 1), which can be extended to a bound for all \alpha > 0 by taking the max with the chord shown in the figure, since the median was proved convex. In particular, he proposed these closed-form bounds, which he proved in 2023: \nu_{L\infty}(\alpha) = 2^{-1/\alpha} \left(\log 2 - \tfrac{1}{3} + \alpha\right) is a lower bound, asymptotically tight as \alpha \to \infty, and \nu_U(\alpha) = 2^{-1/\alpha}(e^{-\gamma} + \alpha) is an upper bound, asymptotically tight as \alpha \to 0. Lyon also showed (informally in 2021, rigorously in 2023) two other lower bounds that are not closed-form expressions, including this one involving the gamma function, based on solving the integral expression substituting 1 for e^{-x}: \nu(\alpha) > \left( \frac{2}{\Gamma(\alpha+1)} \right)^{-1/\alpha} (approaching equality as \alpha \to 0), and the tangent line at \alpha = 1, where the derivative was found to be \nu^\prime(1) \approx 0.9680448: \nu(\alpha) \ge \nu(1) + (\alpha-1) \nu^\prime(1) (with equality at \alpha = 1), that is, \nu(\alpha) \ge \log 2 + (\alpha-1) \left[\gamma - 2 \operatorname{Ei}(-\log 2) + \log \log 2\right], where \operatorname{Ei} is the exponential integral.
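As a numerical illustration, a minimal sketch (for \theta = 1) checks Lyon's closed-form bounds against the median computed as SciPy's quantile function at 1/2:

```python
# Check nu_L-infinity(alpha) < median < nu_U(alpha) for Gamma(alpha, 1).
import numpy as np
from scipy import stats

for alpha in (0.5, 1.0, 2.0, 10.0):
    nu = stats.gamma(a=alpha).ppf(0.5)                         # numerical median
    lower = 2**(-1/alpha) * (np.log(2) - 1/3 + alpha)          # Lyon lower bound
    upper = 2**(-1/alpha) * (np.exp(-np.euler_gamma) + alpha)  # Lyon upper bound
    assert lower < nu < upper
```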
Summation If X_i \sim \mathrm{Gamma}(\alpha_i, \theta) for i = 1, \ldots, n are independent random variables (all sharing the same scale parameter \theta), then \sum_{i=1}^n X_i \sim \mathrm{Gamma}\left(\textstyle\sum_{i=1}^n \alpha_i, \theta\right). For sums of independent gamma random variables with different scale parameters, no such simple closed form exists; see, e.g., Moschopoulos. The gamma distribution exhibits
infinite divisibility.
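A minimal simulation sketch of the summation property:

```python
# Sum of independent Gamma(alpha_i, theta) variates with a common scale
# is Gamma(sum of alpha_i, theta); check with a Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta, alphas = 2.0, [0.5, 1.5, 3.0]

total = sum(rng.gamma(shape=a, scale=theta, size=100_000) for a in alphas)
stat, pvalue = stats.kstest(total, stats.gamma(a=sum(alphas), scale=theta).cdf)
print(f"KS statistic {stat:.4f}, p-value {pvalue:.3f}")  # large p-value: consistent
```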
Scaling If X \sim \mathrm{Gamma}(\alpha, \theta), then, for any c > 0, cX \sim \mathrm{Gamma}(\alpha, c\,\theta), as can be shown using moment generating functions; equivalently, if X \sim \mathrm{Gamma}\left( \alpha,\beta \right) (shape–rate parameterization), then cX \sim \mathrm{Gamma}\left( \alpha, \frac{\beta}{c} \right). Indeed, we know that if X is an exponential r.v. with rate \lambda, then cX is an exponential r.v. with rate \lambda/c; the same holds for gamma variates (as can be checked using the moment-generating function): multiplication by a positive constant divides the rate (or, equivalently, multiplies the scale).
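A minimal sketch of the scaling property, comparing the CDF of cX with that of \mathrm{Gamma}(\alpha, c\,\theta):

```python
# If X ~ Gamma(alpha, theta), then cX ~ Gamma(alpha, c * theta):
# P(cX <= x) = P(X <= x/c) must match the CDF of Gamma(alpha, c * theta).
import numpy as np
from scipy import stats

alpha, theta, c = 2.0, 3.0, 5.0
x = np.linspace(0.1, 100.0, 50)

lhs = stats.gamma(a=alpha, scale=theta).cdf(x / c)
rhs = stats.gamma(a=alpha, scale=c * theta).cdf(x)
assert np.allclose(lhs, rhs)
```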
Exponential family The gamma distribution is a two-parameter exponential family with natural parameters \alpha - 1 and -1/\theta (equivalently, \alpha - 1 and -\beta), and natural statistics X and \ln X. If the shape parameter \alpha is held fixed, the resulting one-parameter family of distributions is a natural exponential family.
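To make the identification explicit, the density in the shape–rate parameterization can be rearranged into canonical exponential-family form: \begin{align} f(x;\alpha,\beta) &= \frac{\beta^\alpha}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\beta x} \\ &= \exp\!\Big( (\alpha-1)\ln x + (-\beta)\,x - \big(\ln \Gamma(\alpha) - \alpha \ln \beta\big) \Big), \end{align} which exhibits the natural parameters (\alpha - 1, -\beta), the natural statistics (\ln x, x), and the log-partition function \ln \Gamma(\alpha) - \alpha \ln \beta.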
Logarithmic expectation and variance One can show that \operatorname{E}[\ln X] = \psi(\alpha) - \ln \beta or equivalently, \operatorname{E}[\ln X] = \psi(\alpha) + \ln \theta where \psi is the digamma function. Likewise, \operatorname{var}[\ln X] = \psi^{(1)}(\alpha) where \psi^{(1)} is the trigamma function. This can be derived using the exponential family formula for the moment generating function of the sufficient statistic, because one of the sufficient statistics of the gamma distribution is \ln x.
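A minimal sketch checking both identities by numerical integration against the digamma and trigamma values:

```python
# E[ln X] = psi(alpha) + ln(theta) and var[ln X] = psi^(1)(alpha).
import numpy as np
from scipy import stats
from scipy.special import digamma, polygamma

alpha, theta = 3.0, 2.0
dist = stats.gamma(a=alpha, scale=theta)

mean_log = dist.expect(np.log)                              # numerical E[ln X]
var_log = dist.expect(lambda x: (np.log(x) - mean_log)**2)  # numerical var[ln X]
assert np.isclose(mean_log, digamma(alpha) + np.log(theta))
assert np.isclose(var_log, polygamma(1, alpha))             # trigamma function
```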
Information entropy The
information entropy is \begin{align} \operatorname{H}(X) & = \operatorname{E}[-\ln p(X)] \\[4pt] & = \operatorname{E}[-\alpha \ln \beta + \ln \Gamma(\alpha) - (\alpha-1)\ln X + \beta X] \\[4pt] & = \alpha - \ln \beta + \ln \Gamma(\alpha) + (1-\alpha)\psi(\alpha). \end{align} In the \alpha, \theta parameterization, the
information entropy is given by \operatorname{H}(X) =\alpha + \ln \theta + \ln \Gamma(\alpha) + (1-\alpha)\psi(\alpha).
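A short numerical check of this closed form against SciPy's entropy (SciPy uses the \alpha, \theta parameterization):

```python
# H(X) = alpha + ln(theta) + ln(Gamma(alpha)) + (1 - alpha) * psi(alpha).
import numpy as np
from scipy import stats
from scipy.special import gammaln, digamma

alpha, theta = 2.5, 1.3
closed_form = alpha + np.log(theta) + gammaln(alpha) + (1 - alpha) * digamma(alpha)
assert np.isclose(stats.gamma(a=alpha, scale=theta).entropy(), closed_form)
```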
Kullback–Leibler divergence The
Kullback–Leibler divergence (KL-divergence) of \mathrm{Gamma}(\alpha_p, \beta_p) ("true" distribution) from \mathrm{Gamma}(\alpha_q, \beta_q) ("approximating" distribution) is given by \begin{align} D_{\mathrm{KL}}(\alpha_p,\beta_p; \alpha_q, \beta_q) = {} & (\alpha_p-\alpha_q) \psi(\alpha_p) - \log\frac{\Gamma(\alpha_p)}{\Gamma(\alpha_q)} \\ & {} + \alpha_q \log\frac{\beta_p}{\beta_q} + \alpha_p\left(\frac{\beta_q}{\beta_p} - 1\right). \end{align} Written using the \alpha, \theta parameterization, the KL-divergence of \mathrm{Gamma}(\alpha_p, \theta_p) from \mathrm{Gamma}(\alpha_q, \theta_q) is given by \begin{align} D_{\mathrm{KL}}(\alpha_p,\theta_p; \alpha_q, \theta_q) = {} & (\alpha_p-\alpha_q)\psi(\alpha_p) - \log\frac{\Gamma(\alpha_p)}{\Gamma(\alpha_q)} \\ & {} + \alpha_q \log\frac{\theta_q}{\theta_p} + \alpha_p \left(\frac{\theta_p}{\theta_q} - 1 \right). \end{align}
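A minimal sketch comparing the closed form (shape–rate version) with direct numerical integration of p \log(p/q):

```python
# KL divergence of Gamma(ap, bp) from Gamma(aq, bq), shape-rate parameterization.
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.special import gammaln, digamma

ap, bp, aq, bq = 3.0, 2.0, 2.0, 1.0
p = stats.gamma(a=ap, scale=1 / bp)
q = stats.gamma(a=aq, scale=1 / bq)

closed_form = ((ap - aq) * digamma(ap) - gammaln(ap) + gammaln(aq)
               + aq * np.log(bp / bq) + ap * (bq / bp - 1))
numeric, _ = quad(lambda x: p.pdf(x) * np.log(p.pdf(x) / q.pdf(x)), 0, np.inf)
assert np.isclose(closed_form, numeric)
```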
Laplace transform The Laplace transform of the gamma PDF, which equals the moment-generating function of the gamma distribution evaluated at a negated argument, is F(s) = \operatorname E\left[ e^{-sX} \right] = \frac{1}{\left(1 + \theta s\right)^\alpha} = \left( \frac{\beta}{\beta + s} \right)^\alpha (where X is a random variable with that distribution).
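A minimal sketch checking the transform by numerical integration:

```python
# E[exp(-s X)] for X ~ Gamma(alpha, theta) equals (1 + theta * s)^(-alpha).
import numpy as np
from scipy import stats
from scipy.integrate import quad

alpha, theta, s = 2.0, 1.5, 0.7
pdf = stats.gamma(a=alpha, scale=theta).pdf
numeric, _ = quad(lambda x: np.exp(-s * x) * pdf(x), 0, np.inf)
assert np.isclose(numeric, (1 + theta * s) ** (-alpha))
```

==Related distributions==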