==Density function==
The form of the density function of the Weibull distribution changes drastically with the value of k. For 0 < k < 1, the density function tends to ∞ as x approaches zero from above and is strictly decreasing. For k = 1, the density function tends to 1/λ as x approaches zero from above and is strictly decreasing. For k > 1, the density function tends to zero as x approaches zero from above, increases until its mode and decreases after it. The density function has infinite negative slope at x = 0 if 0 < k < 1, infinite positive slope at x = 0 if 1 < k < 2, and null slope at x = 0 if k > 2. For k = 1 the density has a finite negative slope at x = 0. For k = 2 the density has a finite positive slope at x = 0. As k goes to infinity, the Weibull distribution converges to a Dirac delta distribution centered at x = λ. Moreover, the skewness and coefficient of variation depend only on the shape parameter. A generalization of the Weibull distribution is the hyperbolastic distribution of type III.
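The shape regimes described above can be checked numerically. The following is a minimal sketch using the standard two-parameter density f(x; k, λ) = (k/λ)(x/λ)^(k−1) e^(−(x/λ)^k); the particular values of k and λ are arbitrary illustrations.

```python
import math

def weibull_pdf(x, k, lam):
    """Weibull density f(x; k, lam) = (k/lam) * (x/lam)**(k-1) * exp(-(x/lam)**k)."""
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

lam = 2.0
eps = 1e-6  # a point just above zero

# 0 < k < 1: the density blows up as x -> 0+
print(weibull_pdf(eps, 0.5, lam))   # very large

# k = 1 (exponential case): the density tends to 1/lam as x -> 0+
print(weibull_pdf(eps, 1.0, lam))   # close to 0.5

# k > 1: the density tends to 0 as x -> 0+ and peaks at the mode lam*((k-1)/k)**(1/k)
k = 2.5
mode = lam * ((k - 1) / k) ** (1 / k)
print(weibull_pdf(eps, k, lam))     # close to 0
print(weibull_pdf(mode, k, lam) > weibull_pdf(0.5 * mode, k, lam))  # True
```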
==Cumulative distribution function==
The cumulative distribution function for the Weibull distribution is
:F(x;k,\lambda) = 1 - e^{-(x/\lambda)^k}
for x ≥ 0, and F(x; k; λ) = 0 for x < 0.

If x = λ then F(x; k; λ) = 1 − e^{−1} ≈ 0.632 for all values of k. Vice versa: at F(x; k; λ) = 0.632 the value of x ≈ λ.

The quantile (inverse cumulative distribution) function for the Weibull distribution is
:Q(p;k,\lambda) = \lambda(-\ln(1-p))^{1/k}
for 0 ≤ p < 1.

The failure rate h (or hazard function) is given by
:h(x;k,\lambda) = {k \over \lambda} \left({x \over \lambda}\right)^{k-1}.

The mean time between failures (MTBF) is
:\text{MTBF}(k,\lambda) = \lambda\Gamma(1+1/k).
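These closed forms are easy to exercise directly. The sketch below, with arbitrary illustrative parameter values, checks that F(λ) = 1 − e^(−1) ≈ 0.632 regardless of k, that the quantile function inverts the CDF, and evaluates the MTBF.

```python
import math

def weibull_cdf(x, k, lam):
    """F(x; k, lam) = 1 - exp(-(x/lam)**k) for x >= 0, and 0 for x < 0."""
    return 1 - math.exp(-((x / lam) ** k)) if x >= 0 else 0.0

def weibull_quantile(p, k, lam):
    """Q(p; k, lam) = lam * (-ln(1 - p))**(1/k) for 0 <= p < 1."""
    return lam * (-math.log(1 - p)) ** (1 / k)

k, lam = 1.7, 3.0

# F(lam) = 1 - e**-1, independent of the shape parameter k
print(weibull_cdf(lam, k, lam))     # ~0.632

# The quantile function inverts the CDF
x = weibull_quantile(0.25, k, lam)
print(weibull_cdf(x, k, lam))       # 0.25

# MTBF(k, lam) = lam * Gamma(1 + 1/k)
print(lam * math.gamma(1 + 1 / k))
```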
==Moments==
The moment generating function of the logarithm of a Weibull distributed random variable is given by
:\operatorname E\left[e^{t\log X}\right] = \lambda^t\Gamma\left(\frac{t}{k}+1\right)
where \Gamma is the gamma function. Similarly, the characteristic function of log X is given by
:\operatorname E\left[e^{it\log X}\right] = \lambda^{it}\Gamma\left(\frac{it}{k}+1\right).
In particular, the nth raw moment of X is given by
:m_n = \lambda^n \Gamma\left(1+\frac{n}{k}\right).
The mean and variance of a Weibull random variable can be expressed as
:\operatorname{E}(X) = \lambda \Gamma\left(1+\frac{1}{k}\right)
and
:\operatorname{var}(X) = \lambda^2\left[\Gamma\left(1+\frac{2}{k}\right) - \left(\Gamma\left(1+\frac{1}{k}\right)\right)^2\right].
The skewness is given by
:\gamma_1=\frac{2\Gamma_1^3-3\Gamma_1\Gamma_2+\Gamma_3}{[\Gamma_2-\Gamma_1^2]^{3/2}}
where \Gamma_i=\Gamma(1+i/k), which may also be written as
:\gamma_1=\frac{\Gamma\left(1+\frac{3}{k}\right)\lambda^3-3\mu\sigma^2-\mu^3}{\sigma^3}
where the mean is denoted by \mu and the standard deviation is denoted by \sigma. The excess kurtosis is given by
:\gamma_2=\frac{-6\Gamma_1^4+12\Gamma_1^2\Gamma_2-3\Gamma_2^2-4\Gamma_1\Gamma_3+\Gamma_4}{[\Gamma_2-\Gamma_1^2]^2}
where \Gamma_i=\Gamma(1+i/k). The excess kurtosis may also be written as
:\gamma_2=\frac{\lambda^4\Gamma\left(1+\frac{4}{k}\right)-4\gamma_1\sigma^3\mu-6\mu^2\sigma^2-\mu^4}{\sigma^4}-3.
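The two expressions given for the skewness, and likewise the two for the excess kurtosis, are algebraically equivalent once μ and σ are substituted, and this can be verified numerically. A minimal sketch with arbitrary illustrative parameter values:

```python
import math

def G(i, k):
    """Gamma_i = Gamma(1 + i/k)."""
    return math.gamma(1 + i / k)

k, lam = 1.3, 2.0

mean = lam * G(1, k)
var = lam**2 * (G(2, k) - G(1, k) ** 2)
sigma = math.sqrt(var)

# Skewness, first via the Gamma_i form ...
skew_gamma = (2 * G(1, k)**3 - 3 * G(1, k) * G(2, k) + G(3, k)) \
    / (G(2, k) - G(1, k)**2) ** 1.5
# ... and again via the mean/standard-deviation form; the two must agree
skew_musigma = (G(3, k) * lam**3 - 3 * mean * var - mean**3) / sigma**3
print(skew_gamma, skew_musigma)

# Same cross-check for the excess kurtosis
kurt_gamma = (-6 * G(1, k)**4 + 12 * G(1, k)**2 * G(2, k) - 3 * G(2, k)**2
              - 4 * G(1, k) * G(3, k) + G(4, k)) / (G(2, k) - G(1, k)**2) ** 2
kurt_musigma = (lam**4 * G(4, k) - 4 * skew_gamma * sigma**3 * mean
                - 6 * mean**2 * var - mean**4) / var**2 - 3
print(kurt_gamma, kurt_musigma)
```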
==Moment generating function==
A variety of expressions are available for the moment generating function of X itself. As a power series, since the raw moments are already known, one has
:\operatorname E\left[e^{tX}\right] = \sum_{n=0}^\infty \frac{t^n\lambda^n}{n!} \Gamma\left(1+\frac{n}{k}\right).
Alternatively, one can attempt to deal directly with the integral
:\operatorname E\left[e^{tX}\right] = \int_0^\infty e^{tx} \frac k \lambda \left(\frac{x}{\lambda}\right)^{k-1}e^{-(x/\lambda)^k}\,dx.
If the parameter k is assumed to be a rational number, expressed as k = p/q where p and q are integers, then this integral can be evaluated analytically. With t replaced by −t, one finds
:\operatorname E\left[e^{-tX}\right] = \frac1{\lambda^k\, t^k} \, \frac{p^k \, \sqrt{q/p}}{(\sqrt{2 \pi})^{q+p-2}} \, G_{p,q}^{\,q,p} \!\left( \left. \begin{matrix} \frac{1-k}{p}, \frac{2-k}{p}, \dots, \frac{p-k}{p} \\ \frac{0}{q}, \frac{1}{q}, \dots, \frac{q-1}{q} \end{matrix} \; \right| \, \frac{p^p}{\left( q \, \lambda^k \, t^k \right)^q} \right)
where G is the Meijer G-function. The characteristic function has also been obtained by Muraleedharan et al. (2007).
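For k > 1 the power series converges for all t, so it can be compared against a direct numerical evaluation of the integral. A sketch using k = 2 and λ = 1 as arbitrary illustrative values (the truncation at 60 terms and the integration cutoff at x = 12 are ad hoc choices that suffice here):

```python
import math

k, lam, t = 2.0, 1.0, 0.5

# Power-series form: sum_n t**n * lam**n * Gamma(1 + n/k) / n!
series = sum(t**n * lam**n * math.gamma(1 + n / k) / math.factorial(n)
             for n in range(60))

# Direct trapezoidal integration of e**(t*x) * f(x; k, lam) over [0, 12];
# the integrand vanishes at x = 0 (for k > 1) and is negligible beyond 12
def integrand(x):
    return math.exp(t * x) * (k / lam) * (x / lam) ** (k - 1) \
        * math.exp(-((x / lam) ** k))

N, hi = 100_000, 12.0
h = hi / N
integral = h * (sum(integrand(i * h) for i in range(1, N)) + 0.5 * integrand(hi))

print(series, integral)   # the two values agree to several decimals
```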
==Minima==
Let X_1, X_2, \ldots, X_n be independent and identically distributed Weibull random variables with scale parameter \lambda and shape parameter k. If the minimum of these n random variables is Z = \min(X_1, X_2, \ldots, X_n), then the cumulative distribution function of Z is given by
:F(z) = 1 - e^{-n(z/\lambda)^k}.
That is, Z is also Weibull distributed, with scale parameter n^{-1/k} \lambda and shape parameter k.
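The closure of the Weibull family under minima is easy to check by simulation. A sketch with arbitrary illustrative parameters, comparing the empirical CDF of Z against 1 − e^(−n(z/λ)^k) at a few points:

```python
import numpy as np

rng = np.random.default_rng(0)
k, lam, n = 1.5, 2.0, 5

# Many realisations of Z = min(X_1, ..., X_n); rng.weibull gives scale-1 draws
z = (lam * rng.weibull(k, size=(200_000, n))).min(axis=1)

# The empirical CDF of Z should match F(z) = 1 - exp(-n * (z/lam)**k),
# i.e. a Weibull with scale n**(-1/k) * lam and shape k
xs = np.array([0.3, 0.7, 1.2])
empirical = np.array([(z <= x).mean() for x in xs])
theoretical = 1 - np.exp(-n * (xs / lam) ** k)
print(np.round(empirical, 3), np.round(theoretical, 3))
```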
==Reparametrization tricks==
Fix some \alpha > 0. Let (\pi_1, \ldots, \pi_n) be nonnegative, and not all zero, and let g_1, \ldots, g_n be independent samples of \text{Weibull}(1, \alpha^{-1}). Then
• \arg\min_i (g_i \pi_i^{-\alpha}) \sim \text{Categorical}\left(\frac{\pi_j}{\sum_i \pi_i}\right)_j
• \min_i (g_i \pi_i^{-\alpha}) \sim \text{Weibull}\left( \left(\sum_i \pi_i \right)^{-\alpha}, \alpha^{-1}\right).
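Both claims can be verified by simulation. In the sketch below (arbitrary illustrative α and weights), Weibull(λ, k) is read as scale λ and shape k, so the g_i have scale 1 and shape 1/α; the argmin frequencies are compared against the normalised weights, and the mean of the minimum against λΓ(1 + 1/k) for the stated parameters.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)
alpha = 0.7                             # arbitrary alpha > 0
pi = np.array([1.0, 2.0, 3.0, 4.0])     # nonnegative weights, not all zero
n_trials = 200_000

# g_i ~ Weibull(scale 1, shape 1/alpha), independent across i and trials
g = rng.weibull(1 / alpha, size=(n_trials, len(pi)))
vals = g * pi ** (-alpha)

# argmin_i g_i * pi_i**(-alpha) should be Categorical(pi / pi.sum())
freq = np.bincount(vals.argmin(axis=1), minlength=len(pi)) / n_trials
print(np.round(freq, 3))                # close to [0.1, 0.2, 0.3, 0.4]

# min_i g_i * pi_i**(-alpha) should be Weibull(scale pi.sum()**(-alpha),
# shape 1/alpha), whose mean is scale * Gamma(1 + alpha)
emp_mean = vals.min(axis=1).mean()
theo_mean = pi.sum() ** (-alpha) * gamma(1 + alpha)
print(emp_mean, theo_mean)
```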
==Shannon entropy==
The information entropy is given by
:H(\lambda,k) = \gamma\left(1 - \frac{1}{k}\right) + \ln\left(\frac{\lambda}{k}\right) + 1
where \gamma is the Euler–Mascheroni constant. The Weibull distribution is the maximum entropy distribution for a non-negative real random variate with a fixed expected value of x^k equal to \lambda^k and a fixed expected value of \ln(x^k) equal to \ln(\lambda^k) - \gamma.
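The closed-form entropy can be checked against a Monte Carlo estimate of −E[ln f(X)]. A sketch with arbitrary illustrative parameters:

```python
import numpy as np

k, lam = 1.5, 2.0

# Closed form: H = gamma*(1 - 1/k) + ln(lam/k) + 1
H_closed = np.euler_gamma * (1 - 1 / k) + np.log(lam / k) + 1

# Monte Carlo estimate of -E[ln f(X)] using the Weibull log-density
rng = np.random.default_rng(2)
x = lam * rng.weibull(k, size=500_000)
log_pdf = np.log(k / lam) + (k - 1) * np.log(x / lam) - (x / lam) ** k
H_mc = -log_pdf.mean()

print(H_closed, H_mc)   # agree to a couple of decimals
```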
==Kullback–Leibler divergence==
The Kullback–Leibler divergence between two Weibull distributions is given by
:D_\text{KL}( \mathrm{Weib}_1 \parallel \mathrm{Weib}_2) = \log \frac{k_1}{\lambda_1^{k_1}} - \log \frac{k_2}{\lambda_2^{k_2}} + (k_1 - k_2) \left[ \log \lambda_1 - \frac{\gamma}{k_1} \right] + \left(\frac{\lambda_1}{\lambda_2}\right)^{k_2} \Gamma \left(\frac{k_2}{k_1} + 1 \right) - 1.

==Parameter estimation==