The Type IV generalized logistic, or logistic-beta, distribution has the following properties.

=== Symmetry ===

If x\sim B_\sigma(\alpha,\beta), then -x\sim B_\sigma(\beta,\alpha).
=== Normal variance-mean mixture representation ===

The logistic-beta distribution admits the following normal variance-mean mixture representation:
: f(x;\alpha,\beta) = \frac{1}{B(\alpha,\beta)}\frac{e^{-\beta x}}{(1+e^{-x})^{\alpha+\beta}} = \int_0^\infty N(x;\, \tfrac{1}{2}\lambda(\alpha-\beta),\, \lambda)\, p_{\text{Polya}}(\lambda;\alpha,\beta)\, d\lambda
where N(x;\mu,\lambda) is a normal density with mean \mu and variance \lambda, and p_{\text{Polya}}(\lambda;\alpha,\beta) is the density of the Polya distribution with parameters \alpha,\beta>0, defined by
: \lambda \stackrel{d}{=} \sum_{k=0}^\infty \frac{2\epsilon_k}{(k+\alpha)(k+\beta)}, \qquad \epsilon_k \stackrel{iid}{\sim} \text{Exp}(1).
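For illustration, the mixture representation can be sampled approximately as follows. This is a minimal sketch, not a production sampler: the truncation level k_max is an arbitrary choice, and the function names are hypothetical.

```python
import math
import random

def sample_polya(a, b, k_max=500):
    # lambda = sum_k 2*Exp(1)/((k+a)(k+b)), truncated after k_max terms.
    # The omitted tail contributes roughly 2/k_max to E[lambda], so
    # increase k_max for higher accuracy.
    return sum(2.0 * random.expovariate(1.0) / ((k + a) * (k + b))
               for k in range(k_max))

def sample_logistic_beta_nvm(a, b):
    # Draw the mixing variance lambda, then x | lambda ~ N(lambda*(a-b)/2, lambda).
    lam = sample_polya(a, b)
    return random.gauss(0.5 * lam * (a - b), math.sqrt(lam))
```

As a sanity check, the sample mean should approach \psi(\alpha)-\psi(\beta); for \alpha=3, \beta=1 this equals \psi(3)-\psi(1)=3/2.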
=== Mean and variance ===

By using the logarithmic expectations of the gamma distribution, the mean and variance can be derived as:
: \begin{align} \text{E}[x] &= \psi(\alpha) - \psi(\beta) \\ \text{var}[x] &= \psi'(\alpha) + \psi'(\beta) \end{align}
where \psi is the
digamma function, while \psi'=\psi^{(1)} is its first derivative, also known as the
trigamma function, or the first
polygamma function. Since \psi is
strictly increasing, the sign of the mean is the same as the sign of \alpha-\beta. Since \psi' is strictly decreasing, the shape parameters can also be interpreted as concentration parameters. Indeed, as shown below, the left and right tails respectively become thinner as \alpha or \beta are increased. The two terms of the variance represent the contributions to the variance of the left and right parts of the distribution.
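These formulas can be checked by numerical integration against the density. A minimal Python sketch, in which the integration limits and step count are ad-hoc choices:

```python
import math

def log_pdf(x, a, b):
    # log f(x; a, b) = -b*x - (a+b)*log(1 + exp(-x)) - log B(a, b),
    # with a numerically stable evaluation of log(1 + exp(-x))
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    softplus = max(-x, 0.0) + math.log1p(math.exp(-abs(x)))
    return -b * x - (a + b) * softplus - log_beta

def moment(a, b, power, lo=-60.0, hi=60.0, n=12000):
    # Simpson's rule for the integral of x^power * f(x; a, b) over [lo, hi]
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * x ** power * math.exp(log_pdf(x, a, b))
    return total * h / 3.0

a, b = 2.0, 3.0
mean = moment(a, b, 1)             # should match psi(2) - psi(3) = -1/2
var = moment(a, b, 2) - mean ** 2  # should match psi'(2) + psi'(3) = pi^2/3 - 9/4
```

For \alpha=2, \beta=3 the digamma recurrence gives \psi(2)-\psi(3)=-1/2 and \psi'(2)+\psi'(3)=\pi^2/3-9/4 exactly, so the integrals can be compared against closed forms.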
=== Cumulants and skewness ===

The
cumulant generating function is K(t)=\ln M(t), where the moment generating function M(t) is given
above. The
cumulants, \kappa_n, are the n-th derivatives of K(t), evaluated at t=0: : \kappa_n = K^{(n)}(0) = \psi^{(n-1)}(\alpha) + (-1)^{n} \psi^{(n-1)}(\beta) where \psi^{(0)}=\psi and \psi^{(n-1)} are the digamma and polygamma functions. In agreement with the derivation above, the first cumulant, \kappa_1, is the mean and the second, \kappa_2, is the variance. The third cumulant, \kappa_3, is the third central moment E[(x-E[x])^3], which when scaled by the third power of the standard deviation gives the
skewness: : \text{skew}[x] = \frac{\psi^{(2)}(\alpha) - \psi^{(2)}(\beta)}{\sqrt{\text{var}[x]}^3} The sign (and therefore the
handedness) of the skewness is the same as the sign of \alpha-\beta.
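For instance, for \alpha=2, \beta=1 the cumulants follow from the polygamma recurrence \psi^{(n)}(x+1)=\psi^{(n)}(x)+(-1)^n n!/x^{n+1} and the special values \psi'(1)=\pi^2/6, \psi''(1)=-2\zeta(3). A small worked sketch, with arbitrarily chosen parameter values:

```python
import math

# Special values at x = 1: psi'(1) = pi^2/6 and psi''(1) = -2*zeta(3)
trigamma_1 = math.pi ** 2 / 6
tetragamma_1 = -2.4041138063191885   # -2 * zeta(3)

# Recurrence psi^{(n)}(x+1) = psi^{(n)}(x) + (-1)^n * n! / x^{n+1}, at x = 1
trigamma_2 = trigamma_1 - 1.0        # psi'(2)
tetragamma_2 = tetragamma_1 + 2.0    # psi''(2)

a, b = 2.0, 1.0                      # a > b, so the skewness should be positive
var = trigamma_2 + trigamma_1        # kappa_2 = psi'(a) + psi'(b)
kappa3 = tetragamma_2 - tetragamma_1 # kappa_3 = psi''(a) - psi''(b) = 2 exactly
skew = kappa3 / var ** 1.5
```

Here kappa3 = 2 and var = \pi^2/3 - 1, so the skewness is positive, in agreement with the sign of \alpha-\beta.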
=== Mode ===

The mode (pdf maximum) can be derived by finding x where the log-pdf derivative is zero:
: \frac{d}{dx}\ln f(x;\alpha,\beta) = \alpha\sigma(-x) - \beta\sigma(x) = 0
This simplifies to \alpha/\beta=e^x, so that the mode is located at:
: x_{\text{mode}} = \ln\frac{\alpha}{\beta}

=== Sufficient statistics and maximum likelihood ===

Since \ln f(x;\alpha,\beta) = \alpha\log\sigma(x) + \beta\log\sigma(-x) - \log B(\alpha,\beta), the distribution forms an exponential family with sufficient statistics \log\sigma(x) and \log\sigma(-x). Their expected values are obtained by differentiating the log-normalizer:
: \begin{align} E[\log\sigma(x)] &= \frac{\partial\log B(\alpha,\beta)}{\partial\alpha} = \psi(\alpha) - \psi(\alpha+\beta) \\ E[\log\sigma(-x)] &= \frac{\partial\log B(\alpha,\beta)}{\partial\beta} = \psi(\beta) - \psi(\alpha+\beta) \\ \end{align}
Given a data set x_1,\ldots,x_n assumed to have been generated
IID from B_\sigma(\alpha,\beta), the
maximum-likelihood parameter estimate is: : \begin{align} \hat\alpha,\hat\beta = \arg\max_{\alpha,\beta} &\;\frac1n\sum_{i=1}^n \log f(x_i;\alpha,\beta) \\ =\arg\max_{\alpha,\beta} &\;\alpha\Bigl(\frac1n\sum_i\log\sigma(x_i)\Bigr) + \beta\Bigl(\frac1n\sum_i\log\sigma(-x_i)\Bigr) -\log B(\alpha,\beta)\\ =\arg\max_{\alpha,\beta}&\;\alpha\,\overline{\log\sigma(x)} + \beta\,\overline{\log\sigma(-x)} -\log B(\alpha,\beta) \end{align} where the overlines denote the averages of the sufficient statistics. The maximum-likelihood estimate depends on the data only via these average statistics. Indeed, at the maximum-likelihood estimate the expected values and averages agree: : \begin{align} \psi(\hat\alpha) - \psi(\hat\alpha+\hat\beta) &= \overline{\log\sigma(x)} \\ \psi(\hat\beta) - \psi(\hat\alpha+\hat\beta) &= \overline{\log\sigma(-x)} \\ \end{align} which is also where the partial derivatives of the above maximand vanish.
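The two stationarity equations can be solved numerically. The sketch below uses Newton's method with a hypothetical fit_logistic_beta helper; the digamma and trigamma implementations use a standard recurrence-plus-asymptotic-series approach, and the damping factor is an arbitrary safeguard.

```python
import math

def digamma(x):
    # psi(x): recur upward until x >= 10, then use the asymptotic series
    result = 0.0
    while x < 10.0:
        result -= 1.0 / x
        x += 1.0
    return result + math.log(x) - 0.5 / x - 1.0 / (12 * x ** 2) + 1.0 / (120 * x ** 4)

def trigamma(x):
    # psi'(x): recur upward until x >= 10, then use the asymptotic series
    result = 0.0
    while x < 10.0:
        result += 1.0 / x ** 2
        x += 1.0
    return result + 1.0 / x + 0.5 / x ** 2 + 1.0 / (6 * x ** 3) - 1.0 / (30 * x ** 5)

def fit_logistic_beta(s1, s2, a=1.0, b=1.0, iters=100):
    # Newton's method on the score equations
    #   psi(a) - psi(a+b) = s1,  psi(b) - psi(a+b) = s2,
    # where s1, s2 are the averaged sufficient statistics
    # mean(log sigma(x_i)) and mean(log sigma(-x_i)).
    for _ in range(iters):
        g1 = digamma(a) - digamma(a + b) - s1
        g2 = digamma(b) - digamma(a + b) - s2
        t = trigamma(a + b)
        j11, j22, j12 = trigamma(a) - t, trigamma(b) - t, -t
        det = j11 * j22 - j12 * j12
        da = (g1 * j22 - g2 * j12) / det
        db = (g2 * j11 - g1 * j12) / det
        # damped update keeping both parameters positive
        a = max(a - da, 0.5 * a)
        b = max(b - db, 0.5 * b)
    return a, b
```

A deterministic check: setting the averages to their exact expected values under \alpha=2.5, \beta=1.5 should recover those parameters.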
=== Relationships with other distributions ===

Relationships with other distributions include:
• The log-ratio of gamma variates is of type IV as detailed above.
• If y\sim\text{BetaPrime}(\alpha,\beta), then x=\ln y has a type IV distribution, with parameters \alpha and \beta. See beta prime distribution.
• If z\sim\text{Gamma}(\beta,1) and y\mid z\sim\text{Gamma}(\alpha,z), where z is used as the rate parameter of the second gamma distribution, then y has a compound gamma distribution, which is the same as \text{BetaPrime}(\alpha,\beta), so that x=\ln y has a type IV distribution.
• If p\sim\text{Beta}(\alpha,\beta), then x=\text{logit}\,p has a type IV distribution, with parameters \alpha and \beta. See beta distribution.
The logit function, \mathrm{logit}(p) = \log\frac{p}{1-p}, is the inverse of the logistic function. This relationship explains the name logistic-beta for this distribution: if the logistic function is applied to logistic-beta variates, the transformed distribution is beta.
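The logit-of-beta and log-gamma-ratio relationships give one-line samplers. A minimal sketch using Python's standard library (the parameter values in the comments are arbitrary examples):

```python
import math
import random

def sample_type4_logit_beta(a, b):
    # If p ~ Beta(a, b), then logit(p) is logistic-beta (type IV) distributed.
    p = random.betavariate(a, b)
    return math.log(p / (1.0 - p))

def sample_type4_log_gamma_ratio(a, b):
    # Equivalently, log(g_a / g_b) for independent g_a ~ Gamma(a, 1), g_b ~ Gamma(b, 1).
    return math.log(random.gammavariate(a, 1.0) / random.gammavariate(b, 1.0))
```

Both samplers agree in distribution; for example, with \alpha=2, \beta=3 the sample means of both approach \psi(2)-\psi(3)=-1/2.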
=== Large shape parameters ===

For large values of the shape parameters, \alpha,\beta\gg1, the distribution becomes more Gaussian, with:
: \begin{align} E[x]&\approx\ln\frac{\alpha}{\beta} \\ \text{var}[x] &\approx\frac{\alpha+\beta}{\alpha\beta} \end{align}
This is demonstrated in the pdf and log-pdf plots.
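The quality of these approximations can be checked directly. A sketch for \alpha=200, \beta=100, where the parameter values are arbitrary and the short asymptotic series below are adequate only for such large arguments:

```python
import math

def digamma(x):
    # Leading terms of the asymptotic series; accurate for large x (here x >= 100)
    return math.log(x) - 0.5 / x - 1.0 / (12 * x ** 2)

def trigamma(x):
    # Leading terms of the asymptotic series for psi'(x)
    return 1.0 / x + 0.5 / x ** 2 + 1.0 / (6 * x ** 3)

a, b = 200.0, 100.0
exact_mean, approx_mean = digamma(a) - digamma(b), math.log(a / b)
exact_var, approx_var = trigamma(a) + trigamma(b), (a + b) / (a * b)
```

Here the mean approximation ln(200/100) is off by about 0.0025 and the variance approximation by less than 10^{-4}, consistent with the distribution becoming more Gaussian.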
=== Random variate generation ===

Since random sampling from the gamma and beta distributions is readily available on many software platforms, the above relationships with those distributions can be used to generate variates from the type IV distribution.

== Generalization with location and scale parameters ==