
Geometric distribution

In probability theory and statistics, the geometric distribution is either of two discrete probability distributions: the probability distribution of the number of Bernoulli trials needed to get one success, supported on \mathbb{N} = \{1, 2, 3, \dotsc\}; or the probability distribution of the number of failures before the first success, supported on \mathbb{N}_0 = \{0, 1, 2, \dotsc\}.

Definition
The geometric distribution is the discrete probability distribution that describes when the first success occurs in an infinite sequence of independent and identically distributed Bernoulli trials. Its probability mass function depends on its parameterization and support.

When supported on \mathbb{N}, the probability mass function is

P(X = k) = (1 - p)^{k-1} p

where k = 1, 2, 3, \dotsc is the number of trials and p is the probability of success in each trial.

The support may also be \mathbb{N}_0, obtained by defining Y = X - 1. This changes the probability mass function to

P(Y = k) = (1 - p)^k p

where k = 0, 1, 2, \dotsc is the number of failures before the first success.

An alternative parameterization of the distribution gives the probability mass function

P(Y = k) = \left(\frac{P}{Q}\right)^k \left(1-\frac{P}{Q}\right)

where P = \frac{1-p}{p} and Q = \frac{1}{p}.

An example of a geometric distribution arises from rolling a six-sided die until a "1" appears. Each roll is independent with a 1/6 chance of success, so the number of rolls needed follows a geometric distribution with p = 1/6.
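The die example can be sketched numerically in plain Python; `pmf_first_success` is a hypothetical helper name, not part of any library:

```python
# Sketch: probability that the first "1" appears on roll k of a fair die,
# using the N-supported PMF P(X = k) = (1 - p)**(k - 1) * p.

def pmf_first_success(k: int, p: float) -> float:
    """P(X = k) for k = 1, 2, 3, ..."""
    return (1 - p) ** (k - 1) * p

p = 1 / 6  # chance of rolling a "1" on any given roll
probs = [pmf_first_success(k, p) for k in range(1, 101)]
print(round(probs[0], 4))    # 0.1667 -- the first roll succeeds with prob 1/6
print(round(sum(probs), 4))  # 1.0    -- the truncated sum is already near 1
```

The partial sums approach 1 geometrically fast, since the probability that no success has occurred by roll k is (5/6)^k.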
Properties
Memorylessness

The geometric distribution is the only memoryless discrete probability distribution. It is the discrete version of the same property found in the exponential distribution. The property asserts that the number of previously failed trials does not affect the number of future trials needed for a success. Because there are two definitions of the geometric distribution, there are also two definitions of memorylessness for discrete random variables. Expressed in terms of conditional probability, the two definitions are

\Pr(X>m+n\mid X>n)=\Pr(X>m),

and

\Pr(Y>m+n\mid Y\geq n)=\Pr(Y>m),

where m and n are natural numbers, X is a geometrically distributed random variable defined over \mathbb{N}, and Y is a geometrically distributed random variable defined over \mathbb{N}_0. Note that these definitions are not equivalent for discrete random variables; Y does not satisfy the first equation and X does not satisfy the second.

Moments and cumulants

The expected value and variance of a geometrically distributed random variable X defined over \mathbb{N} are

\mathrm{E}(X) = \frac{1}{p}, \qquad \operatorname{Var}(X) = \frac{1-p}{p^2}.

For example, when rolling a six-sided die until landing on a "1", the average number of rolls needed is \frac{1}{1/6} = 6 and the average number of failures is \frac{1 - 1/6}{1/6} = 5.

The moment generating functions of the geometric distribution when defined over \mathbb{N} and \mathbb{N}_0, respectively, are

M_X(t) = \frac{pe^t}{1-(1-p)e^t}, \qquad M_Y(t) = \frac{p}{1-(1-p)e^t}, \qquad t < -\ln(1-p).

The cumulant generating function of the geometric distribution defined over \mathbb{N}_0 is

K(t) = \ln p - \ln\left(1-(1-p)e^t\right).

The median of the distribution is \left\lceil-\frac{\log 2}{\log(1-p)}\right\rceil when defined over \mathbb{N} and \left\lfloor-\frac{\log 2}{\log(1-p)}\right\rfloor when defined over \mathbb{N}_0. The excess kurtosis of the geometric distribution is 6 + \frac{p^2}{1-p}. Since \frac{p^2}{1-p} \geq 0, the excess kurtosis is always positive, so the distribution is leptokurtic.
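The closed-form mean 1/p and variance (1-p)/p² of the \mathbb{N}-supported distribution can be checked by truncating the defining sums; a minimal sketch (the cutoff 2000 is an arbitrary choice that leaves a negligible tail for p = 1/6):

```python
# Truncated-sum check of E(X) = 1/p and Var(X) = (1 - p)/p**2 for the
# N-supported geometric distribution with p = 1/6 (the die example).
p = 1 / 6
pmf = [(1 - p) ** (k - 1) * p for k in range(1, 2000)]
mean = sum(k * q for k, q in enumerate(pmf, start=1))
var = sum(k * k * q for k, q in enumerate(pmf, start=1)) - mean ** 2
print(round(mean, 6))  # 6.0  (= 1/p)
print(round(var, 6))   # 30.0 (= (1-p)/p**2)

# Memorylessness via the survival function P(X > n) = (1 - p)**n:
# P(X > m + n | X > n) = P(X > m).
surv = lambda n: (1 - p) ** n
assert abs(surv(3 + 4) / surv(3) - surv(4)) < 1e-12
```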
Entropy and Fisher's information
Entropy (geometric distribution, failures before success)

Entropy is a measure of uncertainty in a probability distribution. For the geometric distribution that models the number of failures before the first success, the probability mass function is:

P(X = k) = (1 - p)^k p, \quad k = 0, 1, 2, \dots

The entropy H(X) for this distribution is defined as:

\begin{align} H(X) &= - \sum_{k=0}^\infty P(X = k) \ln P(X = k) \\ &= - \sum_{k=0}^\infty (1 - p)^k p \ln \left( (1 - p)^k p \right) \\ &= - \sum_{k=0}^\infty (1 - p)^k p \left[ k \ln(1 - p) + \ln p \right] \\ &= -\ln p - \frac{1 - p}{p} \ln(1 - p) \end{align}

The entropy increases as the probability p decreases, reflecting greater uncertainty as success becomes rarer.

Fisher's information (geometric distribution, failures before success)

Fisher information measures the amount of information that an observable random variable X carries about an unknown parameter p. For the geometric distribution (failures before the first success), the Fisher information with respect to p is given by:

I(p) = \frac{1}{p^2(1 - p)}

Proof:
• The likelihood function for a geometric random variable X is: L(p; X) = (1 - p)^X p
• The log-likelihood function is: \ln L(p; X) = X \ln(1 - p) + \ln p
• The score function (first derivative of the log-likelihood with respect to p) is: \frac{\partial}{\partial p} \ln L(p; X) = \frac{1}{p} - \frac{X}{1 - p}
• The second derivative of the log-likelihood function is: \frac{\partial^2}{\partial p^2} \ln L(p; X) = -\frac{1}{p^2} - \frac{X}{(1 - p)^2}
• Fisher information is calculated as the negative expected value of the second derivative; using \mathrm{E}(X) = \frac{1-p}{p},
\begin{align} I(p) &= -E\left[\frac{\partial^2}{\partial p^2} \ln L(p; X)\right] \\ &= - \left(-\frac{1}{p^2} - \frac{1 - p}{p (1 - p)^2} \right) \\ &= \frac{1}{p^2(1 - p)} \end{align}

Fisher information increases as p decreases, indicating that rarer successes provide more information about the parameter p.
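The closed form -\ln p - \frac{1-p}{p}\ln(1-p) can be compared against a direct truncated sum over the failures-before-success PMF; p = 0.3 and the cutoff 500 below are arbitrary choices for illustration:

```python
import math

# Compare the closed-form entropy with a direct (truncated) sum of
# -P(X = k) * ln P(X = k) over the N_0-supported PMF (1 - p)**k * p.
p = 0.3
closed = -math.log(p) - (1 - p) / p * math.log(1 - p)
direct = -sum(
    (1 - p) ** k * p * (k * math.log(1 - p) + math.log(p))
    for k in range(500)
)
print(abs(closed - direct) < 1e-9)  # True -- the tail beyond k=500 is negligible
```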
Entropy (geometric distribution, trials until success)

For the geometric distribution modeling the number of trials until the first success, the probability mass function is:

P(X = k) = (1 - p)^{k - 1} p, \quad k = 1, 2, 3, \dots

The entropy H(X) for this distribution is the same as that of the version modeling the number of failures before the first success:

H(X) = -\ln p - \frac{1 - p}{p} \ln(1 - p)

Fisher's information (geometric distribution, trials until success)

Fisher information for the geometric distribution modeling the number of trials until the first success is given by:

I(p) = \frac{1}{p^2(1 - p)}

Proof:
• The likelihood function for a geometric random variable X is: L(p; X) = (1 - p)^{X - 1} p
• The log-likelihood function is: \ln L(p; X) = (X - 1) \ln(1 - p) + \ln p
• The score function (first derivative of the log-likelihood with respect to p) is: \frac{\partial}{\partial p} \ln L(p; X) = \frac{1}{p} - \frac{X - 1}{1 - p}
• The second derivative of the log-likelihood function is: \frac{\partial^2}{\partial p^2} \ln L(p; X) = -\frac{1}{p^2} - \frac{X - 1}{(1 - p)^2}
• Fisher information is calculated as the negative expected value of the second derivative; using \mathrm{E}(X - 1) = \frac{1-p}{p},
\begin{align} I(p) &= -E\left[\frac{\partial^2}{\partial p^2} \ln L(p; X)\right] \\ &= - \left(-\frac{1}{p^2} - \frac{1 - p}{p (1 - p)^2} \right) \\ &= \frac{1}{p^2(1 - p)} \end{align}

General properties

• The characteristic functions of geometric random variables X and Y defined over \mathbb{N} and \mathbb{N}_0 are, respectively,
\begin{align} \varphi_X(t) &= \frac{pe^{it}}{1-(1-p)e^{it}},\\[10pt] \varphi_Y(t) &= \frac{p}{1-(1-p)e^{it}}. \end{align}
• The entropy of a geometric distribution with parameter p is -\ln p - \frac{1 - p}{p} \ln(1 - p).
• The geometric distribution defined on \mathbb{N}_0 is infinitely divisible; that is, for any positive integer n, there exist n independent identically distributed random variables whose sum has that same geometric distribution.
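The result I(p) = \frac{1}{p^2(1-p)} can also be checked numerically by averaging the negative second derivative of the log-likelihood against the trials-until-success PMF; p = 0.4 and the truncation point are arbitrary:

```python
# Numerical check of Fisher information for the trials-until-success
# geometric distribution: I(p) should equal 1 / (p**2 * (1 - p)).
p = 0.4
pmf = lambda k: (1 - p) ** (k - 1) * p              # P(X = k), k >= 1
neg_d2 = lambda k: 1 / p ** 2 + (k - 1) / (1 - p) ** 2  # -d2/dp2 log-likelihood
info = sum(pmf(k) * neg_d2(k) for k in range(1, 500))   # truncated expectation
print(abs(info - 1 / (p ** 2 * (1 - p))) < 1e-6)  # True
```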
This is because the negative binomial distribution can be derived from a Poisson-stopped sum of logarithmic random variables.
Related distributions
• The sum of r independent geometric random variables with parameter p is a negative binomial random variable with parameters r and p. The geometric distribution is the special case of the negative binomial distribution with r = 1.
• The geometric distribution is a special case of the discrete compound Poisson distribution.
• Suppose 0 < r < 1, and for k = 1, 2, 3, \dotsc the random variable X_k has a Poisson distribution with expected value r^k/k. Then \sum_{k=1}^\infty k\,X_k has a geometric distribution taking values in \mathbb{N}_0, with expected value r/(1 − r).
• The exponential distribution is the continuous analogue of the geometric distribution. Applying the floor function to the exponential distribution with parameter \lambda creates a geometric distribution with parameter p=1-e^{-\lambda} defined over \mathbb{N}_0. This can be used to generate geometrically distributed random numbers, as detailed in § Random variate generation.
• If p = 1/n and X is geometrically distributed with parameter p, then the distribution of X/n approaches an exponential distribution with expected value 1 as n → ∞, since
\begin{align} \Pr(X/n>a)=\Pr(X>na) & = (1-p)^{na} = \left(1-\frac 1 n \right)^{na} = \left[ \left( 1-\frac 1 n \right)^n \right]^{a} \\ & \to [e^{-1}]^{a} = e^{-a} \text{ as } n\to\infty. \end{align}
More generally, if p = \lambda/n, where \lambda is a parameter, then as n → ∞ the distribution of X/n approaches an exponential distribution with rate \lambda: \lim_{n \to \infty}\Pr(X/n>x)=\lim_{n \to \infty}(1-\lambda /n)^{nx}=e^{-\lambda x}, so the distribution function of X/n converges to 1-e^{-\lambda x}, which is that of an exponential random variable.
• The index of dispersion of the geometric distribution is \frac{1}{p} and its coefficient of variation is \frac{1}{\sqrt{1-p}}. The distribution is overdispersed.
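The exponential/geometric link in the list above is an exact identity between distribution functions, which a short sketch can verify without any sampling; \lambda = 0.8 is an arbitrary choice:

```python
import math

# If E ~ Exponential(lam), then P(floor(E) = k) = exp(-lam*k) - exp(-lam*(k+1)),
# which should equal the N_0-supported geometric PMF with p = 1 - exp(-lam).
lam = 0.8
p = 1 - math.exp(-lam)
for k in range(6):
    exp_mass = math.exp(-lam * k) - math.exp(-lam * (k + 1))
    geo_mass = (1 - p) ** k * p
    assert abs(exp_mass - geo_mass) < 1e-12
print("floor(Exponential) masses match the geometric PMF")
```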
Statistical inference
The true parameter p of an unknown geometric distribution can be inferred through estimators and conjugate distributions.

Method of moments

Provided they exist, the first l moments of a probability distribution can be estimated from a sample x_1, \dotsc, x_n using the formula

m_i = \frac{1}{n} \sum_{j=1}^n x^i_j

where m_i is the ith sample moment and 1 \leq i \leq l. Estimating \mathrm{E}(X) with m_1 gives the sample mean, denoted \bar{x}. Substituting this estimate into the formula for the expected value of a geometric distribution and solving for p gives the estimators \hat{p} = \frac{1}{\bar{x}} and \hat{p} = \frac{1}{\bar{x}+1} when supported on \mathbb{N} and \mathbb{N}_0 respectively. These estimators are biased, since \mathrm{E}\left(\frac{1}{\bar{x}}\right) > \frac{1}{\mathrm{E}(\bar{x})} = p as a result of Jensen's inequality.

Maximum likelihood estimation

The maximum likelihood estimator of p is the value that maximizes the likelihood function given a sample. When the domain is \mathbb{N}, it coincides with the method-of-moments estimator \hat{p\,}_\text{mle} = \frac{1}{\bar{x}}; if the domain is \mathbb{N}_0, the estimator shifts to \hat{p\,}_\text{mle} = \frac{1}{\bar{x}+1}. As previously discussed in § Method of moments, these estimators are biased. Regardless of the domain, the bias is equal to

b \equiv \operatorname{E}\left[(\hat{p\,}_\text{mle} - p)\right] = \frac{p\,(1-p)}{n}

which yields the bias-corrected maximum likelihood estimator

\hat{p\,}^*_\text{mle} = \hat{p\,}_\text{mle} - \hat{b\,}.

Bayesian inference

In Bayesian inference, the parameter p is treated as a random variable with a prior distribution, and a posterior distribution is calculated using Bayes' theorem after observing samples. The beta distribution is the conjugate prior of the geometric distribution. If samples k_1, \dotsc, k_n are drawn from a geometric distribution supported on \mathbb{N} and the prior is \mathrm{Beta}(\alpha, \beta), the posterior distribution is

p \sim \mathrm{Beta}\left(\alpha+n,\ \beta+\sum_{i=1}^n (k_i-1)\right).
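A minimal sketch of the estimators above, on a made-up sample of \mathbb{N}-supported trial counts (the sample values are purely illustrative):

```python
# Method-of-moments / maximum-likelihood estimate of p from trial counts,
# plus the bias correction b-hat = p-hat * (1 - p-hat) / n described above.
sample = [2, 1, 4, 3, 1, 7, 2, 2, 5, 3]   # illustrative N-supported observations
n = len(sample)
xbar = sum(sample) / n                    # sample mean = 3.0
p_mle = 1 / xbar                          # MLE (= method of moments on N)
bias_hat = p_mle * (1 - p_mle) / n        # plug-in estimate of p(1-p)/n
p_corrected = p_mle - bias_hat            # bias-corrected estimator
print(round(p_mle, 4), round(p_corrected, 4))  # 0.3333 0.3111
```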
Alternatively, if the samples are in \mathbb{N}_0, the posterior distribution is

p \sim \mathrm{Beta}\left(\alpha+n,\ \beta+\sum_{i=1}^n k_i\right).

Since the expected value of a \mathrm{Beta}(\alpha,\beta) distribution is \frac{\alpha}{\alpha+\beta}, as \alpha and \beta approach zero the posterior mean approaches the maximum likelihood estimate.
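The conjugate update for \mathbb{N}-supported samples reduces to two additions; a sketch with a flat Beta(1, 1) prior and an illustrative sample (both assumptions, not from the source):

```python
# Conjugate Beta posterior update for N-supported geometric samples:
# Beta(alpha, beta) prior -> Beta(alpha + n, beta + sum(k_i - 1)) posterior.
alpha, beta_ = 1.0, 1.0            # flat prior (illustrative assumption)
sample = [2, 1, 4, 3, 1]           # illustrative trial counts k_i in N
n = len(sample)
post_a = alpha + n
post_b = beta_ + sum(k - 1 for k in sample)
post_mean = post_a / (post_a + post_b)   # posterior mean alpha'/(alpha'+beta')
print(post_a, post_b, round(post_mean, 4))  # 6.0 7.0 0.4615
```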
Random variate generation
The geometric distribution can be generated from i.i.d. standard uniform random variables by counting them until the first one that is less than or equal to p. However, the number of uniform variables needed is itself geometrically distributed, so this algorithm slows as p decreases. Random generation can instead be done in constant time by truncating exponential random numbers: if E is an exponential random variable with rate 1, then \lceil -E/\log(1-p) \rceil is geometrically distributed with parameter p. In turn, E can be generated from a standard uniform random variable U as -\log(U), turning the formula into \lceil \log(U) / \log(1-p)\rceil.
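The constant-time formula above is a one-liner in practice; a sketch using Python's standard library (the seed is arbitrary, and U = 0, a probability-zero event, is not handled):

```python
import math
import random

# Inverse-transform sampler: ceil(log(U) / log(1 - p)) with U uniform on
# (0, 1) yields an N-supported geometric variate with parameter p.
def geometric_variate(p: float, rng: random.Random) -> int:
    u = rng.random()  # assumed to lie in (0, 1); u == 0 is ignored here
    return math.ceil(math.log(u) / math.log(1 - p))

rng = random.Random(0)
draws = [geometric_variate(1 / 6, rng) for _ in range(100_000)]
print(sum(draws) / len(draws))  # sample mean, close to 1/p = 6
```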
Applications
The geometric distribution is used in many disciplines. In queueing theory, the M/M/1 queue has a steady state following a geometric distribution. In stochastic processes, the Yule-Furry process is geometrically distributed. The distribution also arises when modeling the lifetime of a device in discrete contexts, and it has been used to fit data, including modeling patients spreading COVID-19.