
Coupon collector's problem

In probability theory, the coupon collector's problem refers to mathematical analysis of "collect all coupons and win" contests. It asks the following question: if each box of a given product contains a coupon, and there are n different types of coupons, what is the probability that more than t boxes need to be bought to collect all n coupons? An alternative statement is: given n coupons, how many coupons do you expect you need to draw with replacement before having drawn each coupon at least once? The mathematical analysis of the problem reveals that the expected number of trials needed grows as \Theta(n\log n). For example, when n = 50 it takes about 225 trials on average to collect all 50 coupons. Sometimes the problem is instead expressed in terms of an n-sided die.
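As a quick illustration (not part of the original article), the following Python sketch estimates the expected number of draws by simulation and compares it with n \cdot H_n; the helper name collect_all, the seed, and the trial count are illustrative choices.

 import random

 def collect_all(n: int, rng: random.Random) -> int:
     """Draw coupons uniformly with replacement; return how many draws
     it takes until all n distinct coupon types have been seen."""
     seen, draws = set(), 0
     while len(seen) < n:
         seen.add(rng.randrange(n))
         draws += 1
     return draws

 rng = random.Random(0)
 n, trials = 50, 10_000
 mean = sum(collect_all(n, rng) for _ in range(trials)) / trials
 print(mean)                                     # about 225 draws on average
 print(n * sum(1 / i for i in range(1, n + 1)))  # exact n*H_n = 224.96...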

Solution
Calculating the expectation

Let time T be the number of draws needed to collect all n coupons, and let t_i be the time to collect the i-th coupon after i − 1 coupons have been collected. Then T = t_1 + \cdots + t_n. Think of T and t_i as random variables. Observe that the probability of collecting a new coupon is p_i = \frac{n - (i - 1)}{n} = \frac{n - i + 1}{n}. Therefore, t_i has geometric distribution with expectation \frac{1}{p_i} = \frac{n}{n - i + 1}. By the linearity of expectations we have:

:\begin{align} \operatorname{E}(T) & = \operatorname{E}(t_1 + t_2 + \cdots + t_n) \\ & = \operatorname{E}(t_1) + \operatorname{E}(t_2) + \cdots + \operatorname{E}(t_n) \\ & = \frac{1}{p_1} + \frac{1}{p_2} + \cdots + \frac{1}{p_n} \\ & = \frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{1} \\ & = n \cdot \left(\frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n}\right) \\ & = n \cdot H_n. \end{align}

Here H_n is the n-th harmonic number. Using the asymptotics of the harmonic numbers, we obtain:

:\operatorname{E}(T) = n \cdot H_n = n \log n + \gamma n + \frac{1}{2} + O(1/n),

where \gamma \approx 0.5772156649 is the Euler–Mascheroni constant.

Using the Markov inequality to bound the desired probability:

:\operatorname{P}(T \geq c\, n H_n) \le \frac{1}{c}.

The above can be modified slightly to handle the case when we've already collected some of the coupons. Let k be the number of coupons already collected; then:

:\begin{align} \operatorname{E}(T_k) & = \operatorname{E}(t_{k+1} + t_{k+2} + \cdots + t_n) \\ & = n \cdot \left(\frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n-k}\right) \\ & = n \cdot H_{n-k}. \end{align}

When k = 0 we recover the original result.

Calculating the variance

Using the independence of the random variables t_i, we obtain:

:\begin{align} \operatorname{Var}(T) & = \operatorname{Var}(t_1 + \cdots + t_n) \\ & = \operatorname{Var}(t_1) + \operatorname{Var}(t_2) + \cdots + \operatorname{Var}(t_n) \\ & = \frac{1-p_1}{p_1^2} + \frac{1-p_2}{p_2^2} + \cdots + \frac{1-p_n}{p_n^2} \\ & = \left(\frac{n^2}{n^2} + \frac{n^2}{(n-1)^2} + \cdots + \frac{n^2}{1^2}\right) - \left(\frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{1}\right) \\ & = n^2 \cdot \left(\frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{n^2}\right) - n \cdot \left(\frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n}\right) \\ & < \frac{\pi^2}{6} n^2, \end{align}

since \frac{\pi^2}{6} = \frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{n^2} + \cdots (see Basel problem).

Bounding the desired probability using the Chebyshev inequality:

:\operatorname{P}\left(|T - n H_n| \geq cn\right) \le \frac{\pi^2}{6c^2}.
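The closed forms above are easy to check numerically. Here is a minimal Python sketch, assuming the illustrative function names expected_draws and variance_draws, that evaluates n \cdot H_{n-k} and the variance sum and confirms the \pi^2 n^2/6 bound for n = 50.

 import math

 def expected_draws(n: int, k: int = 0) -> float:
     """E(T_k) = n * H_{n-k}: expected further draws when k coupons are already held."""
     return n * sum(1 / i for i in range(1, n - k + 1))

 def variance_draws(n: int) -> float:
     """Var(T) = n^2 * sum(1/k^2) - n*H_n, from the independent geometric stages."""
     return n ** 2 * sum(1 / k ** 2 for k in range(1, n + 1)) - expected_draws(n)

 n = 50
 print(expected_draws(n))          # 224.96..., i.e. n*H_n
 print(expected_draws(n, k=40))    # expected extra draws with 40 of 50 collected
 print(variance_draws(n))          # ~3837.9
 print(math.pi ** 2 / 6 * n ** 2)  # ~4112.3; Var(T) indeed stays below pi^2 n^2 / 6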
Stirling numbers
Let the random variable X be the number of die rolls performed before all n faces have occurred. The subpower a^{\{b\}} is defined by a^{\{b\}}=a!\left\{{b\atop a}\right\}, where \left\{{b\atop a}\right\} is a Stirling number of the second kind. Sequences of k die rolls are functions k\rightarrow n counted by n^k, while surjections (those that land on each face at least once) are counted by n^{\{k\}}, so the probability that all faces have been landed on by the k-th throw is

:P(X\le k)=\frac{n^{\{k\}}}{n^k}.

By the recurrence relation of the Stirling numbers, the probability that exactly k rolls are needed is

:P(X=k)=\frac{n^{\{k\}}}{n^k}-\frac{n^{\{k-1\}}}{n^{k-1}}=\frac{(n-1)^{\{k-1\}}}{n^{k-1}}.

Generating functions

Replacing z with 1+z in the probability generating function produces the o.g.f. for E\left[{X\choose k}\right]. Using the partial fraction decomposition

:{\frac1x-1\choose n}^{-1}=\sum_{k=0}^n{n\choose k}\frac{(-1)^{n-k}}{1-kx},

we can take the expansion

:\begin{aligned}&{\frac n{x+1}\choose n}^{-1}\\ =&\sum_{i=0}^n{n\choose i}\frac{(-1)^{n-i}}{1-i(1-\frac n{x+1+n})}\\ =&\sum_{i=0}^n{n\choose i}(-1)^{n-i}\left(\frac{1+n}{1+n-i}+in\sum_{k=1}^\infty\frac{(i-1)^{k-1}}{(n+1-i)^{k+1}}x^k\right),\end{aligned}

revealing that for k>0,

:E\left[{X\choose k}\right]=n\sum_{i=0}^n{n\choose i}(-1)^{n-i}i\frac{(i-1)^{k-1}}{(n+1-i)^{k+1}}.

Given an o.g.f. f(x), since \left(\frac x{1-x}\right)^i=\sum_{k=i}^\infty{k-1\choose i-1}x^k, a variation of the binomial transform is

:[x^k]f\left(\frac x{1+x}\right)=\sum_{i=0}^k{k-1\choose i-1}(-1)^{k-i}[x^i]f(x).

(Specifically, if {\frac n{x+1}\choose n}^{-1}=f\left(\frac x{1+x}\right), then f(x)={n-nx\choose n}^{-1}.) Rewriting the binomial coefficient via the gamma function and expanding as the \exp of the polygamma series (in terms of generalised harmonic numbers), we find

:\left[\frac{x^i}{i!}\right]{n-x\choose n}^{-1}=\sum_{P\in\mathrm{perms}(i)}\prod_{c\in P}H^{(|c|)}_n,

so

:E\left[{X\choose k}\right]=\sum_{i=0}^k{k-1\choose i-1}(-1)^{k-i}\frac{n^i}{i!}\sum_{P\in\mathrm{perms}(i)}\prod_{c\in P}H^{(|c|)}_n,

which can also be written with the falling factorial and Lah numbers as

:E[X^\underline k]=\sum_{i=0}^kL(k,i)(-1)^{k-i}n^i\sum_{P\in\mathrm{perms}(i)}\prod_{c\in P}H^{(|c|)}_n.

The raw moments of the distribution can be obtained from the falling moments via a Stirling transform; due to the identity \left\{{K\atop i}\right\}(-1)^K=\sum_{k=0}^K\left\{{K\atop k}\right\}L(k,i)(-1)^k, this provides

:E[X^k]=\sum_{i=0}^k\left\{{k\atop i}\right\}(-1)^{k-i}n^i\!\!\sum_{P\in\mathrm{perms}(i)}\prod_{c\in P}H^{(|c|)}_n.
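For concreteness, here is a small Python sketch of the distribution function P(X\le k)=n!\left\{{k\atop n}\right\}/n^k built from the Stirling recurrence; the helper names stirling2 and cdf are hypothetical, the die is six-sided, and the tail sum for E[X] is truncated at an arbitrary cutoff.

 from functools import lru_cache
 from math import factorial

 @lru_cache(maxsize=None)
 def stirling2(k: int, n: int) -> int:
     """Stirling number of the second kind via S(k,n) = n*S(k-1,n) + S(k-1,n-1)."""
     if k == n:
         return 1
     if n <= 0 or n > k:
         return 0
     return n * stirling2(k - 1, n) + stirling2(k - 1, n - 1)

 def cdf(n: int, k: int) -> float:
     """P(X <= k): the n!*S(k,n) surjections out of all n^k length-k sequences."""
     return factorial(n) * stirling2(k, n) / n ** k

 n = 6  # a standard die
 print(cdf(n, 13))  # probability all six faces appear within 13 rolls, ~0.514
 # Recover E[X] from the tail-sum formula E[X] = sum_{k>=0} P(X > k), truncated.
 print(sum(1 - cdf(n, k) for k in range(400)))  # ~14.7 = 6*H_6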
Tail estimates
A stronger tail estimate for the upper tail can be obtained as follows. Let {Z}_i^r denote the event that the i-th coupon was not picked in the first r trials. Then

:P\left[{Z}_i^r\right] = \left(1-\frac{1}{n}\right)^r \le e^{-r/n}.

Thus, for r = \beta n \log n, we have P\left[{Z}_i^r\right] \le e^{(-\beta n \log n)/n} = n^{-\beta}. Via a union bound over the n coupons, we obtain

:P\left[T > \beta n \log n\right] = P\left[\bigcup_i {Z}_i^{\beta n \log n}\right] \le n \cdot P\left[{Z}_1^{\beta n \log n}\right] \le n^{-\beta + 1}.
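A quick empirical check of this union bound, as a sketch under assumed parameters: simulate T for a modest n and compare the observed tail frequency with n^{1-\beta}. The values n = 20, \beta = 2, the seed, and the trial count are arbitrary illustrative choices.

 import math
 import random

 def collect_all(n: int, rng: random.Random) -> int:
     # Same simulation helper as in the earlier sketch.
     seen, draws = set(), 0
     while len(seen) < n:
         seen.add(rng.randrange(n))
         draws += 1
     return draws

 rng = random.Random(1)
 n, beta, trials = 20, 2.0, 20_000
 threshold = beta * n * math.log(n)
 tail = sum(collect_all(n, rng) > threshold for _ in range(trials)) / trials
 print(tail)             # empirical P(T > beta*n*log n), around 0.044 here
 print(n ** (1 - beta))  # the union bound n^(1-beta) = 0.05, which dominates it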
Extensions and generalizations
Pierre-Simon Laplace, but also Paul Erdős and Alfréd Rényi, proved the limit theorem for the distribution of T. This result is a further extension of previous bounds:

::\operatorname{P}(T < n \log n + cn) \to e^{-e^{-c}}, \text{ as } n \to \infty,

:which is a Gumbel distribution. A simple proof is possible by martingales.

• Donald J. Newman and Lawrence Shepp gave a generalization of the coupon collector's problem when m copies of each coupon need to be collected. Let T_m be the first time m copies of each coupon are collected. They showed that the expectation in this case satisfies:

::\operatorname{E}(T_m) = n \log n + (m-1) n \log\log n + O(n), \text{ as } n \to \infty.

:Here m is fixed. When m = 1 we get the earlier formula for the expectation.

• A common generalization, also due to Erdős and Rényi:

::\operatorname{P}\left(T_m < n \log n + (m-1) n \log\log n + cn\right) \to e^{-e^{-c}/(m-1)!}, \text{ as } n \to \infty.

• In the general case of a nonuniform probability distribution, according to Philippe Flajolet et al.,

::\operatorname{E}(T)=\int_0^\infty \left(1 - \prod_{i=1}^m \left(1-e^{-p_i t}\right)\right)dt.

:This is equal to

::\operatorname{E}(T)=\sum_{q=0}^{m-1} (-1)^{m-1-q} \sum_{|J|=q} \frac{1}{1-P_J},

:where m denotes the number of coupons to be collected and P_J denotes the probability of getting any coupon in the set of coupons J (a numerical check appears in the sketch below).
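As referenced in the last item above, here is a small Python sketch of Flajolet et al.'s formula: it evaluates the integral by a plain midpoint Riemann sum and checks it against the inclusion-exclusion sum. The probability vector p, the step size, and the integration cutoff are all assumed illustrative choices.

 from itertools import combinations
 from math import exp, prod

 p = [0.5, 0.2, 0.2, 0.1]  # an assumed nonuniform coupon distribution
 m = len(p)

 # Flajolet et al.'s integral, via a midpoint Riemann sum; the upper limit
 # (400 time units) is large enough that the integrand is negligible beyond it.
 dt = 0.01
 integral = sum(
     (1.0 - prod(1.0 - exp(-pi * t) for pi in p)) * dt
     for t in (dt * (j + 0.5) for j in range(40_000))
 )

 # The equivalent inclusion-exclusion sum over subsets J with |J| = q < m,
 # where P_J is the total probability of the coupons in J.
 ie_sum = sum(
     (-1) ** (m - 1 - q) / (1.0 - sum(p[i] for i in J))
     for q in range(m)
     for J in combinations(range(m), q)
 )

 print(integral, ie_sum)  # both come out near 12.92 for this choice of p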