==Euler's proof==

Euler's proof works by first taking the natural logarithm of each side, then using the Taylor series expansion for \log(1-x) as well as the sum of a converging series:

\begin{align} \log \left( \sum_{n=1}^\infty \frac{1}{n}\right) & {} = \log\left( \prod_p \frac{1}{1-p^{-1}}\right) = -\sum_p \log \left( 1-\frac{1}{p}\right) \\[5pt] & = \sum_p \left( \frac{1}{p} + \frac{1}{2p^2} + \frac{1}{3p^3} + \cdots \right) \\[5pt] & = \sum_{p}\frac{1}{p} + \frac{1}{2}\sum_p \frac{1}{p^2} + \frac{1}{3}\sum_p \frac{1}{p^3} + \frac{1}{4}\sum_p \frac{1}{p^4}+ \cdots \\[5pt] & = A + \frac{1}{2} B+ \frac{1}{3} C+ \frac{1}{4} D + \cdots \\[5pt] & = A + K \end{align}

for a fixed constant K < 1. Then, by using the following relation:

\sum_{n=1}^\infty\frac{1}{n} = \log\infty,

of which, as shown in a later 1748 work, the right hand side can be obtained by setting x = 1 in the Taylor series expansion

\log\left(\frac1{1-x}\right)=\sum_{n=1}^\infty\frac{x^{n}}n.

Thus,

A = \frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \cdots = \log \log \infty.

It is almost certain that Euler meant that the sum of the reciprocals of the primes less than n is asymptotic to \log \log n as n approaches infinity. It turns out this is indeed the case, and a more precise version of this fact was rigorously proved by Franz Mertens in 1874. Thus Euler obtained a correct result by questionable means.
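Euler's conclusion, that the sum of the prime reciprocals up to n behaves like \log \log n, is easy to illustrate numerically. The following Python sketch is an illustration added for concreteness, not part of Euler's or Mertens' argument; the helper name primes_up_to and the sample values of n are arbitrary choices. It sieves the primes up to n and prints the partial sum next to \log \log n; by Mertens' theorem, discussed below, the difference settles near a constant.

```python
# Illustrative only (not part of Euler's argument): compare the partial sums
# of the reciprocals of the primes up to n with log log n.
from math import log

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [i for i, flag in enumerate(is_prime) if flag]

for n in (10**3, 10**4, 10**5, 10**6):
    s = sum(1.0 / p for p in primes_up_to(n))
    print(f"n = {n:>7}  sum 1/p = {s:.5f}  log log n = {log(log(n)):.5f}  diff = {s - log(log(n)):.5f}")
```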
==Erdős's proof by upper and lower estimates==

The following proof by contradiction comes from Paul Erdős.

Let p_i denote the i-th prime number. Assume that the sum of the reciprocals of the primes converges. Then there exists a smallest positive integer k such that

\sum_{i=k+1}^\infty \frac 1 {p_i} < \frac 1 2 \qquad(1)

For a positive integer x, let M_x denote the set of those n in {1, 2, ..., x} which are not divisible by any prime greater than p_k (or equivalently all n ≤ x which are a product of powers of primes p_i ≤ p_k). We will now derive an upper and a lower estimate for |M_x|, the number of elements in M_x. For large x, these bounds will turn out to be contradictory.

;Upper estimate:
:Every n in M_x can be written as n = m^2 r with positive integers m and r, where r is square-free. Since only the k primes p_1, ..., p_k can show up (with exponent 1) in the prime factorization of r, there are at most 2^k different possibilities for r. Furthermore, there are at most \sqrt{x} possible values for m. This gives us the upper estimate

|M_x| \le 2^k\sqrt{x} \qquad(2)

;Lower estimate:
:The remaining x − |M_x| numbers in the set difference {1, 2, ..., x} \ M_x are all divisible by a prime greater than p_k. Let N_{i,x} denote the set of those n in {1, 2, ..., x} which are divisible by the i-th prime p_i. Then

\{1,2,\ldots,x\}\setminus M_x = \bigcup_{i=k+1}^\infty N_{i,x}

:Since the number of integers in N_{i,x} is at most x/p_i (actually zero for p_i > x), we get

x-|M_x| \le \sum_{i=k+1}^\infty |N_{i,x}| \le \sum_{i=k+1}^\infty \frac{x}{p_i}

:Using (1), this implies

\frac x 2 < |M_x| \qquad(3)

This produces a contradiction: when x \ge 2^{2k+2}, the estimates (2) and (3) cannot both hold, because \frac x 2 \ge 2^k\sqrt{x}.
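The counting behind the upper estimate can be made concrete with a short script. The sketch below is illustrative only: the helper name smooth_count, the choice k = 3 (primes 2, 3, 5) and the sample values of x are arbitrary, and this k is not the k from the proof (which would require the tail sum in (1) to be below 1/2). It simply shows |M_x| growing on the order of \sqrt{x}, far below x/2 for large x, which is the heart of the contradiction.

```python
# A small numerical sketch (not part of Erdős's argument): for a chosen k,
# count |M_x|, the integers in {1, ..., x} whose prime factors are all among
# the first k primes, and compare with the upper estimate 2^k * sqrt(x).
from math import sqrt

FIRST_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]  # p_1, ..., p_8

def smooth_count(x, k):
    """Count the elements of M_x: integers 1..x divisible only by p_1..p_k."""
    count = 0
    for n in range(1, x + 1):
        m = n
        for p in FIRST_PRIMES[:k]:
            while m % p == 0:
                m //= p
        if m == 1:  # all prime factors of n were among p_1..p_k
            count += 1
    return count

k = 3  # use the primes 2, 3, 5 (an arbitrary illustrative choice)
for x in (100, 1_000, 10_000):
    mx = smooth_count(x, k)
    print(f"x = {x:>6}:  |M_x| = {mx:>4},  2^k*sqrt(x) = {2 ** k * sqrt(x):8.1f},  x/2 = {x / 2:8.1f}")
```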
==Proof that the series exhibits log-log growth==

Here is another proof that actually gives a lower estimate for the partial sums; in particular, it shows that these sums grow at least as fast as \log \log n. The proof is due to Ivan Niven, adapted from the product expansion idea of Euler. In the following, a sum or product taken over p always represents a sum or product taken over a specified set of primes.

The proof rests upon the following four inequalities:

• Every positive integer i can be uniquely expressed as the product of a square-free integer and a square as a consequence of the fundamental theorem of arithmetic. Start with

i = q_1^{2{\alpha}_1+{\beta}_1} \cdot q_2^{2{\alpha}_2+{\beta}_2} \cdots q_r^{2{\alpha}_r+{\beta}_r},

where the βs are 0 (the corresponding power of the prime is even) or 1 (the corresponding power of the prime is odd). Factor out one copy of all the primes whose β is 1, leaving a product of primes to even powers, itself a square. Relabeling:

i = (p_1 p_2 \cdots p_s) \cdot b^2,

where the first factor, a product of primes to the first power, is square-free. Inverting all the i's gives the inequality

\sum_{i=1}^n \frac 1 i \le \left(\prod_{p \le n} \left(1 + \frac 1 p \right)\right) \cdot \left(\sum_{k=1}^n \frac 1 {k^2}\right) = A \cdot B.

To see this, note that

\frac 1 i = \frac 1 {p_1 p_2 \cdots p_s} \cdot \frac 1 {b^2},

and

\begin{align} \left(1 + \frac{1}{p_1}\right)\left(1 + \frac{1}{p_2}\right) \ldots \left(1 + \frac{1}{p_s}\right) &= \left(\frac{1}{p_1}\right)\left(\frac{1}{p_2}\right)\cdots\left(\frac{1}{p_s}\right) + \ldots\\ &= \frac 1 {p_1 p_2 \cdots p_s} + \ldots. \end{align}

That is, 1/(p_1p_2 \cdots p_s) is one of the summands in the expanded product A. And since 1 / b^2 is one of the summands of B, every summand 1/i is represented in one of the terms of AB when multiplied out. The inequality follows.

• The upper estimate for the natural logarithm

\begin{align} \log(n+1) &= \int_1^{n+1} \frac{dx}x \\ &= \sum_{i=1}^n\underbrace{\int_i^{i+1}\frac{dx}x}_{{} \,<\, \frac 1 i} \\ &< \sum_{i=1}^n \frac 1 i \end{align}

• The lower estimate 1 + x < \exp(x) for the exponential function, which holds for all x > 0.

• Let n ≥ 2. The upper bound (using a telescoping sum) for the partial sums (convergence is all we really need)

\begin{align} \sum_{k=1}^n \frac 1 {k^2} &< 1 + \sum_{k=2}^n \underbrace{\left(\frac 1 {k - \frac12} - \frac 1 {k + \frac12}\right)}_{= \, \frac 1 {k^2 - \frac14} \, > \, \frac{1}{k^2}} \\ &= 1 + \frac23 - \frac1{n + \frac{1}{2}} < \frac53 \end{align}

Combining all these inequalities, we see that

\begin{align} \log(n+1) &< \sum_{i=1}^n \frac 1 i \\ &\le \left(\prod_{p \le n} \left(1 + \frac 1 p \right)\right) \cdot \left(\sum_{k=1}^n \frac 1 {k^2}\right) \\ &< \frac53 \prod_{p \le n} \exp\left(\frac 1 p \right) \\ &= \frac53 \exp\left(\sum_{p \le n} \frac 1 p \right) \end{align}

Dividing through by 5/3 and taking the natural logarithm of both sides gives

\log\log(n + 1) - \log\frac53 < \sum_{p \le n} \frac 1 p

as desired.
Q.E.D.

Using

\sum_{k=1}^\infty \frac{1}{k^2} = \frac{\pi^2}6

(see the Basel problem), the above constant \log\frac53 = 0.51082\ldots can be improved to \log\frac{\pi^2}6 = 0.49770\ldots; in fact it turns out that

\lim_{n \to \infty } \left( \sum_{p \leq n} \frac{1}{p} - \log \log n \right) = M

where M = 0.261497\ldots is the Meissel–Mertens constant (somewhat analogous to the much more famous Euler–Mascheroni constant).
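As a quick numerical cross-check of the combined inequality (illustrative only; the helper name primes_up_to and the choice n = 10000 are arbitrary), the sketch below evaluates each expression in the chain \log(n+1) < \sum_{i \le n} \frac1i \le A \cdot B < \frac53 \exp\left(\sum_{p \le n} \frac1p\right) at a single value of n.

```python
# Illustrative check of the chain of inequalities used above, at one n.
from math import log, exp

def primes_up_to(n):
    """Sieve of Eratosthenes; assumes n >= 2."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

n = 10_000
primes = primes_up_to(n)
harmonic = sum(1 / i for i in range(1, n + 1))       # sum of 1/i for i <= n
A = 1.0
for p in primes:                                      # A = product over p <= n of (1 + 1/p)
    A *= 1 + 1 / p
B = sum(1 / k ** 2 for k in range(1, n + 1))          # B = sum of 1/k^2 for k <= n
prime_sum = sum(1 / p for p in primes)                # sum of 1/p for p <= n

print("log(n+1)              =", round(log(n + 1), 4))
print("harmonic sum of 1/i   =", round(harmonic, 4))
print("A * B                 =", round(A * B, 4))
print("(5/3)*exp(sum of 1/p) =", round(5 / 3 * exp(prime_sum), 4))
```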
==Proof from Dusart's inequality==

From Dusart's inequality, we get

p_n < n \log n + n \log \log n \quad\text{for } n \ge 6.

Then

\begin{align} \sum_{n=1}^\infty \frac1{ p_n} &\ge \sum_{n=6}^\infty \frac{1}{ p_n} \\ &\ge \sum_{n=6}^\infty \frac{1}{ n \log n + n \log \log n} \\ &\ge \sum_{n=6}^\infty \frac{1}{2n \log n} = \infty \end{align}

by the integral test for convergence. This shows that the series on the left diverges.
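The bound p_n < n(\log n + \log \log n) for n ≥ 6 can be spot-checked for small n. The sketch below is illustrative only (the helper name first_primes and the cut-off of 5000 primes are arbitrary choices); it lists any violations found by trial division and should print an empty list.

```python
# A small sanity check (not a proof) of the bound used above:
# p_n < n*(log n + log log n) for n >= 6, tested over the first few thousand primes.
from math import log

def first_primes(count):
    """Return the first `count` primes by trial division (fine for small counts)."""
    primes = []
    candidate = 2
    while len(primes) < count:
        is_prime = True
        for p in primes:
            if p * p > candidate:
                break
            if candidate % p == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(candidate)
        candidate += 1
    return primes

primes = first_primes(5000)
violations = [n for n in range(6, len(primes) + 1)
              if primes[n - 1] >= n * (log(n) + log(log(n)))]
print("violations for 6 <= n <= 5000:", violations)  # expected: []
```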
==Geometric and harmonic-series proof==

The following proof is modified from James A. Clarkson.

Define the k-th tail

x_{k} = \sum_{n = k+1} ^{\infty} \frac{1}{p_n}.

Then for i \geq 0, the expansion of (x_{k})^{i} contains at least one term for each reciprocal of a positive integer with exactly i prime factors (counting multiplicities) only from the set \{ p_{k+1}, p_{k+2}, \cdots \}. It follows that the geometric series

\sum_{i = 0} ^{\infty} (x_{k})^{i}

contains at least one term for each reciprocal of a positive integer not divisible by any p_{n}, n\leq k. But since 1+j(p_{1}p_{2}\cdots p_{k}) always satisfies this criterion,

\sum_{i=0}^{\infty}(x_{k})^{i}>\sum_{j=1}^{\infty} \frac{1}{1+j(p_{1}p_{2} \cdots p_{k})}>\frac{1}{1+p_{1}p_{2} \cdots p_{k}} \sum_{j=1}^{\infty}\frac{1}{j}=\infty

by the divergence of the harmonic series. This shows that x_{k}\geq 1 for all k, and since the tails of a convergent series must themselves converge to zero, this proves divergence.
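The tails x_k are themselves divergent series, so they cannot be evaluated directly; however, the conclusion that x_k \geq 1 for every k can be illustrated by watching the partial tails grow. The sketch below (illustrative only; the sieve limit and the sample values of k are arbitrary choices, not part of Clarkson's argument) reports, for a few k, the first prime at which the running sum 1/p_{k+1} + 1/p_{k+2} + \cdots exceeds 1.

```python
# Illustrative computation: for several k, accumulate 1/p_{k+1} + 1/p_{k+2} + ...
# and report when the partial tail first exceeds 1.
LIMIT = 500_000  # sieve bound; assumed large enough for the k values below

sieve = bytearray([1]) * (LIMIT + 1)
sieve[0] = sieve[1] = 0
for i in range(2, int(LIMIT ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
primes = [i for i in range(LIMIT + 1) if sieve[i]]

for k in (5, 10, 20):
    tail = 0.0
    for n, p in enumerate(primes[k:], start=k + 1):
        tail += 1.0 / p
        if tail > 1.0:
            print(f"k = {k:>2}: partial tail exceeds 1 at the {n}-th prime, p_n = {p}")
            break
    else:
        print(f"k = {k:>2}: tail did not exceed 1 below the sieve limit")
```

==Partial sums==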