The AM–GM inequality can be proven in many ways.
===Proof using Jensen's inequality===
Jensen's inequality states that the value of a concave function of an arithmetic mean is greater than or equal to the arithmetic mean of the function's values. Since the logarithm function is concave, we have
:\log \left(\frac{\sum x_i}{n}\right) \geq \frac{1}{n} \sum \log x_i = \frac{1}{n} \log \left(\prod x_i\right) = \log \left(\left(\prod x_i\right)^{1/n}\right).
Taking antilogs (that is, applying the exponential function) of the far left and far right sides, we obtain the AM–GM inequality.
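Although no code appears in the original, the inequality chain above can be checked numerically. In this sketch the helper names `am` and `gm` are ours, not from any library; `gm` computes the geometric mean through logarithms exactly as the proof does.

```python
import math

def am(xs):
    """Arithmetic mean."""
    return sum(xs) / len(xs)

def gm(xs):
    """Geometric mean of positive reals, computed via logarithms as in the proof."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

xs = [1.0, 4.0, 9.0, 16.0]
# Concavity of log gives log(am(xs)) >= mean of the logs = log(gm(xs));
# exponentiating both ends recovers am(xs) >= gm(xs).
assert math.log(am(xs)) >= sum(math.log(x) for x in xs) / len(xs)
assert am(xs) >= gm(xs)
```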
===Proof by successive replacement of elements===
We have to show that
:\alpha = \frac{x_1+x_2+\cdots+x_n}{n} \ge \sqrt[n]{x_1x_2 \cdots x_n}=\beta
with equality only when all numbers are equal.

If not all numbers are equal, then there exist x_i, x_j such that x_i < \alpha < x_j. Replacing x_i by \alpha and x_j by (x_i+x_j-\alpha) will leave the arithmetic mean of the numbers unchanged, but will increase the geometric mean because
:\alpha(x_j+x_i-\alpha)-x_ix_j=(\alpha-x_i)(x_j-\alpha)>0.
If the numbers are still not all equal, we continue replacing numbers as above. After at most (n-1) such replacement steps all the numbers will have been replaced with \alpha, while the geometric mean strictly increases at each step. After the last step, the geometric mean is \sqrt[n]{\alpha\alpha \cdots \alpha}=\alpha, proving the inequality.

The replacement strategy works just as well from the right-hand side. If any of the numbers is 0, then so is the geometric mean, and the inequality holds trivially. Therefore we may suppose that all the numbers are positive. If they are not all equal, then there exist x_i, x_j such that 0 < x_i < \beta < x_j. Replacing x_i by \beta and x_j by \frac{x_ix_j}{\beta} leaves the geometric mean unchanged but strictly decreases the arithmetic mean, since
:x_i + x_j - \beta - \frac{x_i x_j}{\beta} = \frac{(\beta - x_i)(x_j - \beta)}{\beta} > 0.
The proof then follows along similar lines as in the earlier replacement.
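The replacement step is effectively an algorithm, and can be sketched as follows; the function names are illustrative, not from any library, and the step assumes the numbers are not all equal.

```python
import math

def gmean(xs):
    """Geometric mean of positive reals."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def replacement_step(xs, alpha):
    """One step of the proof: pick x_i < alpha < x_j (here the min and max,
    which qualify whenever the numbers are not all equal) and replace them by
    alpha and x_i + x_j - alpha. The arithmetic mean is preserved and the
    geometric mean strictly increases. Returns a new list."""
    xs = list(xs)
    i = min(range(len(xs)), key=lambda k: xs[k])   # some x_i < alpha
    j = max(range(len(xs)), key=lambda k: xs[k])   # some x_j > alpha
    xs[i], xs[j] = alpha, xs[i] + xs[j] - alpha    # RHS uses the old values
    return xs

xs = [1.0, 2.0, 9.0]
alpha = sum(xs) / len(xs)                          # alpha = 4.0
step1 = replacement_step(xs, alpha)
assert abs(sum(step1) / len(step1) - alpha) < 1e-12   # AM unchanged
assert gmean(step1) > gmean(xs)                       # GM strictly larger
```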
===Induction proofs===
====Proof by induction #1====
Of the non-negative real numbers x_1, \ldots, x_n with arithmetic mean \alpha, the AM–GM statement is equivalent to
:\alpha^n\ge x_1 x_2 \cdots x_n
with equality if and only if \alpha = x_i for all i ∈ {1, . . . , n}. For the following proof we apply mathematical induction and only well-known rules of arithmetic.
Induction basis: For n = 1 the statement is true with equality.
Induction hypothesis: Suppose that the AM–GM statement holds for all choices of n non-negative real numbers.
Induction step: Consider n + 1 non-negative real numbers x_1, \ldots, x_{n+1}. Their arithmetic mean \alpha satisfies
:(n+1)\alpha = x_1 + \cdots + x_n + x_{n+1}.
If all the x_i are equal to \alpha, then we have equality in the AM–GM statement and we are done. In the case where some are not equal to \alpha, there must exist one number that is greater than the arithmetic mean \alpha and one that is smaller than \alpha.
Without loss of generality, we can reorder our x_i in order to place these two particular elements at the end: x_n > \alpha and x_{n+1} < \alpha. Then
:x_n - \alpha > 0\qquad \alpha-x_{n+1}>0
:\implies (x_n-\alpha)(\alpha-x_{n+1})>0\,.\qquad(*)
Now define y > 0 with
:y:=x_n+x_{n+1}-\alpha\ge x_n-\alpha>0\,,
and consider the n numbers x_1, \ldots, x_{n-1}, y, which are all non-negative. Since
:(n+1)\alpha=x_1 + \cdots + x_{n-1} + x_n + x_{n+1},
:n\alpha=x_1 + \cdots + x_{n-1} + \underbrace{x_n+x_{n+1}-\alpha}_{=\,y},
\alpha is also the arithmetic mean of the n numbers x_1, \ldots, x_{n-1}, y, and the induction hypothesis implies
:\alpha^{n+1}=\alpha^n\cdot\alpha\ge x_1x_2 \cdots x_{n-1} y\cdot\alpha.\qquad(**)
Due to (*) we know that
:(\underbrace{x_n+x_{n+1}-\alpha}_{=\,y})\alpha-x_nx_{n+1}=(x_n-\alpha)(\alpha-x_{n+1})>0,
hence
:y\alpha>x_nx_{n+1}\,,\qquad({*}{*}{*})
in particular \alpha>0. Therefore, if at least one of the numbers x_1, \ldots, x_{n-1} is zero, then we already have strict inequality in (**). Otherwise the right-hand side of (**) is positive, and strict inequality is obtained by using the estimate (***) to get a lower bound of the right-hand side of (**). Thus, in both cases we can substitute (***) into (**) to get
:\alpha^{n+1}>x_1x_2 \cdots x_{n-1} x_nx_{n+1}\,,
which completes the proof.
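The key estimate (***) can be sanity-checked numerically. Here `key_estimate_gap` is a hypothetical helper, not from the text, that evaluates yα − x_n x_{n+1} and confirms it equals the factored form (x_n − α)(α − x_{n+1}):

```python
def key_estimate_gap(x_n, x_n1, alpha):
    """Return y*alpha - x_n*x_{n+1} with y = x_n + x_{n+1} - alpha;
    positive whenever x_n > alpha > x_{n+1}."""
    y = x_n + x_n1 - alpha
    return y * alpha - x_n * x_n1

# Example with x_n = 7 > alpha = 4 > x_{n+1} = 2:
gap = key_estimate_gap(7.0, 2.0, 4.0)
assert gap == (7.0 - 4.0) * (4.0 - 2.0)   # equals the factored form
assert gap > 0
```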
====Proof by induction #2====
First of all we shall prove that for real numbers x_1 < 1 and x_2 > 1 there follows
:x_1 + x_2 > x_1x_2+1.
Indeed, multiplying both sides of the inequality x_2 > 1 by 1 - x_1, gives
:x_2 - x_1x_2 > 1 - x_1,
whence the required inequality is obtained immediately. Now, we are going to prove that for positive real numbers x_1, \ldots, x_n satisfying x_1 \cdots x_n = 1, there holds
:x_1 + \cdots + x_n \ge n.
The equality holds only if x_1 = \cdots = x_n = 1.
Induction basis: For n = 2 the statement is true because of the above property.
Induction hypothesis: Suppose that the statement is true for all natural numbers up to n - 1.
Induction step: Consider the natural number n, i.e. positive real numbers x_1, \ldots, x_n with x_1 \cdots x_n = 1. If all the numbers equal 1, the statement holds with equality. Otherwise there exists at least one x_k < 1, so there must be at least one x_j > 1. Without loss of generality, we let x_k = x_{n-1} and x_j = x_n. Further, the equality x_1 \cdots x_n = 1 we shall write in the form of (x_1 \cdots x_{n-2})(x_{n-1} x_n) = 1. Then, the induction hypothesis implies
:(x_1 + \cdots + x_{n-2}) + (x_{n-1} x_n ) \ge n - 1.
However, taking into account the induction basis, we have
:\begin{align} x_1 + \cdots + x_{n-2} + x_{n-1} + x_n & = (x_1 + \cdots + x_{n-2}) + (x_{n-1} + x_n ) \\ &> (x_1 + \cdots + x_{n-2}) + x_{n-1} x_n + 1 \\ & \ge n, \end{align}
which completes the proof.

For positive real numbers a_1, \ldots, a_n, let's denote
:x_1 = \frac{a_1}{\sqrt[n]{a_1\cdots a_n}}, \ \ldots, \ x_n = \frac{a_n}{\sqrt[n]{a_1\cdots a_n}}.
The numbers x_1, \ldots, x_n satisfy the condition x_1 \cdots x_n = 1. So we have
:\frac{a_1}{\sqrt[n]{a_1\cdots a_n}} + \cdots + \frac{a_n}{\sqrt[n]{a_1\cdots a_n}} \ge n,
whence we obtain
:\frac{a_1 + \cdots + a_n}n \ge \sqrt[n]{a_1\cdots a_n},
with the equality holding only for a_1 = \cdots = a_n.
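The normalization trick at the end of this proof can be sketched numerically; `normalize` is an illustrative helper name, not from the text.

```python
import math

def normalize(a):
    """Scale a_1, ..., a_n by their geometric mean; the scaled numbers
    have product 1, so the product-1 lemma applies to them."""
    g = math.prod(a) ** (1.0 / len(a))
    return [ai / g for ai in a]

a = [2.0, 8.0, 4.0]
x = normalize(a)
assert abs(math.prod(x) - 1.0) < 1e-12     # product is 1
assert sum(x) >= len(x) - 1e-12            # the lemma: x_1 + ... + x_n >= n
# Dividing the lemma by n recovers AM >= GM for the original a_i:
assert sum(a) / len(a) >= math.prod(a) ** (1.0 / len(a)) - 1e-12
```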
===Proof by Cauchy using forward–backward induction===
The following proof by cases relies directly on well-known rules of arithmetic but employs the rarely used technique of forward–backward induction. It is essentially from Augustin Louis Cauchy and can be found in his ''Cours d'analyse''.
====The case where all the terms are equal====
If all the terms are equal:
:x_1 = x_2 = \cdots = x_n,
then their sum is nx_1, so their arithmetic mean is x_1; and their product is x_1^n, so their geometric mean is x_1; therefore, the arithmetic mean and geometric mean are equal, as desired.
The case where not all the terms are equal It remains to show that if
not all the terms are equal, then the arithmetic mean is greater than the geometric mean. Clearly, this is only possible when . This case is significantly more complex, and we divide it into subcases. ===== The subcase where
n = 2 ===== If , then we have two terms, and , and since (by our assumption) not all terms are equal, we have: :\begin{align} \Bigl(\frac{x_1+x_2}{2}\Bigr)^2-x_1x_2 &=\frac14(x_1^2+2x_1x_2+x_2^2)-x_1x_2\\ &=\frac14(x_1^2-2x_1x_2+x_2^2)\\ &=\Bigl(\frac{x_1-x_2}{2}\Bigr)^2>0, \end{align} hence : \frac{x_1 + x_2}{2} > \sqrt{x_1 x_2} as desired. ===== The subcase where
n = 2
k ===== Consider the case where , where is a positive
integer. We proceed by mathematical induction. In the base case, , so . We have already shown that the inequality holds when , so we are done. Now, suppose that for a given , we have already shown that the inequality holds for , and we wish to show that it holds for . To do so, we apply the inequality twice for numbers and once for numbers to obtain: : \begin{align} \frac{x_1 + x_2 + \cdots + x_{2^k}}{2^k} & {} =\frac{\frac{x_1 + x_2 + \cdots + x_{2^{k-1}}}{2^{k-1}} + \frac{x_{2^{k-1} + 1} + x_{2^{k-1} + 2} + \cdots + x_{2^k}}{2^{k-1}}}{2} \\[7pt] & \ge \frac{\sqrt[2^{k-1}]{x_1 x_2 \cdots x_{2^{k-1}}} + \sqrt[2^{k-1}]{x_{2^{k-1} + 1} x_{2^{k-1} + 2} \cdots x_{2^k}}}{2} \\[7pt] & \ge \sqrt{\sqrt[2^{k-1}]{x_1 x_2 \cdots x_{2^{k-1}}} \sqrt[2^{k-1}]{x_{2^{k-1} + 1} x_{2^{k-1} + 2} \cdots x_{2^k}}} \\[7pt] & = \sqrt[2^k]{x_1 x_2 \cdots x_{2^k}} \end{align} where in the first inequality, the two sides are equal only if :x_1 = x_2 = \cdots = x_{2^{k-1}} and :x_{2^{k-1}+1} = x_{2^{k-1}+2} = \cdots = x_{2^k} (in which case the first arithmetic mean and first geometric mean are both equal to , and similarly with the second arithmetic mean and second geometric mean); and in the second inequality, the two sides are only equal if the two geometric means are equal. Since not all numbers are equal, it is not possible for both inequalities to be equalities, so we know that: :\frac{x_1 + x_2 + \cdots + x_{2^k}}{2^k} > \sqrt[2^k]{x_1 x_2 \cdots x_{2^k}} as desired.
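Cauchy's doubling argument translates directly into a recursion on lists of length 2^k. The following sketch (function name ours) splits the list in half, recurses, and checks AM ≥ GM at every level of the recursion:

```python
import math

def am_gm_power_of_two(xs):
    """For len(xs) a power of two, compute (arithmetic mean, geometric mean)
    by Cauchy's doubling scheme: combine the means of the two halves.
    Verifies AM >= GM at each recursive level."""
    n = len(xs)
    if n == 1:
        return xs[0], xs[0]
    half = n // 2
    am1, gm1 = am_gm_power_of_two(xs[:half])
    am2, gm2 = am_gm_power_of_two(xs[half:])
    am, gm = (am1 + am2) / 2, math.sqrt(gm1 * gm2)
    assert am >= gm - 1e-12   # the inequality holds at this level
    return am, gm

am, gm = am_gm_power_of_two([1.0, 3.0, 9.0, 27.0])
assert am == 10.0
assert abs(gm - (1.0 * 3.0 * 9.0 * 27.0) ** 0.25) < 1e-9
```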
=====The subcase where n < 2^k=====
If n is not a natural power of 2, then it is certainly less than some natural power of 2, since the sequence 2, 4, 8, \ldots, 2^k, \ldots is unbounded above. Therefore, without loss of generality, let m be some natural power of 2 that is greater than n. So, if we have n terms, then let us denote their arithmetic mean by \alpha, and expand our list of terms thus:
:x_{n+1} = x_{n+2} = \cdots = x_m = \alpha.
We then have:
:\begin{align} \alpha & = \frac{x_1 + x_2 + \cdots + x_n}{n} \\[6pt] & = \frac{\frac{m}{n} \left( x_1 + x_2 + \cdots + x_n \right)}{m} \\[6pt] & = \frac{x_1 + x_2 + \cdots + x_n + \frac{m-n}{n} \left( x_1 + x_2 + \cdots + x_n \right)}{m} \\[6pt] & = \frac{x_1 + x_2 + \cdots + x_n + \left( m-n \right) \alpha}{m} \\[6pt] & = \frac{x_1 + x_2 + \cdots + x_n + x_{n+1} + \cdots + x_m}{m} \\[6pt] & \ge \sqrt[m]{x_1 x_2 \cdots x_n x_{n+1} \cdots x_m} \\[6pt] & = \sqrt[m]{x_1 x_2 \cdots x_n \alpha^{m-n}}\,, \end{align}
using x_1 + x_2 + \cdots + x_n = n\alpha in the third step. So
:\alpha^m \ge x_1 x_2 \cdots x_n \alpha^{m-n},
and dividing by \alpha^{m-n} gives
:\alpha^n \ge x_1 x_2 \cdots x_n,
hence
:\alpha \ge \sqrt[n]{x_1 x_2 \cdots x_n}
as desired.
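The padding argument can likewise be sketched; `pad_to_power_of_two` is an illustrative name. The list is extended with copies of its own arithmetic mean, which leaves that mean unchanged.

```python
import math

def pad_to_power_of_two(xs):
    """Cauchy's backward step: pad the list with copies of its arithmetic
    mean alpha until its length m is a power of two. The padded list has
    the same arithmetic mean alpha. Returns (padded list, alpha)."""
    alpha = sum(xs) / len(xs)
    m = 1
    while m < len(xs):
        m *= 2
    return xs + [alpha] * (m - len(xs)), alpha

xs = [1.0, 2.0, 6.0]                     # n = 3, padded to m = 4
padded, alpha = pad_to_power_of_two(xs)
assert len(padded) == 4 and alpha == 3.0
assert abs(sum(padded) / len(padded) - alpha) < 1e-12   # AM unchanged
# AM-GM on the padded list gives alpha^m >= (prod xs) * alpha^(m-n),
# which cancels to alpha^n >= prod xs:
assert alpha ** len(xs) >= math.prod(xs)
```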
===Proof by induction using basic calculus===
The following proof uses mathematical induction and some basic differential calculus.
Induction basis: For n = 1 the statement is true with equality.
Induction hypothesis: Suppose that the AM–GM statement holds for all choices of n non-negative real numbers.
Induction step: In order to prove the statement for n + 1 non-negative real numbers x_1, \ldots, x_n, x_{n+1}, we need to prove that
:\frac{x_1 + \cdots + x_n + x_{n+1}}{n+1} - ({x_1 \cdots x_n x_{n+1}})^{\frac{1}{n+1}}\ge0
with equality only if all the n + 1 numbers are equal.

If all numbers are zero, the inequality holds with equality. If some but not all numbers are zero, we have strict inequality. Therefore, we may assume in the following that all n + 1 numbers are positive.

We consider the last number x_{n+1} as a variable and define the function
:f(t)=\frac{x_1 + \cdots + x_n + t}{n+1} - ({x_1 \cdots x_n t})^{\frac{1}{n+1}},\qquad t>0.
Proving the induction step is equivalent to showing that f(t)\ge0 for all t>0, with f(t)=0 only if x_1, \ldots, x_n and t are all equal. This can be done by analyzing the critical points of f using some basic calculus. The first derivative of f is given by
:f'(t)=\frac{1}{n+1}-\frac{1}{n+1}({x_1 \cdots x_n})^{\frac{1}{n+1}}t^{-\frac{n}{n+1}},\qquad t>0.
A critical point t_0 has to satisfy f'(t_0)=0, which means
:({x_1 \cdots x_n})^{\frac{1}{n+1}}t_0^{-\frac{n}{n+1}}=1.
After a small rearrangement we get
:t_0^{\frac{n}{n+1}}=({x_1 \cdots x_n})^{\frac{1}{n+1}},
and finally
:t_0=({x_1 \cdots x_n})^{\frac{1}n},
which is the geometric mean of x_1, \ldots, x_n. This is the only critical point of f. Since f''(t)>0 for all t>0, the function f is strictly convex and has a strict global minimum at t_0. Next we compute the value of the function at this global minimum:
:\begin{align} f(t_0) &= \frac{x_1 + \cdots + x_n + ({x_1 \cdots x_n})^{1/n}}{n+1} - ({x_1 \cdots x_n})^{\frac{1}{n+1}}({x_1 \cdots x_n})^{\frac{1}{n(n+1)}}\\ &= \frac{x_1 + \cdots + x_n}{n+1} + \frac{1}{n+1}({x_1 \cdots x_n})^{\frac{1}n} - ({x_1 \cdots x_n})^{\frac{1}n}\\ &= \frac{x_1 + \cdots + x_n}{n+1} - \frac{n}{n+1}({x_1 \cdots x_n})^{\frac{1}n}\\ &= \frac{n}{n+1}\Bigl(\frac{x_1 + \cdots + x_n}n - ({x_1 \cdots x_n})^{\frac{1}n}\Bigr) \\ &\ge0, \end{align}
where the final inequality holds due to the induction hypothesis. The hypothesis also says that we can have equality only when x_1, \ldots, x_n are all equal; in this case, their geometric mean t_0 has the same value. Hence, unless x_1, \ldots, x_n, x_{n+1} are all equal, we have f(x_{n+1})>0. This completes the proof.

This technique can be used in the same manner to prove the generalized AM–GM inequality and the Cauchy–Schwarz inequality in Euclidean space \mathbb{R}^n.
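A quick numerical sketch of this analysis: `f` below mirrors the definition above for a sample list of three positive numbers (the sampled offsets around the critical point are arbitrary choices of ours).

```python
import math

def f(t, xs):
    """The function from the induction step: f(t) >= 0 encodes AM-GM
    for the n+1 numbers x_1, ..., x_n, t."""
    n = len(xs)
    return (sum(xs) + t) / (n + 1) - (math.prod(xs) * t) ** (1.0 / (n + 1))

xs = [1.0, 2.0, 4.0]
t0 = math.prod(xs) ** (1.0 / len(xs))    # critical point: geometric mean of xs
# f attains its minimum at t0, and that minimum is non-negative:
vals = [f(t0 + d, xs) for d in (-1.0, -0.5, 0.0, 0.5, 1.0)]
assert min(vals) == vals[2]              # t0 gives the smallest sampled value
assert f(t0, xs) >= 0.0
```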
===Proof by Pólya using the exponential function===
George Pólya provided a proof similar to what follows. Let f(x) = e^{x-1} - x for all real x, with first derivative f'(x) = e^{x-1} - 1 and second derivative f''(x) = e^{x-1}. Observe that f(1) = 0, f'(1) = 0 and f''(x) > 0 for all real x, hence f is strictly convex with the absolute minimum at x = 1. Hence x \le e^{x-1} for all real x, with equality only for x = 1.

Consider a list of non-negative real numbers x_1, x_2, \ldots, x_n. If they are all zero, then the AM–GM inequality holds with equality. Hence we may assume in the following \alpha > 0 for their arithmetic mean \alpha. By n-fold application of the above inequality, we obtain that
:\begin{align}{ \frac{x_1}{\alpha} \frac{x_2}{\alpha} \cdots \frac{x_n}{\alpha} } &\le { e^{\frac{x_1}{\alpha} - 1} e^{\frac{x_2}{\alpha} - 1} \cdots e^{\frac{x_n}{\alpha} - 1} }\\ & = \exp \Bigl( \frac{x_1}{\alpha} - 1 + \frac{x_2}{\alpha} - 1 + \cdots + \frac{x_n}{\alpha} - 1 \Bigr), \qquad (*) \end{align}
with equality if and only if x_i = \alpha for every i ∈ {1, . . . , n}. The argument of the exponential function can be simplified:
:\begin{align} \frac{x_1}{\alpha} - 1 + \frac{x_2}{\alpha} - 1 + \cdots + \frac{x_n}{\alpha} - 1 & = \frac{x_1 + x_2 + \cdots + x_n}{\alpha} - n \\ & = \frac{n \alpha}{\alpha} - n \\ & = 0. \end{align}
Returning to (*),
:\frac{x_1 x_2 \cdots x_n}{\alpha^n} \le e^0 = 1,
which produces x_1 x_2 \cdots x_n \le \alpha^n, hence the result
:\sqrt[n]{x_1 x_2 \cdots x_n} \le \alpha.
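Pólya's pointwise bound x ≤ e^{x−1} and the telescoping of the exponents can be checked numerically; `polya_am_gm` is an illustrative helper name, not from the text.

```python
import math

# Polya's pointwise bound: x <= e^(x-1) for all real x, equality only at x = 1.
for x in (-2.0, 0.0, 0.5, 1.0, 3.0):
    assert x <= math.exp(x - 1.0)

def polya_am_gm(xs):
    """Apply x <= e^(x-1) to each ratio x_i/alpha; the exponents sum to 0,
    so the product of the ratios is at most e^0 = 1. Returns the geometric
    mean, recovered from the product of ratios."""
    alpha = sum(xs) / len(xs)
    ratio_product = math.prod(x / alpha for x in xs)
    exponent = sum(x / alpha - 1.0 for x in xs)      # telescopes to 0
    assert abs(exponent) < 1e-12
    assert ratio_product <= math.exp(exponent) + 1e-12
    return ratio_product ** (1.0 / len(xs)) * alpha  # the geometric mean

xs = [2.0, 4.0, 8.0]
assert polya_am_gm(xs) <= sum(xs) / len(xs)          # GM <= AM
```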
===Proof by Lagrangian multipliers===
If any of the x_i are 0, then there is nothing to prove. So we may assume all the x_i are strictly positive. Because the arithmetic and geometric means are homogeneous of degree 1, without loss of generality assume that \prod_{i=1}^n x_i = 1. Set G(x_1,x_2,\ldots,x_n)=\prod_{i=1}^n x_i and F(x_1,x_2,\ldots,x_n) = \frac{1}{n}\sum_{i=1}^n x_i. The inequality will be proved (together with the equality case) if we can show that the minimum of F(x_1,x_2,\ldots,x_n), subject to the constraint G(x_1,x_2,\ldots,x_n) = 1, is equal to 1, and that the minimum is only achieved when x_1 = x_2 = \cdots = x_n = 1. Let us first show that the constrained minimization problem has a global minimum. Set K = \{(x_1,x_2,\ldots,x_n) \colon 0 \leq x_1,x_2,\ldots,x_n \leq n\}. Since the intersection K \cap \{G = 1\} is compact, the
extreme value theorem guarantees that the minimum of F(x_1,x_2,...,x_n) subject to the constraints G(x_1,x_2,\ldots,x_n) = 1 and (x_1,x_2,\ldots,x_n) \in K is attained at some point inside K. On the other hand, observe that if any of the x_i > n, then F(x_1,x_2,\ldots,x_n) > 1 , while F(1,1,\ldots,1) = 1, and (1,1,\ldots,1) \in K \cap \{G = 1\} . This means that the minimum inside K \cap \{G = 1\} is in fact a global minimum, since the value of F at any point inside K \cap \{G = 1\} is certainly no smaller than the minimum, and the value of F at any point (y_1,y_2,\ldots, y_n) not inside K is strictly bigger than the value at (1,1,\ldots,1), which is no smaller than the minimum. The method of
Lagrange multipliers says that the global minimum is attained at a point (x_1,x_2,\ldots,x_n) where the gradient of F(x_1,x_2,\ldots,x_n) is \lambda times the gradient of G(x_1,x_2,\ldots,x_n), for some \lambda. We will show that the only point at which this happens is when x_1 = x_2 = \cdots = x_n = 1 and F(x_1,x_2,\ldots,x_n) = 1. Compute \frac{\partial F}{\partial x_i} = \frac{1}{n} and
:\frac{\partial G}{\partial x_i} = \prod_{j \neq i}x_j = \frac{G(x_1,x_2,\ldots,x_n)}{x_i} = \frac{1}{x_i}
along the constraint. Setting the gradients proportional to one another therefore gives for each i that \frac{1}{n} = \frac{\lambda}{x_i}, and so x_i = n\lambda. Since the right-hand side does not depend on i, it follows that x_1 = x_2 = \cdots = x_n, and since G(x_1,x_2,\ldots, x_n) = 1, it follows that x_1 = x_2 = \cdots = x_n = 1 and F(x_1,x_2,\ldots,x_n) = 1, as desired.

==Generalizations==