As early as 1671,
Isaac Newton implicitly used Puiseux series and proved the following theorem for approximating with
series the
roots of
algebraic equations whose coefficients are functions that are themselves approximated with series or
polynomials. For this purpose, he introduced the
Newton polygon, which remains a fundamental tool in this context. Newton worked with truncated series only, and it is only in 1850 that
Victor Puiseux introduced the concept of (non-truncated) Puiseux series and proved the theorem now known as Puiseux's theorem or the Newton–Puiseux theorem. The theorem asserts that, given an algebraic equation whose coefficients are polynomials or, more generally, Puiseux series over a
field of
characteristic zero, every solution of the equation can be expressed as a Puiseux series. Moreover, the proof provides an algorithm for computing these Puiseux series, and, when working over the
complex numbers, the resulting series are convergent. In modern terminology, the theorem can be restated as:
the field of Puiseux series over an algebraically closed field of characteristic zero, and the field of convergent Puiseux series over the complex numbers, are both algebraically closed.
=== Newton polygon ===
Let
:P(y)=\sum_{a_i\neq 0} a_i(x) y^i
be a polynomial whose nonzero coefficients a_i(x) are polynomials, power series, or even Puiseux series in x. In this section, the valuation v(a_i) of a_i is the lowest exponent of x in a_i. (Most of what follows applies more generally to coefficients in any
valued ring.) For computing the Puiseux series that are
roots of P (that is, solutions of the
functional equation P(y)=0), the first thing to do is to compute the valuation of the roots. This is the role of the Newton polygon. Let us consider, in a
Cartesian plane, the points of coordinates (i, v(a_i)). The
Newton polygon of P is the lower
convex hull of these points. That is, the edges of the Newton polygon are the
line segments joining two of these points, such that all these points are not below the line supporting the segment (below is, as usual, relative to the value of the second coordinate). Given a Puiseux series y_0 of valuation v_0, the valuation of P(y_0) is at least the minimum of the numbers i v_0 + v(a_i), and is equal to this minimum if this minimum is reached for only one i. So, for y_0 being a root of P, the minimum must be reached at least twice. That is, there must be two values i_1 and i_2 of i such that i_1 v_0 + v(a_{i_1}) = i_2 v_0 + v(a_{i_2}), and i v_0 + v(a_{i}) \ge i_1 v_0 + v(a_{i_1}) for every i. That is, (i_1, v(a_{i_1})) and (i_2, v(a_{i_2})) must belong to an edge of the Newton polygon, and
:v_0=-\frac{v(a_{i_1})-v(a_{i_2})}{i_1-i_2}
must be the opposite of the slope of this edge. This is a rational number as soon as all valuations v(a_i) are rational numbers, and this is the reason for introducing rational exponents in Puiseux series. In summary,
the valuation of a root of P must be the opposite of a slope of an edge of the Newton polygon. The initial coefficient of a Puiseux series solution of P(y)=0 can easily be deduced. Let c_i be the initial coefficient of a_i(x), that is, the coefficient of x^{v(a_i)} in a_i(x). Let -v_0 be a slope of the Newton polygon, and \gamma x^{v_0} be the initial term of a corresponding Puiseux series solution of P(y)=0. If no cancellation occurred, then the initial coefficient of P(y) would be \sum_{i\in I}c_i \gamma^i, where I is the set of the indices i such that (i, v(a_i)) belongs to the edge of slope -v_0 of the Newton polygon. So, for having a root, the initial coefficient \gamma must be a nonzero root of the polynomial \chi(x)=\sum_{i\in I}c_i x^i (this notation will be used in the next section). In summary, the Newton polygon allows an easy computation of all possible initial terms of Puiseux series that are solutions of P(y)=0. The proof of the Newton–Puiseux theorem will consist of starting from these initial terms for computing recursively the next terms of the Puiseux series solutions.
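These computations are easy to mechanize. The following Python sketch (illustrative only; the example polynomial is hypothetical, not from the text) computes the Newton polygon as a lower convex hull and reads off the candidate valuations v_0 for P(y) = y^3 + x y + x^3:

```python
from fractions import Fraction

def cross(o, a, b):
    """2D cross product of the vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def newton_polygon(points):
    """Lower convex hull of the points (i, v(a_i)); consecutive
    vertices of the result are the edges of the Newton polygon."""
    hull = []
    for p in sorted(points):
        # drop the last vertex while it lies on or above the chord hull[-2] -> p
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

# Hypothetical example: P(y) = y^3 + x*y + x^3, so
# v(a_0) = 3, v(a_1) = 1, v(a_3) = 0
points = [(0, 3), (1, 1), (3, 0)]
hull = newton_polygon(points)          # [(0, 3), (1, 1), (3, 0)]
# candidate valuations: the opposite of each edge's slope
v0_candidates = [Fraction(p[1] - q[1], q[0] - p[0]) for p, q in zip(hull, hull[1:])]
# v0_candidates == [2, 1/2]
```

Here the polygon has an edge of length 1 (one root of valuation 2) and an edge of length 2 (two roots of valuation 1/2), accounting for the three roots of this P; the initial coefficients are then the nonzero roots of the corresponding polynomials \chi.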
=== Constructive proof ===
Let us suppose that the first term \gamma x^{v_0} of a Puiseux series solution of P(y)=0 has been computed by the method of the preceding section. It remains to compute z=y-\gamma x^{v_0}. For this, we set y_0=\gamma x^{v_0}, and write the
Taylor expansion of P at z=y-y_0:
:Q(z)=P(y_0+z)=P(y_0)+zP'(y_0)+\cdots + z^j\frac {P^{(j)}(y_0)} {j!} +\cdots
This is a polynomial in z whose coefficients are Puiseux series in x. One may apply to it the method of the Newton polygon, and iterate for getting the terms of the Puiseux series, one after the other. But some care is required for ensuring that v(z)>v_0, and for showing that one gets a Puiseux series, that is, that the denominators of the exponents of x remain bounded. The derivation with respect to y does not change the valuation in x of the coefficients; that is,
:v\left(P^{(j)}(y_0)z^j\right)\ge \min_i (v(a_i) + i v_0)+j(v(z)-v_0),
and the equality occurs if and only if \chi^{(j)}(\gamma)\neq 0, where \chi(x) is the polynomial of the preceding section. If m is the multiplicity of \gamma as a root of \chi, it results that the inequality is an equality for j=m. The terms such that j>m can be forgotten as far as valuations are concerned, as v(z)>v_0 and j>m imply
:v\left(P^{(j)}(y_0)z^j\right) \ge \min_i (v(a_i) +iv_0)+j(v(z)-v_0) > v\left(P^{(m)}(y_0)z^m\right).
This means that, for iterating the method of the Newton polygon, one can and one must consider only the part of the Newton polygon whose first coordinates belong to the interval [0, m]. Two cases have to be considered separately and will be the subject of the next subsections, the so-called
ramified case, where m > 1, and the regular case, where m = 1.
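As a concrete check of the iteration (a made-up example, not from the original text), take P(y) = y^2 - x - x^2. The Newton polygon gives v_0 = 1/2 and \chi(x) = x^2 - 1, whose roots are simple, so every step is regular (m = 1); taking \gamma = 1, two further iterations yield the truncation y = x^{1/2} + (1/2)x^{3/2} - (1/8)x^{5/2}. The sketch below verifies with exact rational arithmetic that substituting this truncation into P raises the valuation to 4:

```python
from fractions import Fraction as F

def mul(s, t):
    """Product of sparse Puiseux polynomials {exponent: coefficient}."""
    out = {}
    for e1, c1 in s.items():
        for e2, c2 in t.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

def add(s, t):
    out = dict(s)
    for e, c in t.items():
        out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c != 0}

# truncation after three Newton-polygon steps:
# y = x^(1/2) + (1/2) x^(3/2) - (1/8) x^(5/2)
y = {F(1, 2): F(1), F(3, 2): F(1, 2), F(5, 2): F(-1, 8)}
# residual P(y) = y^2 - x - x^2
residual = add(mul(y, y), {F(1): F(-1), F(2): F(-1)})
valuation = min(residual)   # the valuation has climbed from 1/2 to 4
```

Each further iteration of the regular case cancels the current lowest term of the residual, so the valuation keeps increasing while the exponent denominators stay bounded (here by 2).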
==== Ramified case ====
The way of applying recursively the method of the Newton polygon has been described above. As each application of the method may increase, in the ramified case, the denominators of the exponents (valuations), it remains to prove that one reaches the regular case after a finite number of iterations (otherwise the denominators of the exponents of the resulting series would not be bounded, and this series would not be a Puiseux series). By the way, it will also be proved that one gets exactly as many Puiseux series solutions as expected, that is, the degree of P(y) in y. Without loss of generality, one can suppose that P(0)\neq 0, that is, a_0\neq 0. Indeed, each factor y of P(y) provides a solution that is the zero Puiseux series, and such factors can be factored out. As the characteristic is supposed to be zero, one can also suppose that P(y) is a
square-free polynomial, that is, that the solutions of P(y)=0 are all different. Indeed, the
square-free factorization uses only the operations of the field of coefficients for factoring P(y) into square-free factors that can be solved separately. (The hypothesis of characteristic zero is needed, since, in characteristic p, the square-free decomposition can provide irreducible factors, such as y^p-x, that have multiple roots over an algebraic extension.) In this context, one defines the
length of an edge of a Newton polygon as the difference of the
abscissas of its end points. The length of a polygon is the sum of the lengths of its edges. With the hypothesis P(0)\neq 0, the length of the Newton polygon of P is its degree in y, that is, the number of its roots. The length of an edge of the Newton polygon is the number of roots of a given valuation. This number equals the degree of the previously defined polynomial \chi(x). The ramified case corresponds thus to two (or more) solutions that have the same initial term(s). As these solutions must be distinct (square-free hypothesis), they must be distinguished after a finite number of iterations. That is, one gets eventually a polynomial \chi(x) that is square free, and the computation can continue as in the regular case for each root of \chi(x). As the iteration of the regular case does not increase the denominators of the exponents, this shows that the method provides all solutions as Puiseux series, that is, that the field of Puiseux series over the complex numbers is an algebraically closed field that contains the univariate polynomial ring with complex coefficients.
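The square-free reduction invoked above can be sketched in code. The following Python fragment (an illustrative implementation over the rationals; the helper names and the example polynomial are my own) computes the square-free part as P/\gcd(P, P'), a computation that divides by integer constants and therefore relies on characteristic zero:

```python
from fractions import Fraction as F

def deriv(p):
    """Derivative of a polynomial given as coefficients, low to high degree."""
    return [i * c for i, c in enumerate(p)][1:]

def polydivmod(a, b):
    """Euclidean division of coefficient lists over the rationals."""
    a = list(a)
    q = [F(0)] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b):
        c = a[-1] / b[-1]
        d = len(a) - len(b)
        q[d] = c
        for i, bc in enumerate(b):
            a[i + d] -= c * bc
        a.pop()                      # leading coefficient is now zero
        while a and a[-1] == 0:
            a.pop()
    return q, a

def polygcd(a, b):
    while b:
        a, b = b, polydivmod(a, b)[1]
    return [c / a[-1] for c in a]    # normalized to be monic

def squarefree_part(p):
    # dividing by gcd(P, P') uses invertibility of the integers i in deriv,
    # which is where characteristic zero enters
    return polydivmod(p, polygcd(p, deriv(p)))[0]

# hypothetical example: (y - 1)^2 (y - 2) = y^3 - 4y^2 + 5y - 2
p = [F(-2), F(5), F(-4), F(1)]
sf = squarefree_part(p)              # (y - 1)(y - 2) = y^2 - 3y + 2
```

In characteristic p the same recipe fails on factors such as y^p - x, whose derivative with respect to y vanishes identically.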
=== Failure in positive characteristic ===
The Newton–Puiseux theorem is not valid over fields of positive characteristic. For example, the equation X^2 - X = T^{-1} has solutions
:X = T^{-1/2} + \frac{1}{2} + \frac{1}{8}T^{1/2} - \frac{1}{128}T^{3/2} + \cdots
and
:X = -T^{-1/2} + \frac{1}{2} - \frac{1}{8}T^{1/2} + \frac{1}{128}T^{3/2} + \cdots
(one readily checks on the first few terms that the sum and product of these two series are 1 and -T^{-1} respectively; this is valid whenever the base field
K has characteristic different from 2). As the powers of 2 in the denominators of the coefficients of the previous example might lead one to believe, the statement of the theorem is not true in positive characteristic. The example of the
Artin–Schreier equation X^p - X = T^{-1} shows this: reasoning with valuations shows that
X should have valuation -\frac{1}{p}, and if we rewrite it as X = T^{-1/p} + X_1 then
:X^p = T^{-1} + {X_1}^p,\text{ so }{X_1}^p - X_1 = T^{-1/p},
and one shows similarly that X_1 should have valuation -\frac{1}{p^2}; proceeding in that way, one obtains the series
:T^{-1/p} + T^{-1/p^2} + T^{-1/p^3} + \cdots;
since this series makes no sense as a Puiseux series—because the exponents have unbounded denominators—the original equation has no solution. However, such
Eisenstein equations are essentially the only ones not to have a solution, because, if K is algebraically closed of characteristic p>0, then the field of Puiseux series over K is the perfect closure of the maximal tamely
ramified extension of K(\!(T)\!). Similarly, if K is a real closed field, then the field of Puiseux series over K is real closed. (This implies the former theorem since any algebraically closed field of characteristic zero is the unique quadratic extension of some real-closed field.) There is also an analogous result for
p-adic closure: if K is a p-adically closed field with respect to a valuation w, then the field of Puiseux series over K is also p-adically closed.

== Puiseux expansion of algebraic curves and functions ==