== Taylor's theorem ==

[Figure: An accurate approximation of \sin x around the point x = 0. The pink curve is a polynomial of degree seven, \sin{x} \approx x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!}. The error in this approximation is no more than |x|^9/9!. For a full cycle centered at the origin (-\pi < x < \pi), the error is less than 0.08215; in particular, for -1 < x < 1, the error is less than 0.000003. In contrast, also shown is the natural logarithm function \ln(1+x) and some of its Taylor polynomials around a = 0. These approximations converge to the function only in the region -1 < x \le 1; outside of this region, the higher-degree Taylor polynomials are worse approximations for the function.]

The
error incurred in approximating a function by its nth-degree Taylor polynomial is called the
remainder and is denoted by the function R_n(x).
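The error bounds quoted in the figure caption can be checked numerically. The sketch below (plain Python; a grid search stands in for a true maximum, so it is an illustration rather than a proof) evaluates the degree-seven Taylor polynomial of \sin x and compares it with the function:

```python
import math

def sin_taylor7(x):
    # Degree-seven Taylor polynomial of sin around 0:
    # x - x^3/3! + x^5/5! - x^7/7!
    return x - x**3/6 + x**5/120 - x**7/5040

def max_error(lo, hi, n=10001):
    # Largest observed |sin x - P7(x)| on an evenly spaced grid over [lo, hi]
    xs = (lo + (hi - lo) * i / (n - 1) for i in range(n))
    return max(abs(math.sin(x) - sin_taylor7(x)) for x in xs)

# Lagrange remainder bound: |R_7(x)| <= |x|^9 / 9!
print(max_error(-math.pi, math.pi))  # stays below pi^9/9! ≈ 0.08215
print(max_error(-1, 1))              # stays below 1/9! ≈ 0.0000028
```

The observed maxima occur at the endpoints, where the next-term bound |x|^9/9! is largest.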
Taylor's theorem can be used to obtain a bound on the
size of the remainder. In general, Taylor series need not be
convergent at all. In fact, the set of functions with a convergent Taylor series is a
meager set in the
Fréchet space of
smooth functions. Even if the Taylor series of a function f does converge, its limit need not be equal to the value of the function f(x). For example, the function f(x) = \begin{cases} e^{-1/x^2} & \text{if } x \neq 0 \\[3mu] 0 & \text{if } x = 0 \end{cases} is
infinitely differentiable at x = 0 and has all derivatives zero there. Consequently, the Taylor series of f(x) about x = 0 is identically zero. However, f(x) is not the zero function, so it does not equal its Taylor series around the origin. Thus, f is an example of a
non-analytic smooth function. This example shows that there are
infinitely differentiable functions in
real analysis whose Taylor series are
not equal to f(x) even if they converge. By contrast, the
holomorphic functions studied in
complex analysis always possess a convergent Taylor series, and even the Taylor series of a
meromorphic function, which might have singularities, never converges to a value different from the function itself. The complex function e^{-1/z^2}, however, does not approach 0 as z approaches 0 along the imaginary axis, so it is not
continuous in the complex plane and its Taylor series is undefined at 0. More generally, every sequence of real or complex numbers can appear as
coefficients in the Taylor series of an infinitely differentiable function defined on the real line, a consequence of
Borel's lemma. As a result, the
radius of convergence of a Taylor series can be zero. There are even infinitely differentiable functions defined on the real line whose Taylor series have a radius of convergence 0 everywhere. A function cannot be written as a Taylor series centered at a
singularity. In these cases, the function can still be expressed as a series expansion by allowing negative powers of the variable x. Such a series is known as a
Laurent series, which generalizes the Taylor series.
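The non-analytic smooth function f(x) = e^{-1/x^2} discussed above can also be examined numerically. A minimal sketch in Python (the central-difference estimate of f''(0) is an illustration of the vanishing derivatives, not a proof):

```python
import math

def f(x):
    # Smooth on all of R but non-analytic at 0: every derivative vanishes there
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# Central-difference estimate of f''(0): (f(h) - 2 f(0) + f(-h)) / h^2.
# It tends to 0 as h -> 0, consistent with f''(0) = 0.
for h in (0.5, 0.3, 0.2, 0.1):
    print(h, (f(h) - 2*f(0) + f(-h)) / h**2)

# The Taylor series of f at 0 is identically zero, so it converges
# everywhere -- but only to the zero function, not to f:
print(f(1.0))  # e^{-1} ≈ 0.3679, while the Taylor series gives 0
```

Already at h = 0.1 the difference quotient is on the order of 10^{-42}, while f(1) is visibly nonzero, exhibiting the gap between f and its Taylor series.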
== Radius of convergence and singularities ==

For any power series \sum_{n=0}^\infty c_n (x-a)^n, there is a number R called the
radius of convergence, which can be any non-negative number or +\infty, such that the power series converges absolutely for |x-a|<R and diverges for |x-a|>R. Thus, when a Taylor series converges, it does so in an open interval centered at a (in the real case), or a disc centered at a (in the complex case). A Taylor series may converge absolutely or conditionally at some, all, or none of the boundary points of the open interval/disc, and the quality of convergence at boundary points is an important question in many asymptotic problems. If a function is analytic at a, then its Taylor series converges
to the function in some open neighborhood of a. In complex analysis, the radius of convergence of the Taylor series of a
holomorphic function at a is the radius of the largest open disc centered at a on which the function remains holomorphic. In many common cases, this means that the radius of convergence is the distance from a to the nearest singularity of the function in the complex plane. This explains the different radii of convergence for certain Taylor series of functions familiar in calculus. The series for e^x, \sin x, and \cos x have infinite radius of convergence because these are
entire functions, having no singularities in the complex plane. By contrast, the Taylor series for \log(1+x) around x=0 has radius of convergence 1, because the nearest singularity of the function is at x=-1. However, the real singularities only provide part of the picture in general. For example, although 1/(1+x^2) is a smooth function for all real x, the radius of convergence of its Taylor series around x=0 is 1, because the nearest
complex singularities are at x=\pm i, which lie on the unit circle. Thus, even for real-valued functions, the role of complex singularities is important: a function can be infinitely differentiable on the whole real line and yet have a Taylor series with only a finite radius of convergence, because the limiting obstruction can come from singularities of the corresponding complex function rather than from any failure of smoothness on the real axis. A power series may converge at every point of the boundary of its disc of convergence and still fail to extend holomorphically beyond that disc. For example, if \alpha>0 is not an integer, then the
binomial series (1+x)^\alpha=\sum_{n=0}^\infty \binom{\alpha}{n}x^n has radius of convergence R=1. The series converges everywhere on the closed unit disc (including every boundary point). However, for nonintegral \alpha, the function (1+x)^\alpha does not extend as a single-valued holomorphic function to any neighborhood of x=-1. Thus the obstruction to analytic continuation at the boundary point x=-1 is not a failure of convergence of the power series, nor a
pole or
essential singularity, but the branching of the analytic continuation. In effect, x=-1 is a
branch point of the function. This illustrates that convergence on the closed disc is weaker than holomorphic extendibility beyond the boundary. The radius of convergence should not be confused with the quality of the approximation by low-degree Taylor polynomials. A Taylor polynomial may approximate a function accurately near the center even when the full series has only a small radius of convergence. Conversely, near the boundary of the disc of convergence, the full Taylor series may converge slowly. Outside the radius of convergence, the Taylor series does not represent the function at all.
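The role of the complex singularities at x = \pm i can be seen numerically. The Taylor series of 1/(1+x^2) around 0 is \sum_{n=0}^\infty (-1)^n x^{2n}, and a sketch of its partial sums (an illustration, not a convergence proof) shows convergence inside |x| < 1 and divergence outside, even though the function is smooth on the whole real line:

```python
def partial_sum(x, N):
    # Partial sum of the Taylor series of 1/(1+x^2) around 0:
    # sum_{n=0}^{N} (-1)^n x^{2n}
    return sum((-1)**n * x**(2*n) for n in range(N + 1))

def f(x):
    return 1.0 / (1.0 + x*x)

# Inside the radius of convergence (|x| < 1) the partial sums approach f:
print(abs(partial_sum(0.5, 50) - f(0.5)))  # negligibly small

# Outside (|x| > 1) the terms grow geometrically and the partial sums
# blow up, even though f itself is perfectly smooth at x = 2:
print(abs(partial_sum(2.0, 10)), abs(partial_sum(2.0, 20)))
```

At x = 2 the nth term has magnitude 4^n, so lengthening the partial sum only makes it larger: the series does not represent the function there.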
== Series with finite differences ==

One form of the
Gregory–Newton interpolation formula can be written as f(x)=\sum_{k=0}^\infty\frac{\Delta^k [f](a)}{k!} \,(x-a)_k which
expresses a polynomial f in terms of its
finite differences evaluated at a single point a, where (x-a)_k is the
falling factorial. For a polynomial, this series terminates and gives the polynomial exactly; more generally, a function admits a Gregory–Newton development under suitable analytic hypotheses, classically formulated by
Niels Erik Nørlund in terms of
holomorphy in a half-plane together with an
exponential type growth condition. There is, however, a generalization of the Taylor series that does converge to the value of the function itself for any
bounded continuous function on (0,\infty), and this can be done by using the calculus of
finite differences. Specifically, one has the following theorem, due to
Einar Hille, that for any t > 0, \lim_{h\to 0^+}\sum_{n=0}^\infty \frac{t^n}{n!}\frac{\Delta_h^nf(a)}{h^n} = f(a+t). Here \Delta_h^n is the nth finite difference operator with step size h. The series is precisely the Taylor series, except that divided differences appear in place of differentiation. When the function f is analytic at a, the terms in the series converge to the terms of the Taylor series, and in this sense this formula generalizes the usual Taylor series. In general, for any infinite sequence a_i, the following power series identity holds: \sum_{n=0}^\infty\frac{u^n}{n!}\Delta^na_i = e^{-u}\sum_{j=0}^\infty\frac{u^j}{j!}a_{i+j}. So in particular, f(a+t) = \lim_{h\to 0^+} e^{-t/h}\sum_{j=0}^\infty f(a+jh) \frac{(t/h)^j}{j!}. The series on the right is the
expected value of f(a+X), where X is a
Poisson-distributed random variable that takes the value jh with probability e^{-t/h}\frac{(t/h)^j}{j!}. Hence, f(a+t) = \lim_{h\to 0^+} \int_{-\infty}^\infty f(a+x)\,dP_{t/h,h}(x). The
law of large numbers implies that the identity holds.

== Analytic functions ==