Lagrange interpolation
We may write down the polynomial immediately in terms of Lagrange polynomials as:
\begin{align} p(x) &= \frac{(x-x_1)(x-x_2)\cdots(x-x_n)}{(x_0-x_1)(x_0-x_2)\cdots(x_0-x_n)} y_0 \\[4pt] &+ \frac{(x-x_0)(x-x_2)\cdots(x-x_n)}{(x_1-x_0)(x_1-x_2)\cdots(x_1-x_n)} y_1 \\[4pt] &+ \cdots \\[4pt] &+ \frac{(x-x_0)(x-x_1)\cdots(x-x_{n-1})}{(x_n-x_0)(x_n-x_1)\cdots(x_n-x_{n-1})} y_n \\[7pt] &= \sum_{i=0}^n \Biggl( \prod_{\substack{0\,\leq\, j\,\leq\, n \\ j\,\neq\, i}} \frac{x-x_j}{x_i-x_j} \Biggr) y_i = \sum_{i=0}^n \frac{\ell(x)}{\ell'(x_i)(x-x_i)}\, y_i, \end{align}
where \ell(x) := \prod_{j=0}^{n} (x-x_j) is the node polynomial. For matrix arguments, this formula is called Sylvester's formula and the matrix-valued Lagrange polynomials are the Frobenius covariants.
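To make the formula concrete, here is a minimal sketch (in Python; the function name `lagrange_eval` is our own, not a standard API) that evaluates the Lagrange form term by term:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial at x.

    xs, ys: interpolation nodes x_0..x_n and values y_0..y_n.
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        # Basis polynomial L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += basis * yi
    return total
```

Each evaluation point costs O(n^2) operations, one pass per basis polynomial.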
Newton interpolation
Theorem. Let p_n be the polynomial of degree at most n that interpolates f at the nodes x_i for i = 0, 1, 2, \ldots, n, and let p_{n+1} be the polynomial of degree at most n+1 that interpolates f at the nodes x_i for i = 0, 1, 2, \ldots, n, n+1. Then p_{n+1} is given by
p_{n+1}(x) = p_n(x) + a_{n+1} w_n(x),
where w_n(x) := \prod_{i=0}^n (x-x_i) is known as the Newton basis polynomial and
a_{n+1} := \frac{f(x_{n+1}) - p_n(x_{n+1})}{w_n(x_{n+1})}.
Proof: For i = 0, 1, 2, \ldots, n, the product w_n vanishes at every node, so
p_{n+1}(x_i) = p_n(x_i) + a_{n+1} \prod_{j=0}^n (x_i - x_j) = p_n(x_i) = f(x_i),
and for i = n+1,
p_{n+1}(x_{n+1}) = p_n(x_{n+1}) + \frac{f(x_{n+1}) - p_n(x_{n+1})}{w_n(x_{n+1})}\, w_n(x_{n+1}) = f(x_{n+1}).
By the uniqueness of the interpolating polynomial of degree at most n+1, p_{n+1}(x) = p_n(x) + a_{n+1} w_n(x) is the required polynomial interpolation. Applying this construction repeatedly, the polynomial can thus be expressed in the Newton form
p_n(x) = a_0 + a_1(x-x_0) + a_2(x-x_0)(x-x_1) + \cdots + a_n(x-x_0)\cdots(x-x_{n-1}).
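The theorem translates directly into an incremental procedure: keep the Newton coefficients a_0, \ldots, a_n in a list, and compute a_{n+1} from one simultaneous evaluation of p_n and w_n at the new node. A minimal sketch (the helper name `newton_extend` is our own):

```python
def newton_extend(xs, coeffs, x_new, y_new):
    """Given Newton coefficients a_0..a_n for nodes xs, add node (x_new, y_new).

    Returns the extended list a_0..a_{n+1}, using
    a_{n+1} = (f(x_{n+1}) - p_n(x_{n+1})) / w_n(x_{n+1}).
    """
    # Accumulate p_n(x_new) and w_n(x_new) = prod (x_new - x_i) in one pass.
    p, w = 0.0, 1.0
    for xi, a in zip(xs, coeffs):
        p += a * w
        w *= (x_new - xi)
    return coeffs + [(y_new - p) / w]
```

Because p_n and w_n are accumulated together, each added point costs only O(n) work.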
Polynomial coefficients
To find the a_i, we have to solve the lower triangular system formed by arranging p_n(x_i) = f(x_i) = y_i, from the equation above, in matrix form:
\begin{bmatrix} 1 & & & & 0 \\ 1 & x_1-x_0 & & & \\ 1 & x_2-x_0 & (x_2-x_0)(x_2-x_1) & & \\ \vdots & \vdots & & \ddots & \\ 1 & x_n-x_0 & \ldots & \ldots & \prod_{j=0}^{n-1}(x_n - x_j) \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ \vdots \\ y_n \end{bmatrix}
The coefficients are derived as
a_j := [y_0, \ldots, y_j],
where [y_0, \ldots, y_j] is the notation for divided differences. Thus, Newton polynomials provide a polynomial interpolation formula for n+1 points.

If the nodes are equally spaced with spacing h, the divided differences reduce to forward differences:
[y_j, y_{j+1}, \ldots, y_{j+n}] = \frac{1}{n!h^n}\Delta^{(n)} y_j.
Taking y_i = f(x_i), if the representation of x in the previous sections is instead taken to be x = x_j + sh, the
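The coefficients a_j = [y_0, \ldots, y_j] can be computed without forming the triangular matrix, via the standard divided-difference recurrence [y_i, \ldots, y_j] = ([y_{i+1}, \ldots, y_j] - [y_i, \ldots, y_{j-1}]) / (x_j - x_i). A sketch (the name `divided_differences` is our own):

```python
def divided_differences(xs, ys):
    """Compute the Newton coefficients a_j = [y_0, ..., y_j] in place."""
    coeffs = list(ys)
    n = len(xs)
    for level in range(1, n):
        # After this pass, coeffs[j] holds [y_{j-level}, ..., y_j];
        # sweeping downward keeps the lower-order entries still needed.
        for j in range(n - 1, level - 1, -1):
            coeffs[j] = (coeffs[j] - coeffs[j - 1]) / (xs[j] - xs[j - level])
    return coeffs
```

On exit, coeffs[j] equals [y_0, \ldots, y_j], the top edge of the divided-difference table, at a total cost of O(n^2).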
Newton forward interpolation formula is expressed as
f(x) \approx N(x) = N(x_j+sh) = \sum_{i=0}^{k} {s \choose i} \Delta^{(i)} f(x_j),
which is the interpolation of the points from x_j onward. It is expanded as
f(x_j+sh) = f(x_j) + \frac{s}{1!}\Delta f(x_j) + \frac{s(s-1)}{2!}\Delta^2 f(x_j) + \frac{s(s-1)(s-2)}{3!}\Delta^3 f(x_j) + \frac{s(s-1)(s-2)(s-3)}{4!}\Delta^4 f(x_j) + \cdots
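The forward formula lends itself to evaluation with a running binomial coefficient, since \binom{s}{i} = \binom{s}{i-1}\,(s-i+1)/i. A sketch assuming equally spaced nodes (the name `newton_forward` is our own):

```python
def newton_forward(ys, h, x0, x):
    """Newton forward interpolation on equally spaced nodes x0, x0+h, ...

    ys holds the sampled values f(x0), f(x0+h), ...; evaluates N(x0 + s*h).
    """
    s = (x - x0) / h
    deltas = list(ys)      # after i passes, deltas[0] == Delta^i f(x0)
    result = deltas[0]
    binom = 1.0            # C(s, 0)
    for i in range(1, len(ys)):
        deltas = [b - a for a, b in zip(deltas, deltas[1:])]
        binom *= (s - (i - 1)) / i   # C(s, i) = C(s, i-1) * (s - i + 1) / i
        result += binom * deltas[0]
    return result
```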
Newton backward formula
If the nodes are reordered as x_k, x_{k-1}, \ldots, x_0, the Newton polynomial becomes
N(x) = [y_k] + [y_k, y_{k-1}](x-x_k) + \cdots + [y_k, \ldots, y_0](x-x_k)(x-x_{k-1})\cdots(x-x_1).
If x_k, x_{k-1}, \ldots, x_0 are equally spaced with x_i = x_k - (k-i)h for i = 0, 1, \ldots, k and x = x_k + sh, then
\begin{align} N(x) &= [y_k] + [y_k, y_{k-1}] sh + \cdots + [y_k, \ldots, y_0] s(s+1)\cdots(s+k-1) h^k \\ &= \sum_{i=0}^{k} (-1)^i {-s \choose i} i!\, h^i [y_k, \ldots, y_{k-i}]. \end{align}
Since the relationship between divided differences and backward differences is given as
[y_j, y_{j-1}, \ldots, y_{j-n}] = \frac{1}{n!h^n}\nabla^{(n)} y_j,
taking y_i = f(x_i), if the representation of x in the previous sections is instead taken to be x = x_j + sh, the Newton backward interpolation formula is expressed as
f(x) \approx N(x) = N(x_j+sh) = \sum_{i=0}^{k} (-1)^i {-s \choose i} \nabla^{(i)} f(x_j),
which is the interpolation of x_j and the points before it. It is expanded as
f(x_j+sh) = f(x_j) + \frac{s}{1!}\nabla f(x_j) + \frac{s(s+1)}{2!}\nabla^2 f(x_j) + \frac{s(s+1)(s+2)}{3!}\nabla^3 f(x_j) + \frac{s(s+1)(s+2)(s+3)}{4!}\nabla^4 f(x_j) + \cdots
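The backward formula can be evaluated the same way, with the running factor s(s+1)\cdots(s+i-1)/i! and the backward differences read off the bottom edge of the difference table. A sketch (the name `newton_backward` is our own):

```python
def newton_backward(ys, h, xk, x):
    """Newton backward interpolation on equally spaced nodes.

    ys holds f at x_0, ..., x_k (spacing h), xk is the last node;
    evaluates N(xk + s*h).
    """
    s = (x - xk) / h
    nablas = list(ys)      # after i passes, nablas[-1] == nabla^i f(x_k)
    result = nablas[-1]
    binom = 1.0            # (-1)^0 * C(-s, 0)
    for i in range(1, len(ys)):
        nablas = [b - a for a, b in zip(nablas, nablas[1:])]
        binom *= (s + (i - 1)) / i   # builds s(s+1)...(s+i-1)/i!
        result += binom * nablas[-1]
    return result
```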
Lozenge diagram
A Lozenge diagram is a diagram that is used to describe the different interpolation formulas that can be constructed for a given data set. A line starting on the left edge and tracing across the diagram to the right can be used to represent an interpolation formula if the following rules are followed:
• Left-to-right steps indicate addition, whereas right-to-left steps indicate subtraction.
• If the slope of a step is positive, the term to be used is the product of the difference and the factor immediately below it. If the slope of a step is negative, the term to be used is the product of the difference and the factor immediately above it.
• If a step is horizontal and passes through a factor, use the product of the factor and the average of the two terms immediately above and below it. If a step is horizontal and passes through a difference, use the product of the difference and the average of the two terms immediately above and below it.
The factors are expressed using the formula:
C(u+k,n) = \frac{(u+k)(u+k-1)\cdots(u+k-n+1)}{n!}
Proof of equivalence
If a path goes from \Delta^{n-1} y_s to \Delta^{n+1} y_{s-1}, it can connect through three intermediate steps: (a) through \Delta^{n} y_{s-1}, (b) through \Delta^{n} y_s, or (c) horizontally through C(u-s, n). Proving the equivalence of these three two-step paths shows that all n-step paths with the same start and end points represent the same formula.
Path (a): C(u-s, n) \Delta^n y_{s-1} + C(u-s+1, n+1) \Delta^{n+1} y_{s-1}
Path (b): C(u-s, n) \Delta^n y_s + C(u-s, n+1) \Delta^{n+1} y_{s-1}
Path (c): C(u-s, n) \frac{\Delta^n y_{s-1} + \Delta^n y_s}{2} + \frac{C(u-s+1, n+1) + C(u-s, n+1)}{2} \Delta^{n+1} y_{s-1}
Subtracting the contributions of paths (a) and (b):
\begin{aligned} \text{Path (a)} - \text{Path (b)} &= C(u-s, n)(\Delta^n y_{s-1} - \Delta^n y_s) + (C(u-s+1, n+1) - C(u-s, n+1)) \Delta^{n+1} y_{s-1} \\ &= -C(u-s, n) \Delta^{n+1} y_{s-1} + C(u-s, n) \frac{(u-s+1)-(u-s-n)}{n+1} \Delta^{n+1} y_{s-1} \\ &= C(u-s, n)(-\Delta^{n+1} y_{s-1} + \Delta^{n+1} y_{s-1}) = 0, \end{aligned}
using \Delta^n y_{s-1} - \Delta^n y_s = -\Delta^{n+1} y_{s-1} in the first term. Thus, the contributions of paths (a) and (b) are the same. Since path (c) is the average of paths (a) and (b), it contributes an identical term to the polynomial. Hence the equivalence of paths with the same start and end points is shown.
To check whether the starting point can be shifted in the leftmost column, considering two-step paths is sufficient: (a) starting from y_{s+1} and passing to y_s through \Delta y_s, (b) starting from the factor between y_{s+1} and y_s and passing to y_s through \Delta y_s, or (c) starting from y_s.
Path (a): y_{s+1} + C(u-s-1, 1) \Delta y_s - C(u-s, 1) \Delta y_s
Path (b): \frac{y_{s+1}+y_s}{2} + \frac{C(u-s-1, 1) + C(u-s, 1)}{2} \Delta y_s - C(u-s, 1) \Delta y_s
Path (c): y_s
Since \Delta y_s = y_{s+1} - y_s, substituting into the above equations shows that all three expressions reduce to y_s and are hence equivalent. Hence these paths can be morphed to start from the leftmost column and end at a common point.
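The two-step identity Path (a) = Path (b) can also be spot-checked numerically; the values of u, s, n and the differences below are hypothetical, chosen only to exercise the identity:

```python
def C(u, n):
    """Generalized binomial coefficient C(u, n) = u(u-1)...(u-n+1)/n! for real u."""
    num, fact = 1.0, 1
    for k in range(n):
        num *= (u - k)
        fact *= (k + 1)
    return num / fact

# Hypothetical sample values (not from any data set).
u, s, n = 2.7, 1, 3
d_prev = 0.8           # stands in for Delta^n y_{s-1}
d_next = -1.3          # stands in for Delta^n y_s
d2 = d_next - d_prev   # Delta^{n+1} y_{s-1}

path_a = C(u - s, n) * d_prev + C(u - s + 1, n + 1) * d2
path_b = C(u - s, n) * d_next + C(u - s, n + 1) * d2
path_c = (C(u - s, n) * (d_prev + d_next) / 2
          + (C(u - s + 1, n + 1) + C(u - s, n + 1)) / 2 * d2)
assert abs(path_a - path_b) < 1e-9 and abs(path_a - path_c) < 1e-9
```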
causing large errors when computing the coefficients if the system of equations is solved using Gaussian elimination. Several authors have therefore proposed algorithms which exploit the structure of the Vandermonde matrix to compute numerically stable solutions in O(n^2) operations instead of the O(n^3) required by Gaussian elimination. These methods rely on constructing first a Newton interpolation of the polynomial and then converting it to a monomial form.
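The Newton-to-monomial conversion mentioned here amounts to expanding the nested representation p(x) = a_0 + (x-x_0)(a_1 + (x-x_1)(\cdots)) from the inside out. A sketch in Python (the name `newton_to_monomial` is our own):

```python
def newton_to_monomial(xs, coeffs):
    """Convert Newton coefficients a_0..a_n (with centers x_0..x_{n-1})
    to monomial coefficients c_0..c_n, lowest degree first."""
    poly = [coeffs[-1]]
    centers = xs[:len(coeffs) - 1]
    for a, c in zip(reversed(coeffs[:-1]), reversed(centers)):
        # poly := a + (x - c) * poly, as a coefficient list
        shifted = [0.0] + poly                     # x * poly
        scaled = [-c * t for t in poly] + [0.0]    # -c * poly
        poly = [u + v for u, v in zip(shifted, scaled)]
        poly[0] += a
    return poly
```

For example, the Newton form of x^2 on the nodes 0, 1, 2 has coefficients [0, 1, 1], and the conversion recovers the monomial coefficients [0, 0, 1].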
Non-Vandermonde algorithms
To find the interpolation polynomial p(x) in the vector space P(n) of polynomials of degree at most n, we may use the usual monomial basis for P(n) and invert the Vandermonde matrix by Gaussian elimination, giving a computational cost of O(n^3) operations. To improve this algorithm, a more convenient basis for P(n) can simplify the calculation of the coefficients, which must then be translated back in terms of the monomial basis.

One method is to write the interpolation polynomial in the Newton form (i.e. using the Newton basis) and use the method of divided differences to construct the coefficients, e.g. Neville's algorithm. The cost is O(n^2) operations. Furthermore, only O(n) extra work is needed if an extra point is added to the data set, while for the other methods the whole computation has to be redone.

Another method is preferred when the aim is not to compute the
coefficients of p(x), but only a single value p(a) at a point x = a not in the original data set. The Lagrange form computes the value p(a) with complexity O(n^2).

The Bernstein form was used in a constructive proof of the Weierstrass approximation theorem by Bernstein and has gained great importance in computer graphics in the form of Bézier curves.

== Interpolations as linear combinations of values ==