
Lagrange polynomial

In numerical analysis, the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data.

Definition
Given a set of k+1 nodes \{x_0, x_1, \ldots, x_k\}, which must all be distinct, x_j \neq x_m for indices j \neq m, the Lagrange basis for polynomials of degree \leq k for those nodes is the set of polynomials \{\ell_0(x), \ell_1(x), \ldots, \ell_k(x)\}, each of degree k, which take values \ell_j(x_m) = 0 if m \neq j and \ell_j(x_j) = 1. Using the Kronecker delta this can be written \ell_j(x_m) = \delta_{jm}. Each basis polynomial can be explicitly described by the product:

\begin{aligned} \ell_j(x) &= \frac{(x-x_0)}{(x_j-x_0)} \cdots \frac{(x-x_{j-1})}{(x_j-x_{j-1})} \cdot \frac{(x-x_{j+1})}{(x_j-x_{j+1})} \cdots \frac{(x-x_k)}{(x_j-x_k)} \\[8mu] &= \prod_{\begin{smallmatrix}0\le m\le k\\ m\neq j\end{smallmatrix}} \frac{x-x_m}{x_j-x_m}. \end{aligned}

Notice that the numerator \textstyle \prod_{m \neq j}(x - x_m) has roots at the nodes \textstyle \{x_m\}_{m \neq j}, while the denominator \textstyle \prod_{m \neq j}(x_j - x_m) scales the resulting polynomial so that \ell_j(x_j) = 1.

The Lagrange interpolating polynomial for those nodes through the corresponding values \{y_0, y_1, \ldots, y_k\} is the linear combination:

L(x) = \sum_{j=0}^{k} y_j \ell_j(x).

Each basis polynomial has degree k, so the sum has degree \leq k, and it interpolates the data because \textstyle L(x_m) = \sum_{j=0}^{k} y_j \ell_j(x_m) = \sum_{j=0}^{k} y_j \delta_{mj} = y_m.

The interpolating polynomial is unique. Proof: assume some other polynomial M(x) of degree \leq k interpolates the data. Then the difference M(x) - L(x) is zero at the k+1 distinct nodes \{x_0, x_1, \ldots, x_k\}. But the only polynomial of degree \leq k with more than k roots is the constant zero function, so M(x) - L(x) = 0, or M(x) = L(x).
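The definition above translates directly into code. The following is a minimal sketch (function names are illustrative, not from any standard library) that evaluates each basis polynomial as the product over m \neq j and forms the linear combination:

```python
def lagrange_basis(nodes, j, x):
    """Evaluate the j-th Lagrange basis polynomial ell_j at x."""
    result = 1.0
    for m, xm in enumerate(nodes):
        if m != j:
            result *= (x - xm) / (nodes[j] - xm)
    return result

def lagrange_interpolate(nodes, values, x):
    """Evaluate L(x) = sum_j y_j * ell_j(x) directly from the definition."""
    return sum(y * lagrange_basis(nodes, j, x) for j, y in enumerate(values))
```

At a node x_m, lagrange_basis returns the Kronecker delta \delta_{jm}, so the interpolant reproduces the data exactly.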
Barycentric form
Each Lagrange basis polynomial can be rewritten as the product of three parts: a function \ell(x) = \prod_{m=0}^{k} (x - x_m) common to every basis polynomial, a node-specific constant w_j = \prod_{m\neq j}(x_j - x_m)^{-1} (called the barycentric weight), and a part representing the displacement from x_j to x:

\ell_j(x) = \ell(x) \dfrac{w_j}{x - x_j}

By factoring \ell(x) out from the sum, we can write the Lagrange polynomial in the so-called first barycentric form:

L(x) = \ell(x) \sum_{j=0}^k \frac{w_j}{x-x_j} y_j.

If the weights w_j have been pre-computed, this requires only \mathcal{O}(k) operations, compared to \mathcal{O}(k^2) for evaluating each Lagrange basis polynomial \ell_j(x) individually. (See Big O notation.) The barycentric interpolation formula can also easily be updated to incorporate a new node x_{k+1}, by dividing each of the w_j, j = 0, \ldots, k, by (x_j - x_{k+1}) and constructing the new w_{k+1} as above.

For any x,

\sum_{j=0}^k \ell_j(x) = 1

because the constant function g(x) = 1 is the unique polynomial of degree \leq k interpolating the data \{(x_0, 1), (x_1, 1), \ldots, (x_k, 1)\}. We can thus further simplify the barycentric formula by dividing:

\begin{aligned} L(x) &= \ell(x) \sum_{j=0}^k \frac{w_j}{x-x_j}y_j \Bigg/ \ell(x) \sum_{j=0}^k \frac{w_j}{x-x_j} \\[10mu] &= \sum_{j=0}^k \frac{w_j}{x-x_j}y_j \Bigg/ \sum_{j=0}^k \frac{w_j}{x-x_j}. \end{aligned}

This is called the second form or true form of the barycentric interpolation formula.
This second form has advantages in computation cost and accuracy: it avoids evaluation of \ell(x); the work to compute each term w_j/(x-x_j) in the denominator has already been done in computing \bigl(w_j/(x-x_j)\bigr)y_j, so computing the sum in the denominator costs only k addition operations; and for evaluation points x close to one of the nodes x_j, catastrophic cancellation would ordinarily be a problem for the value (x-x_j), but this quantity appears in both numerator and denominator and the two cancel, leaving good relative accuracy in the final result.

Using this formula to evaluate L(x) at one of the nodes x_j will result in the indeterminate \infty \cdot y_j/\infty; computer implementations must replace such results by L(x_j) = y_j.

Each Lagrange basis polynomial can also be written in barycentric form:

\ell_j(x) = \frac{w_j}{x-x_j} \Bigg/ \sum_{m=0}^k \frac{w_m}{x-x_m}.
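The second barycentric form, including the required special case at a node, can be sketched as follows (a minimal illustration, with hypothetical function names):

```python
import math

def barycentric_weights(nodes):
    """w_j = 1 / prod_{m != j} (x_j - x_m)."""
    return [1.0 / math.prod(xj - xm for m, xm in enumerate(nodes) if m != j)
            for j, xj in enumerate(nodes)]

def barycentric_eval(nodes, values, weights, x):
    """Second (true) barycentric form; returns y_j exactly when x is a node."""
    num = den = 0.0
    for xj, yj, wj in zip(nodes, values, weights):
        if x == xj:              # replace the indeterminate inf/inf by y_j
            return yj
        t = wj / (x - xj)        # shared between numerator and denominator
        num += t * yj
        den += t
    return num / den
```

Once the weights are computed, each evaluation costs O(k) operations; the node check implements the replacement L(x_j) = y_j described above.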
A perspective from linear algebra
Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix. Using a standard monomial basis for our interpolation polynomial L(x) = \sum_{j=0}^k m_j x^j, we must invert the Vandermonde matrix (x_i)^j to solve L(x_i) = y_i for the coefficients m_j of L(x). By choosing a better basis, the Lagrange basis, L(x) = \sum_{j=0}^k y_j \ell_j(x), we merely get the identity matrix \delta_{ij}, which is its own inverse: the Lagrange basis automatically inverts the analog of the Vandermonde matrix.

This construction is analogous to the Chinese remainder theorem. Instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by the linear polynomials (x - x_i).

Furthermore, when the order is large, the fast Fourier transform can be used to solve for the coefficients of the interpolated polynomial.
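The claim that the Lagrange basis reduces the interpolation matrix to the identity can be checked numerically: evaluating each \ell_j at each node x_i gives \delta_{ij}. A short sketch:

```python
def lagrange_basis(nodes, j, x):
    """Evaluate ell_j(x) as the product over m != j."""
    p = 1.0
    for m, xm in enumerate(nodes):
        if m != j:
            p *= (x - xm) / (nodes[j] - xm)
    return p

nodes = [1.0, 2.0, 3.0]
# In the Lagrange basis, the matrix ell_j(x_i) plays the role that the
# Vandermonde matrix plays in the monomial basis -- and it is the identity.
matrix = [[lagrange_basis(nodes, j, xi) for j in range(len(nodes))]
          for xi in nodes]
```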
Example
We wish to interpolate f(x) = x^2 over the domain 1 \leq x \leq 3 at the three nodes \{1,\, 2,\, 3\}: \begin{align} x_0 & = 1, & & & y_0 = f(x_0) & = 1, \\[3mu] x_1 & = 2, & & & y_1 = f(x_1) & = 4, \\[3mu] x_2 & = 3, & & & y_2 = f(x_2) & = 9. \end{align} The node polynomial \ell is \ell(x) = (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6. The barycentric weights are \begin{align} w_0 &= (1-2)^{-1}(1-3)^{-1} = \tfrac12, \\[3mu] w_1 &= (2-1)^{-1}(2-3)^{-1} = -1, \\[3mu] w_2 &= (3-1)^{-1}(3-2)^{-1} = \tfrac12. \end{align} The Lagrange basis polynomials are \begin{align} \ell_0(x) &= \frac{x - 2}{1 - 2}\cdot\frac{x - 3}{1 - 3} = \tfrac12x^2 - \tfrac52x + 3, \\[5mu] \ell_1(x) &= \frac{x - 1}{2 - 1}\cdot\frac{x - 3}{2 - 3} = -x^2 + 4x - 3, \\[5mu] \ell_2(x) &= \frac{x - 1}{3 - 1}\cdot\frac{x - 2}{3 - 2} = \tfrac12x^2 - \tfrac32x + 1. \end{align} The Lagrange interpolating polynomial is: \begin{align} L(x) &= y_0 \cdot \ell_0(x) + y_1 \cdot \ell_1(x) + y_2 \cdot \ell_2(x) = x^2. \end{align} In (second) barycentric form, L(x) = \frac {\displaystyle \sum_{j=0}^2 \frac{w_j}{x-x_j}y_j} {\displaystyle \sum_{j=0}^2 \frac{w_j}{x-x_j}} = \frac {\displaystyle \frac{\tfrac12}{x - 1} + \frac{-4}{x - 2} + \frac{\tfrac92}{x - 3}} {\displaystyle \frac{\tfrac12}{x - 1} + \frac{-1}{x - 2} + \frac{\tfrac12}{x - 3}}.
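The worked example can be verified numerically. The following sketch codes the three basis polynomials found above and checks that their combination with y = (1, 4, 9) reproduces x^2 (and that the basis sums to 1, as the barycentric section showed):

```python
# Basis polynomials from the worked example (nodes 1, 2, 3)
def l0(x): return 0.5 * x * x - 2.5 * x + 3
def l1(x): return -x * x + 4 * x - 3
def l2(x): return 0.5 * x * x - 1.5 * x + 1

# L(x) = y0*l0 + y1*l1 + y2*l2 with y = (1, 4, 9); this should equal x^2
def L(x): return 1 * l0(x) + 4 * l1(x) + 9 * l2(x)
```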
Remainder in Lagrange interpolation formula
When interpolating a given function f by a polynomial of degree k at the nodes x_0,\dots, x_k we get the remainder R(x) = f(x) - L(x), which can be expressed as

\begin{align} R(x) &= f[x_0,\ldots,x_k,x] \ell(x) \\[1ex] &= \ell(x) \frac{f^{(k+1)}(\xi)}{(k+1)!}, \qquad x_0 < \xi < x_k, \end{align}

where f[x_0,\ldots,x_k,x] is the notation for divided differences. Alternatively, the remainder can be expressed as a contour integral in the complex domain as

\begin{align} R(x) &= \frac{\ell(x)}{2\pi i} \int_C \frac{f(t)}{(t-x)(t-x_0) \cdots (t-x_k)} \, dt \\[1ex] &= \frac{\ell(x)}{2\pi i} \int_C \frac{f(t)}{(t-x)\ell(t)} \, dt. \end{align}

The remainder can be bounded as

|R(x)| \leq \frac{(x_k-x_0)^{k+1}}{(k+1)!}\max_{x_0 \leq \xi \leq x_k} |f^{(k+1)}(\xi)|.

Derivation

Clearly, R(x) is zero at the nodes. To find R(x) at a point x_p that is not a node, define a new function

F(x) = R(x)-\tilde{R}(x) = f(x)-L(x)-\tilde{R}(x)

and choose \tilde{R}(x) = C\cdot\prod_{i=0}^k(x-x_i), where C is the constant we are required to determine for the given x_p. We choose C so that F(x_p) = 0; then F(x) has k+2 zeroes (at all nodes and at x_p) between x_0 and x_k (including endpoints). Assuming that f(x) is (k+1)-times differentiable, and since L(x) and \tilde{R}(x) are polynomials and therefore infinitely differentiable, F(x) will be (k+1)-times differentiable. By Rolle's theorem, F^{(1)}(x) has k+1 zeroes, F^{(2)}(x) has k zeroes, ..., and F^{(k+1)}(x) has 1 zero, say \xi, with x_0 < \xi < x_k. Explicitly writing F^{(k+1)}(\xi):

F^{(k+1)}(\xi)=f^{(k+1)}(\xi)-L^{(k+1)}(\xi)-\tilde{R}^{(k+1)}(\xi)

Here L^{(k+1)}=0 and \tilde{R}^{(k+1)}=C\cdot(k+1)! (because the highest power of x in \tilde{R}(x) is k+1), so

0=f^{(k+1)}(\xi)-C\cdot(k+1)!

The equation can be rearranged as

C=\frac{f^{(k+1)}(\xi)}{(k+1)!}

Since F(x_p) = 0 we have

R(x_p)=\tilde{R}(x_p) = \frac{f^{(k+1)}(\xi)}{(k+1)!} \prod_{i=0}^k(x_p-x_i)
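The stated bound can be checked numerically. The sketch below interpolates f(x) = \sin x at three nodes (k = 2), so |f'''(\xi)| = |\cos \xi| \leq 1 and the bound becomes (\pi - 0)^3/3!; the choice of function and nodes is purely illustrative:

```python
import math

def lagrange_eval(nodes, values, x):
    """Evaluate the interpolating polynomial directly from the definition."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(nodes, values)):
        term = yj
        for m, xm in enumerate(nodes):
            if m != j:
                term *= (x - xm) / (xj - xm)
        total += term
    return total

# Interpolate f(x) = sin(x) at nodes 0, pi/2, pi (so k = 2).
nodes = [0.0, math.pi / 2, math.pi]
values = [math.sin(t) for t in nodes]

# |f'''(xi)| = |cos(xi)| <= 1, so |R(x)| <= (pi - 0)^3 / 3!.
bound = math.pi ** 3 / math.factorial(3)
worst = max(abs(math.sin(x) - lagrange_eval(nodes, values, x))
            for x in [i * math.pi / 100 for i in range(101)])
```

The observed worst-case error on a grid stays well below the (rather loose) bound, as expected.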
Derivatives
The dth derivative of a Lagrange interpolating polynomial can be written in terms of the derivatives of the basis polynomials,

L^{(d)}(x) := \sum_{j=0}^{k} y_j \ell_j^{(d)}(x).

Recall (see above) that each Lagrange basis polynomial is

\begin{aligned} \ell_j(x) &= \prod_{\begin{smallmatrix}m = 0\\ m\neq j\end{smallmatrix}}^k \frac{x-x_m}{x_j-x_m}. \end{aligned}

The first derivative can be found using the product rule:

\begin{align} \ell_j'(x) &= \sum_{\begin{smallmatrix}i=0 \\ i\not=j\end{smallmatrix}}^k \Biggl[ \frac{1}{x_j-x_i}\prod_{\begin{smallmatrix}m=0 \\ m\not = (i , j)\end{smallmatrix}}^k \frac{x-x_m}{x_j-x_m} \Biggr] \\[5mu] &= \ell_j(x)\sum_{\begin{smallmatrix}i=0 \\i\not=j\end{smallmatrix}}^k \frac{1}{x-x_i}. \end{align}

The second derivative is

\begin{align} \ell_j''(x) &= \sum_{\begin{smallmatrix}i=0 \\ i\ne j\end{smallmatrix}}^{k} \frac{1}{x_j-x_i} \Biggl[ \sum_{\begin{smallmatrix}m=0 \\ m\ne(i,j)\end{smallmatrix}}^{k} \Biggl( \frac{1}{x_j-x_m} \prod_{\begin{smallmatrix}n=0 \\ n\ne(i,j,m)\end{smallmatrix}}^{k} \frac{x-x_n}{x_j-x_n} \Biggr) \Biggr] \\[10mu] &= \ell_j(x) \sum_{\begin{smallmatrix}0 \leq i < m \leq k \\ i, m \neq j\end{smallmatrix}} \frac{2}{(x-x_i)(x-x_m)}. \end{align}

The third derivative is

\begin{align} \ell_j'''(x) &= \ell_j(x) \sum_{\begin{smallmatrix}0 \leq i < m < n \leq k \\ i, m, n \neq j\end{smallmatrix}} \frac{6}{(x-x_i)(x-x_m)(x-x_n)} \end{align}

and likewise for higher derivatives.

Note that all of these formulas for derivatives are invalid at or near a node. A method of evaluating all orders of derivatives of a Lagrange polynomial efficiently at all points of the domain, including the nodes, is converting the Lagrange polynomial to power basis form and then evaluating the derivatives.
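The product-rule form of the first derivative translates directly into code. The following sketch (illustrative names, valid only away from the nodes, as noted above) evaluates \ell_j'(x) by the outer sum over i \neq j with the inner product over m \neq i, j:

```python
def lagrange_basis_deriv(nodes, j, x):
    """ell_j'(x) via the product-rule sum (valid when x is not a node)."""
    total = 0.0
    for i in range(len(nodes)):
        if i == j:
            continue
        term = 1.0 / (nodes[j] - nodes[i])
        for m, xm in enumerate(nodes):
            if m != j and m != i:
                term *= (x - xm) / (nodes[j] - xm)
        total += term
    return total

def lagrange_deriv(nodes, values, x):
    """L'(x) = sum_j y_j * ell_j'(x)."""
    return sum(y * lagrange_basis_deriv(nodes, j, x)
               for j, y in enumerate(values))
```

For the data (1, 1), (2, 4), (3, 9) the interpolant is L(x) = x^2, so L'(x) = 2x away from the nodes.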
Finite fields
The Lagrange polynomial can also be computed in finite fields. This has applications in cryptography, such as in Shamir's Secret Sharing scheme.
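Over a prime field GF(p), division in the basis polynomials becomes multiplication by a modular inverse. In Shamir's scheme the shares are points (x_j, y_j) on a secret polynomial and the secret is its constant term L(0). A minimal sketch (the prime and share values below are illustrative assumptions):

```python
P = 2**127 - 1  # a large Mersenne prime; the choice of field is illustrative

def lagrange_at_zero(points, p=P):
    """Evaluate the interpolating polynomial at x = 0 over GF(p).

    In Shamir's Secret Sharing, the shares are points (x_j, y_j)
    and the recovered secret is the constant term L(0)."""
    total = 0
    for j, (xj, yj) in enumerate(points):
        num = den = 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (-xm) % p       # numerator of ell_j(0)
                den = den * (xj - xm) % p   # denominator of ell_j(0)
        total = (total + yj * num * pow(den, -1, p)) % p
    return total
```

Here `pow(den, -1, p)` (Python 3.8+) computes the modular inverse, replacing the real-number division in the basis-polynomial formula.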