The forward difference can be considered as an operator, called the difference operator, which maps the function f to \Delta_h[f]. This operator amounts to \Delta_h = \operatorname{T}_h - \operatorname{I} , where \operatorname{T}_h is the shift operator with step h, defined by \operatorname{T}_h[f](x) = f(x+h), and \operatorname{I} is the identity operator.

The finite difference of higher orders can be defined in a recursive manner as \Delta_h^n \equiv \Delta_h \bigl( \Delta_h^{n-1} \bigr). Another equivalent definition is \Delta_h^n = \bigl[ \operatorname{T}_h - \operatorname{I} \bigr]^n.

The difference operator \Delta_h is a linear operator; as such it satisfies \Delta_h( \alpha f + \beta g ) = \alpha \, \Delta_h f + \beta \, \Delta_h g. It also satisfies a special Leibniz rule: \Delta_h\bigl( f(x) g(x) \bigr) = \bigl( \Delta_h f(x) \bigr) g(x+h) + f(x) \bigl( \Delta_h g(x) \bigr) ~. Similar Leibniz rules hold for the backward and central differences.

Formally applying the Taylor series with respect to h yields the operator equation \Delta_h = h\operatorname{D} + \frac{1}{2!} h^2\operatorname{D}^2 + \frac{1}{3!} h^3\operatorname{D}^3 + \cdots = e^{h\operatorname{D}} - \operatorname{I} , where \operatorname{D} denotes the conventional, continuous derivative operator, mapping f to its derivative f'. The expansion is valid when both sides act on analytic functions, for sufficiently small h; in the special case that the series of derivatives terminates (when the function operated on is a finite polynomial) the expression is exact, for all finite step sizes h. Thus \operatorname{T}_h = e^{h\operatorname{D}}, and formally inverting the exponential yields h\operatorname{D} = \ln(1+\Delta_h) = \Delta_h - \tfrac{1}{2} \, \Delta_h^2 + \tfrac{1}{3} \, \Delta_h^3 - \cdots ~. This formula holds in the sense that both operators give the same result when applied to a polynomial. Even for analytic functions, the series on the right is not guaranteed to converge; it may be an
asymptotic series. However, it can be used to obtain more accurate approximations for the derivative. For instance, retaining the first two terms of the series yields the second-order approximation to f'(x) mentioned above. The analogous formulas for the backward and central difference operators are h\operatorname{D} = -\ln(1-\nabla_h) \quad \text{ and } \quad h\operatorname{D} = 2 \operatorname{arsinh}\left(\tfrac12 \, \delta_h\right) ~.

The calculus of finite differences is related to the umbral calculus of combinatorics. This remarkably systematic correspondence is due to the identity of the commutators of the umbral quantities to their continuum analogs (h \to 0 limits), \left[ \frac{\Delta_h}{h} , x\, \operatorname{T}^{-1}_h \right] = [ \operatorname{D} , x ] = \operatorname{I} ~. A large number of formal differential relations of standard calculus involving functions f(x) thus systematically map to umbral finite-difference analogs involving f\left(x \operatorname{T}_h^{-1}\right). For instance, the umbral analog of a monomial x^n is a generalization of the above falling factorial (Pochhammer k-symbol), (x)_n = \left( x \operatorname{T}_h^{-1}\right)^n = x \left( x - h \right)\left( x - 2 h \right) \cdots \bigl( x - ( n - 1 ) h \bigr), so that \frac{\Delta_h}{h} (x)_n = n (x)_{n-1} ~, hence the above Newton interpolation formula (by matching coefficients in the expansion of an arbitrary function f(x) in such symbols), and so on. For example, the umbral sine is \sin \left(x \operatorname{T}_h^{-1}\right) = x -\frac{(x)_3}{3!} + \frac{(x)_5}{5!} - \frac{(x)_7}{7!} + \cdots

As in the continuum limit, the eigenfunction of \frac{\Delta_h}{h} also happens to be an exponential, \frac{\Delta_h}{h}(1+\lambda h)^\frac{x}{h} = \frac{\Delta_h}{h} e^{\frac{x}{h} \ln (1 + \lambda h)} = \lambda e^{\frac{x}{h} \ln (1 + \lambda h)} ~, and hence Fourier sums of continuum functions are readily, faithfully mapped to umbral Fourier sums, i.e., involving the same Fourier coefficients multiplying these umbral basis exponentials. This umbral exponential thus amounts to the exponential generating function of the Pochhammer symbols. Thus, for instance, the Dirac delta function maps to its umbral correspondent, the cardinal sine function, \delta (x) \mapsto \frac{\sin \left[ \frac{\pi}{2}\left(1+\frac{x}{h}\right) \right]}{ \pi (x+h) } ~, and so forth.
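The two umbral identities above are exact, not merely approximate, and can be checked numerically. The following Python sketch (all function names here are illustrative, not from any standard library) verifies that \frac{\Delta_h}{h} (x)_n = n (x)_{n-1} and that (1+\lambda h)^{x/h} is an eigenfunction of \frac{\Delta_h}{h} with eigenvalue \lambda:

```python
from functools import reduce

h = 0.1

def fwd(f, h):
    """Forward difference operator Delta_h: returns x -> f(x+h) - f(x)."""
    return lambda x: f(x + h) - f(x)

def falling(x, n, h):
    """Umbral monomial (x)_n = x (x-h) (x-2h) ... (x-(n-1)h)."""
    return reduce(lambda p, k: p * (x - k * h), range(n), 1.0)

# Delta_h/h applied to (x)_n equals n (x)_{n-1}, up to floating-point rounding.
x, n = 1.7, 4
lhs = fwd(lambda t: falling(t, n, h), h)(x) / h
rhs = n * falling(x, n - 1, h)
print(abs(lhs - rhs) < 1e-9)  # True

# Eigenfunction property: Delta_h/h maps (1 + lam*h)**(x/h) to lam times itself.
lam = 0.5
g = lambda t: (1 + lam * h) ** (t / h)
print(abs(fwd(g, h)(x) / h - lam * g(x)) < 1e-9)  # True
```

Both checks succeed for any choice of x, h, n, and \lambda, which is the content of the umbral correspondence: the discrete operator acts on the umbral symbols exactly as \operatorname{D} acts on their continuum analogs.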
Difference equations can often be solved with techniques very similar to those for solving
differential equations. The inverse operator of the forward difference operator, and thus the umbral integral, is the
indefinite sum or antidifference operator.
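As a concrete illustration, here is a minimal Python sketch of the antidifference for unit step h = 1 and integer arguments (`antidifference` is a name chosen here for illustration): the indefinite sum F of f is built by accumulation, so that applying the forward difference to F recovers f.

```python
def antidifference(f):
    """Return F with F(0) = 0 and F(n) = f(0) + f(1) + ... + f(n-1),
    so that the forward difference of F is f:  F(n+1) - F(n) = f(n)."""
    def F(n):
        return sum(f(k) for k in range(n))
    return F

f = lambda n: n * n          # example summand
F = antidifference(f)

# Delta F = f, the defining property of the indefinite sum.
print(all(F(n + 1) - F(n) == f(n) for n in range(10)))  # True

# Consequently finite sums telescope: the sum of f over [a, b] is F(b+1) - F(a).
a, b = 3, 7
print(sum(f(n) for n in range(a, b + 1)) == F(b + 1) - F(a))  # True
```

The second check is exactly the summation rule for \Delta given in the rules below, which plays the role of the fundamental theorem of calculus for finite differences.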
== Rules for calculus of finite difference operators ==

Analogous to rules for finding the derivative, we have:
• Constant rule: If c is a constant, then \Delta c = 0 ~.
• Linearity: If a and b are constants, \Delta (af + bg) = a \,\Delta f + b \,\Delta g ~.

All of the above rules apply equally well to any difference operator as to \Delta, including \nabla and \delta.

• Product rule: \begin{align} \Delta (f g) &= f \,\Delta g + g \,\Delta f + \Delta f \,\Delta g \\[4pt] \nabla (f g) &= f \,\nabla g + g \,\nabla f - \nabla f \,\nabla g \end{align}
• Quotient rule: \nabla \left( \frac{f}{g} \right) = \left. \left( \det \begin{bmatrix} \nabla f & \nabla g \\ f & g \end{bmatrix} \right) \right/ \left( g \cdot \det \begin{bmatrix} g & \nabla g \\ 1 & 1 \end{bmatrix} \right) or, equivalently, \nabla\left( \frac{f}{g} \right) = \frac {g \,\nabla f - f \,\nabla g}{g \cdot (g - \nabla g)}
• Summation rules: \begin{align} \sum_{n=a}^b \Delta f(n) &= f(b+1)-f(a) \\ \sum_{n=a}^{b} \nabla f(n) &= f(b)-f(a-1) \end{align}

See references.

==Generalizations==