A
quadrature rule is an approximation of the
definite integral of a
function, usually stated as a
weighted sum of function values at specified points within the domain of integration. Numerical integration methods can generally be described as combining evaluations of the integrand to get an approximation to the integral. The integrand is evaluated at a finite set of points called
integration points and a weighted sum of these values is used to approximate the integral. The integration points and weights depend on the specific method used and the accuracy required from the approximation. An important part of the analysis of any numerical integration method is to study the behavior of the approximation error as a function of the number of integrand evaluations. A method that yields a small error for a small number of evaluations is usually considered superior. Reducing the number of evaluations of the integrand reduces the number of arithmetic operations involved, and therefore reduces the total round-off error. Also, each evaluation takes time, and the integrand may be arbitrarily complicated.
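In this weighted-sum form, applying a quadrature rule is a one-line computation once the points and weights are known. The following minimal Python sketch illustrates the general pattern; the helper name apply_quadrature_rule is chosen here only for illustration, and the two-point example uses the endpoint points and weights of the trapezoidal rule discussed below.

<syntaxhighlight lang="python">
import math

def apply_quadrature_rule(f, points, weights):
    """Approximate an integral as a weighted sum of f evaluated at the given points."""
    return sum(w * f(x) for x, w in zip(points, weights))

# Two-point rule on [0, 1] with weights 1/2 at each endpoint
# (the trapezoidal rule discussed later in this section).
approx = apply_quadrature_rule(math.sin, [0.0, 1.0], [0.5, 0.5])
exact = 1.0 - math.cos(1.0)   # integral of sin over [0, 1]
print(approx, exact)
</syntaxhighlight>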
=== Quadrature rules based on step functions ===
A "brute force" kind of numerical integration can be done, if the integrand is reasonably well-behaved (i.e.
piecewise continuous and of
bounded variation), by evaluating the integrand with very small increments. This simplest method approximates the function by a
step function (a piecewise constant function, or a segmented polynomial of degree zero) that passes through the point \left( \frac{a+b}{2}, f \left( \frac{a+b}{2} \right)\right) . This is called the
midpoint rule or
rectangle rule: \int_a^b f(x)\, dx \approx (b-a) f\left(\frac{a+b}{2}\right).
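As a concrete illustration, a minimal Python sketch of the midpoint rule might look like the following (the function name midpoint_rule is illustrative, not a reference to any particular library):

<syntaxhighlight lang="python">
import math

def midpoint_rule(f, a, b):
    """Midpoint (rectangle) rule: (b - a) * f((a + b) / 2)."""
    return (b - a) * f((a + b) / 2.0)

# Example: integrate exp(x) over [0, 1]; the exact value is e - 1.
approx = midpoint_rule(math.exp, 0.0, 1.0)
print(approx, math.e - 1.0)
</syntaxhighlight>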
=== Quadrature rules based on interpolating functions ===
A large class of quadrature rules can be derived by constructing
interpolating functions that are easy to integrate. Typically these interpolating functions are
polynomials. In practice, since polynomials of very high degree tend to
oscillate wildly, only polynomials of low degree are used, typically linear and quadratic. The interpolating function may be a straight line (an
affine function, i.e. a polynomial of degree 1) passing through the points \left( a, f(a)\right) and \left( b, f(b)\right) . This is called the
trapezoidal rule: \int_a^b f(x)\, dx \approx (b-a) \left(\frac{f(a) + f(b)}{2}\right). For either one of these rules, we can make a more accurate approximation by breaking up the interval [a,b] into some number n of subintervals, computing an approximation for each subinterval, then adding up all the results. This is called a
composite rule,
extended rule, or
iterated rule. For example, the composite trapezoidal rule can be stated as \int_a^b f(x)\, dx \approx \frac{b-a}{n} \left( {f(a) \over 2} + \sum_{k=1}^{n-1} \left( f \left( a + k \frac{b-a}{n} \right) \right) + {f(b) \over 2} \right), where the subintervals have the form [a+k h,a+ (k+1)h] \subset [a,b], with h = \frac{b - a}{n} and k = 0,\ldots,n-1. Here we used subintervals of the same length h but one could also use intervals of varying length \left( h_k \right)_k . Interpolation with polynomials evaluated at equally spaced points in [a,b] yields the
Newton–Cotes formulas, of which the rectangle rule and the trapezoidal rule are examples.
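For illustration, the composite trapezoidal rule stated above translates directly into a short program; the following is a minimal Python sketch (the name composite_trapezoid is chosen here for illustration):

<syntaxhighlight lang="python">
import math

def composite_trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals of width h = (b - a) / n."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * h)
    return h * total

# Example: integrate exp(x) over [0, 1]; the exact value is e - 1.
for n in (4, 16, 64):
    approx = composite_trapezoid(math.exp, 0.0, 1.0, n)
    print(n, approx, abs(approx - (math.e - 1.0)))
</syntaxhighlight>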
Simpson's rule, which is based on a polynomial of degree 2, is also a Newton–Cotes formula. Quadrature rules with equally spaced points have the very convenient property of
nesting. The corresponding rule with each interval subdivided includes all the current points, so those integrand values can be re-used. If we allow the intervals between interpolation points to vary, we find another group of quadrature formulas, such as the
Gaussian quadrature formulas. A Gaussian quadrature rule is typically more accurate than a Newton–Cotes rule that uses the same number of function evaluations, if the integrand is
smooth (i.e., if it is sufficiently differentiable). Other quadrature methods with varying intervals include
Clenshaw–Curtis quadrature (also called Fejér quadrature) methods, which do nest. Gaussian quadrature rules do not nest, but the related
Gauss–Kronrod quadrature formulas do.
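To illustrate the accuracy claim for smooth integrands, the following sketch compares an n-point Gauss–Legendre rule (nodes and weights from numpy.polynomial.legendre.leggauss) with a trapezoidal rule using the same number of function evaluations; the integrand exp(x) on [0, 1] is chosen here only as an example.

<syntaxhighlight lang="python">
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature on [a, b], nodes/weights from numpy."""
    t, w = np.polynomial.legendre.leggauss(n)   # rule on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (b + a)       # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))

def trapezoid_n_points(f, a, b, n):
    """Composite trapezoidal rule using n function evaluations (n - 1 subintervals)."""
    x = np.linspace(a, b, n)
    h = (b - a) / (n - 1)
    return h * (np.sum(f(x)) - 0.5 * (f(a) + f(b)))

exact = np.e - 1.0   # integral of exp over [0, 1]
for n in (2, 4, 8):
    g = gauss_legendre(np.exp, 0.0, 1.0, n)
    t = trapezoid_n_points(np.exp, 0.0, 1.0, n)
    print(n, abs(g - exact), abs(t - exact))
</syntaxhighlight>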
=== Adaptive algorithms ===
Adaptive algorithms choose the subinterval lengths according to the local behavior of the integrand, so that more function evaluations are used where the integrand varies rapidly or is otherwise difficult to approximate.

=== Extrapolation methods ===
The accuracy of a quadrature rule of the
Newton–Cotes type is generally a function of the number of evaluation points. The result is usually more accurate as the number of evaluation points increases, or, equivalently, as the width of the step size between the points decreases. It is natural to ask what the result would be if the step size were allowed to approach zero. This can be answered by extrapolating the result from two or more nonzero step sizes, using
series acceleration methods such as
Richardson extrapolation. The extrapolation function may be a
polynomial or
rational function. Extrapolation methods are described in more detail by Stoer and Bulirsch (Section 3.4) and are implemented in many of the routines in the
QUADPACK library.
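As a minimal sketch of the idea, one step of Richardson extrapolation can be applied to the trapezoidal rule, assuming its leading error term is proportional to h^2; the function names below are chosen here for illustration.

<syntaxhighlight lang="python">
import math

def composite_trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

def richardson_trapezoid(f, a, b, n):
    """One Richardson extrapolation step combining step sizes h and h/2.

    Since the trapezoidal error is O(h^2), the combination
    (4*T(h/2) - T(h)) / 3 cancels the leading error term
    (and in fact reproduces the composite Simpson's rule).
    """
    t_h = composite_trapezoid(f, a, b, n)
    t_h2 = composite_trapezoid(f, a, b, 2 * n)
    return (4.0 * t_h2 - t_h) / 3.0

exact = math.e - 1.0   # integral of exp over [0, 1]
t = composite_trapezoid(math.exp, 0.0, 1.0, 8)
r = richardson_trapezoid(math.exp, 0.0, 1.0, 8)
print(abs(t - exact), abs(r - exact))
</syntaxhighlight>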
=== Conservative (a priori) error estimation ===
Let f have a bounded first derivative over [a,b], i.e. f \in C^1([a,b]). The
mean value theorem for f, where x \in [a,b), gives (x - a) f'(\xi_x) = f(x) - f(a) for some \xi_x \in (a,x] depending on x. If we integrate in x from a to b on both sides and take absolute values, we obtain \left| \int_a^b f(x)\, dx - (b - a) f(a) \right| = \left| \int_a^b (x - a) f'(\xi_x)\, dx \right| . We can further bound the integral on the right-hand side by bringing the absolute value into the integrand and replacing the term in f' by an upper bound: \left| \int_a^b f(x)\, dx - (b - a) f(a) \right| \leq \frac{(b - a)^2}{2} \sup_{a \leq x \leq b} \left| f'(x) \right| , where the
supremum was used as the upper bound on \left| f' \right|. Hence, if we approximate the integral \int_a^b f(x) \, dx by the
quadrature rule (b - a) f(a), our error is no greater than the right-hand side of the bound above. We can convert this into an error analysis for the
Riemann sum for f on [0,1] with n equal subintervals, giving an upper bound of \frac{1}{2n} \sup_{0 \leq x \leq 1} \left| f'(x) \right| for the error term of that particular approximation. (Note that this is precisely the error we calculated for the example f(x) = x.) Using more derivatives, and by tweaking the quadrature, we can do a similar error analysis using a
Taylor series (using a partial sum with remainder term) for
f. This error analysis gives a strict upper bound on the error, if the derivatives of
f are available. This integration method can be combined with
interval arithmetic to produce
computer proofs and
verified calculations.
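The a priori bound derived above can be checked numerically for a concrete integrand. The following sketch uses f(x) = exp(x) on [0, 1], for which the exact integral, the one-point approximation (b - a) f(a), and sup |f'| are all easy to write down; the example is chosen here only for illustration.

<syntaxhighlight lang="python">
import math

a, b = 0.0, 1.0
f = math.exp               # integrand; note f'(x) = exp(x) as well
exact = math.e - 1.0       # integral of exp over [0, 1]

approx = (b - a) * f(a)                     # the one-point rule (b - a) f(a)
error = abs(exact - approx)
bound = (b - a) ** 2 / 2.0 * math.e         # (b - a)^2 / 2 * sup |f'| on [0, 1]
print(error, bound, error <= bound)         # the a priori bound holds
</syntaxhighlight>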
=== Integrals over infinite intervals ===
Several methods exist for approximate integration over unbounded intervals. The standard technique involves specially derived quadrature rules, such as
Gauss–Hermite quadrature for integrals on the whole real line and
Gauss–Laguerre quadrature for integrals on the positive reals. Monte Carlo methods can also be used, or a change of variables to a finite interval; e.g., for the whole line one could use \int_{-\infty}^{\infty} f(x) \, dx = \int_{-1}^{+1} f\left( \frac{t}{1-t^2} \right) \frac{1+t^2}{\left(1-t^2\right)^2} \, dt, and for semi-infinite intervals one could use \begin{align} \int_a^{\infty} f(x) \, dx &= \int_0^1 f\left(a + \frac{t}{1-t}\right) \frac{dt}{(1-t)^2}, \\ \int_{-\infty}^a f(x) \, dx &= \int_0^1 f\left(a - \frac{1-t}{t}\right) \frac{dt}{t^2}, \end{align} as possible transformations.
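As a sketch of the change-of-variables approach for the whole real line, the substitution above can be combined with any rule on [-1, 1]; here a Gauss–Legendre rule (nodes and weights from numpy.polynomial.legendre.leggauss) is applied to the transformed integrand, with exp(-x^2) chosen only as an example, since its integral over the real line is sqrt(pi).

<syntaxhighlight lang="python">
import numpy as np

def integrate_real_line(f, n):
    """Approximate the integral of f over (-inf, inf) via x = t / (1 - t^2).

    The substitution maps the real line to (-1, 1); the Jacobian is
    (1 + t^2) / (1 - t^2)^2.  The transformed integral is evaluated with an
    n-point Gauss-Legendre rule, whose nodes lie strictly inside (-1, 1).
    """
    t, w = np.polynomial.legendre.leggauss(n)
    x = t / (1.0 - t**2)
    jacobian = (1.0 + t**2) / (1.0 - t**2) ** 2
    return np.sum(w * f(x) * jacobian)

approx = integrate_real_line(lambda x: np.exp(-x**2), 60)
print(approx, np.sqrt(np.pi))   # should agree to several digits
</syntaxhighlight>

== Multidimensional integrals ==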