The error of the composite trapezoidal rule is the difference between the value of the integral and the numerical result:

\text{E} = \int_a^b f(x)\,dx - \frac{b-a}{N} \left[ {f(a) + f(b) \over 2} + \sum_{k=1}^{N-1} f \left( a+k \frac{b-a}{N} \right) \right]

There exists a number ξ between a and b such that

\text{E} = -\frac{(b-a)^3}{12N^2} f''(\xi).

It follows that if the integrand is concave up (and thus has a positive second derivative), then the error is negative and the trapezoidal rule overestimates the true value. This can also be seen from the geometric picture: the trapezoids include all of the area under the curve and extend above it. Similarly, a concave-down function yields an underestimate, because area under the curve is left unaccounted for while none is counted above it. If the interval being integrated over includes an inflection point, the sign of the error is harder to identify. An asymptotic error estimate for
N → ∞ is given by

\text{E} = -\frac{(b-a)^2}{12N^2} \big[ f'(b)-f'(a) \big] + O(N^{-3}).

Further terms in this error estimate are given by the Euler–Maclaurin summation formula. Several techniques can be used to analyze the error, including:

• Fourier series
• Residue calculus
• Euler–Maclaurin summation formula
• Polynomial interpolation

It is argued that the speed of convergence of the trapezoidal rule reflects, and can be used to define, classes of smoothness of the function being integrated.
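The sign and rate of the error described above can be checked numerically. The following is a minimal sketch (the `trapezoid` helper and the choice of f(x) = eˣ are ours, not from any library):

```python
import math

def trapezoid(f, a, b, N):
    """Composite trapezoidal rule with N subintervals of [a, b]."""
    h = (b - a) / N
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, N)))

# f(x) = e^x is concave up (f'' > 0) on [0, 1], so the rule should overestimate.
exact = math.e - 1.0                          # integral of e^x over [0, 1]
error = exact - trapezoid(math.exp, 0.0, 1.0, 64)
# error < 0: the approximation exceeds the true value, as predicted.

# Doubling N should cut the error by a factor of ~4, consistent with O(N^-2).
ratio = error / (exact - trapezoid(math.exp, 0.0, 1.0, 128))
# ratio is close to 4
```

The error ratio of roughly 4 per halving of h is exactly the O(N^{-2}) behaviour of the leading error term.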
Proof

First suppose that h=\frac{b-a}{N} and a_k=a+(k-1)h. Let

g_k(t) = \frac{1}{2} t[f(a_k)+f(a_k+t)] - \int_{a_k}^{a_k+t} f(x) \, dx

be the function such that |g_k(h)| is the error of the trapezoidal rule on one of the intervals, [a_k, a_k+h]. Then

{dg_k \over dt}={1 \over 2}[f(a_k)+f(a_k+t)]+{1\over2}t\cdot f'(a_k+t)-f(a_k+t),

and

{d^2g_k \over dt^2}={1\over 2}t\cdot f''(a_k+t).

Now suppose that \left| f''(x) \right| \leq \left| f''(\xi) \right|, which holds if f is sufficiently smooth. It then follows that

\left| f''(a_k+t) \right| \leq f''(\xi),

which is equivalent to

-f''(\xi) \leq f''(a_k+t) \leq f''(\xi),

or

-\frac{f''(\xi)t}{2} \leq g_k''(t) \leq \frac{f''(\xi)t}{2}.

Since g_k'(0)=0 and g_k(0)=0,

\int_0^t g_k''(x)\, dx = g_k'(t) \quad\text{and}\quad \int_0^t g_k'(x)\, dx = g_k(t).

Using these results, we find

-\frac{f''(\xi)t^2}{4} \leq g_k'(t) \leq \frac{f''(\xi)t^2}{4}

and

-\frac{f''(\xi)t^3}{12} \leq g_k(t) \leq \frac{f''(\xi)t^3}{12}.

Letting t = h, we find

-\frac{f''(\xi)h^3}{12} \leq g_k(h) \leq \frac{f''(\xi)h^3}{12}.

Summing all of the local error terms, we find

\sum_{k=1}^{N} g_k(h) = \frac{b-a}{N} \left[ {f(a) + f(b) \over 2} + \sum_{k=1}^{N-1} f \left( a+k \frac{b-a}{N} \right) \right] - \int_a^b f(x)\,dx.

But we also have

- \sum_{k=1}^N \frac{f''(\xi)h^3}{12} \leq \sum_{k=1}^N g_k(h) \leq \sum_{k=1}^N \frac{f''(\xi)h^3}{12}

and

\sum_{k=1}^N \frac{f''(\xi)h^3}{12}=\frac{f''(\xi)h^3N}{12},

so that

-\frac{f''(\xi)h^3N}{12} \leq \frac{b-a}{N} \left[ {f(a) + f(b) \over 2} + \sum_{k=1}^{N-1} f \left( a+k \frac{b-a}{N} \right) \right]-\int_a^b f(x)\,dx \leq \frac{f''(\xi)h^3N}{12}.

Therefore the total error is bounded by

\text{error} = \int_a^b f(x)\,dx - \frac{b-a}{N} \left[ {f(a) + f(b) \over 2} + \sum_{k=1}^{N-1} f \left( a+k \frac{b-a}{N} \right) \right] = \frac{f''(\xi)h^3N}{12}=\frac{f''(\xi)(b-a)^3}{12N^2}.
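The bound just derived can be checked numerically. A minimal sketch, using f(x) = sin x on [0, π] as our own test case (there max |f''| = |sin x| ≤ 1, so the bound reduces to (b−a)³/(12N²)):

```python
import math

def trapezoid(f, a, b, N):
    # Composite trapezoidal rule with N subintervals of width h.
    h = (b - a) / N
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, N)))

# f(x) = sin(x) on [0, pi]: the exact integral is 2 and |f''(x)| = |sin x| <= 1.
a, b, N = 0.0, math.pi, 50
err = abs(2.0 - trapezoid(math.sin, a, b, N))
bound = (b - a) ** 3 / (12 * N ** 2)   # max|f''| * (b-a)^3 / (12 N^2), max|f''| = 1
# err stays within bound
```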
Periodic and peak functions

The trapezoidal rule converges rapidly for periodic functions. This is an easy consequence of the Euler–Maclaurin summation formula, which says that if f is p times continuously differentiable with period T, then

\sum_{k=0}^{N-1} f(kh)h = \int_0^T f(x)\,dx + \sum_{k=1}^{\lfloor p/2\rfloor} \frac{B_{2k}}{(2k)!} \left(f^{(2k - 1)}(T) - f^{(2k - 1)}(0)\right) - (-1)^p h^p \int_0^T\tilde{B}_{p}(x/T)f^{(p)}(x) \, dx,

where h := T/N and \tilde{B}_p is the periodic extension of the p-th Bernoulli polynomial. Due to the periodicity, the derivative terms at the endpoints cancel, and we see that the error is O(h^p).

A similar effect is available for peak-like functions, such as the Gaussian, the exponentially modified Gaussian, and other functions whose derivatives at the integration limits can be neglected. The evaluation of the full integral of a Gaussian function by the trapezoidal rule with 1% accuracy can be made using just 4 points. Simpson's rule requires 1.8 times more points to achieve the same accuracy.
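The rapid convergence for periodic integrands is easy to observe in a short experiment. A minimal sketch, using the smooth 2π-periodic test function e^{sin x} (our choice of example, not from the source):

```python
import math

def trapezoid(f, a, b, N):
    # Composite trapezoidal rule with N subintervals.
    h = (b - a) / N
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, N)))

f = lambda x: math.exp(math.sin(x))   # smooth and 2*pi-periodic
coarse = trapezoid(f, 0.0, 2.0 * math.pi, 16)
fine = trapezoid(f, 0.0, 2.0 * math.pi, 256)
gap = abs(coarse - fine)
# gap is tiny even at N = 16: the endpoint derivative terms in the
# Euler-Maclaurin formula cancel, so the error decays faster than any
# fixed power of h for this analytic periodic integrand.
```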
"Rough" functions

For functions that are not in C^2, the error bound given above is not applicable. Still, error bounds can be derived for such rough functions; they typically show slower convergence with the number of function evaluations N than the O(N^{-2}) behaviour given above. Interestingly, in this case the trapezoidal rule often has sharper bounds than Simpson's rule for the same number of function evaluations.

== Applicability and alternatives ==