Using h_n = \frac{(b-a)}{2^{n+1}}, the method can be inductively defined by \begin{align} R(0,0) &= h_0 (f(a) + f(b)) \\ R(n,0) &= \tfrac{1}{2} R(n{-}1,\,0) + 2h_n \sum_{k=1}^{2^{n-1}} f(a + (2k-1)h_{n-1}) \\ R(n,m) &= R(n,\,m{-}1) + \tfrac{1}{4^m-1} (R(n,\,m{-}1) - R(n{-}1,\,m{-}1)) \\ &= \frac{1}{4^m-1} ( 4^m R(n,\,m{-}1) - R(n{-}1,\, m{-}1)) \end{align} where n \ge m and m \ge 1. In big O notation, the error for R(n,\,m) is O\left(h_n^{2m+2}\right).
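Because each row reuses every function value computed for the previous rows, the table R(n,\,m) is typically built one row at a time. The following sketch is a minimal Python implementation of the recurrence above (the function name romberg and the parameter n_rows are illustrative choices, not from the text):

<syntaxhighlight lang="python">
import math

def romberg(f, a, b, n_rows):
    """Fill the triangular Romberg table R[n][m] for 0 <= m <= n < n_rows."""
    R = [[0.0] * (i + 1) for i in range(n_rows)]
    h = (b - a) / 2                      # h_0 = (b - a) / 2**(0 + 1)
    R[0][0] = h * (f(a) + f(b))          # trapezoidal rule on a single interval
    for n in range(1, n_rows):
        h /= 2                           # h_n = (b - a) / 2**(n + 1)
        # New sample points are the midpoints of the previous level's
        # subintervals, i.e. a + (2k - 1) * h_{n-1} with h_{n-1} = 2 * h_n.
        total = sum(f(a + (2 * k - 1) * 2 * h) for k in range(1, 2 ** (n - 1) + 1))
        R[n][0] = 0.5 * R[n - 1][0] + 2 * h * total
        # Richardson extrapolation across the row.
        for m in range(1, n + 1):
            R[n][m] = R[n][m - 1] + (R[n][m - 1] - R[n - 1][m - 1]) / (4 ** m - 1)
    return R

# Example: integrate sin over [0, pi]; the exact value is 2.
print(romberg(math.sin, 0.0, math.pi, 5)[-1][-1])
</syntaxhighlight>

The bottom-right entry of the returned table is the most extrapolated estimate of the integral.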
The zeroth extrapolation, R(n,\,0), is equivalent to the trapezoidal rule with 2^n + 1 points; the first extrapolation, R(n,\,1), is equivalent to Simpson's rule with 2^n + 1 points. The second extrapolation, R(n,\,2), is equivalent to Boole's rule with 2^n + 1 points.
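For example, taking n = 1 and writing c = \tfrac{a+b}{2} for the midpoint, the recurrence gives \begin{align} R(0,0) &= \tfrac{b-a}{2}\,\big(f(a) + f(b)\big) \\ R(1,0) &= \tfrac{b-a}{4}\,\big(f(a) + 2f(c) + f(b)\big) \\ R(1,1) &= \tfrac{1}{3}\big(4R(1,0) - R(0,0)\big) = \tfrac{b-a}{6}\,\big(f(a) + 4f(c) + f(b)\big), \end{align} which is exactly Simpson's rule on [a, b].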
The further extrapolations differ from Newton-Cotes formulas. In particular, further Romberg extrapolations expand on Boole's rule in very slight ways, modifying the weights into ratios similar to those in Boole's rule. In contrast, further Newton-Cotes methods produce increasingly differing weights, eventually leading to large positive and negative weights. This is indicative of how Newton-Cotes methods based on high-degree interpolating polynomials fail to converge for many integrals, while Romberg integration is more stable.

By labelling our O(h^2) approximations as A_0\big(\frac{h}{2^n}\big) instead of R(n,0), we can perform Richardson extrapolation with the error formula defined below: \int_a^b f(x) \, dx = A_0\bigg(\frac{h}{2^n}\bigg) + a_0\bigg(\frac{h}{2^n}\bigg)^{2} + a_1\bigg(\frac{h}{2^n}\bigg)^{4} + a_2\bigg(\frac{h}{2^n}\bigg)^{6} + \cdots Once we have obtained our O(h^{2(m+1)}) approximations A_m\big(\frac{h}{2^n}\big), we can label them as R(n,m).
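Evaluating this error formula at the two step sizes \tfrac{h}{2^n} and \tfrac{h}{2^{n+1}} and eliminating the leading a_0 term gives \int_a^b f(x)\,dx = \tfrac{1}{3}\left(4\,A_0\Big(\tfrac{h}{2^{n+1}}\Big) - A_0\Big(\tfrac{h}{2^n}\Big)\right) + O\left(h^{4}\right), so the combination on the right is the O(h^4) approximation A_1\big(\tfrac{h}{2^{n+1}}\big) = R(n{+}1,\,1); repeating the elimination with 4^m in place of 4 removes one further error term at each stage and reproduces the general recurrence for R(n,m) given above.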
When function evaluations are expensive, it may be preferable to replace the polynomial interpolation of Richardson with the rational interpolation proposed by Bulirsch and Stoer.

== A geometric example ==