An important consideration in practice when the function is calculated using
floating-point arithmetic of finite precision is the choice of step size, h. To illustrate, consider the two-point approximation formula with error term: f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{f^{(3)}(c)}{6}h^2 where c is some point between x - h and x + h. Let e(x) denote the
roundoff error encountered when evaluating the function f(x) and \hat{f}(x) denote the computed value of f(x). Therefore, f(x) = \hat{f}(x) + e(x). The total error in the approximation is f'(x) - \frac{\hat{f}(x + h) - \hat{f}(x-h)}{2h} = \underbrace{\frac{e(x + h) - e(x-h)}{2h}}_{\text{Roundoff error}} - \underbrace{\frac{f^{(3)}(c)}{6}h^2}_{\text{Truncation error}}. Assuming that the roundoff errors are bounded by some number \varepsilon > 0 and the
third derivative of f(x) is bounded by some number M > 0, we get \left| f'(x) - \frac{\hat{f}(x + h) - \hat{f}(x-h)}{2h} \right| \le \frac{\varepsilon}{h} + \frac{h^2}{6}M. To reduce the
truncation error \frac{h^2}{6}M, we must reduce h. But as h is reduced, the roundoff error \frac{\varepsilon}{h} increases. Due to the need to divide by the small value h, all the finite-difference formulae for numerical differentiation are similarly
ill-conditioned. If instead the derivative is approximated using the forward difference f'(x) \approx \frac{f(x+h) - f(x)}{h} and we assume that the relative
approximation error due to rounding is bounded by the
machine epsilon \varepsilon, and that the values |f''(x)| and |f(x)| are bounded on [x_0, x_0+h] by M_1 and M_2 respectively, it can be shown that: \left| f'(x_0) - \frac{\hat{f}(x_0 + h) - \hat{f}(x_0)}{h} \right| \le \frac{h}{2}M_1 + \frac{2\varepsilon}{h}M_2. By minimizing this upper bound, an estimate for the optimal step size can be obtained. However, employing this method requires knowledge of the bounds M_1 and M_2. Instead, if an approximate upper bound is used, one where M_1 and M_2 are replaced by |f''(x_0)| and |f(x_0)|, the error can be estimated as: e(h) = \frac{h}{2}|f''(x_0)| + \frac{2\varepsilon}{h}|f(x_0)|. By minimizing e(h), an estimate for the optimal step size h can be found: h = 2\sqrt{\varepsilon \left| \frac{f(x_0)}{f''(x_0)} \right|}. In practice, one cannot compute the step size that minimizes the above error bounds without information about higher-order derivatives of f.
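The trade-off between truncation and roundoff error, and the optimal-step estimate h = 2\sqrt{\varepsilon |f(x_0)/f''(x_0)|}, can be checked with a short numerical sketch. The test function f(x) = e^x at x_0 = 1 is an illustrative assumption (chosen because f = f' = f'', so the exact derivative and the second-derivative bound are known), not part of the derivation above:

```python
import math

def forward_diff(f, x, h):
    """Forward-difference approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

# Hypothetical test function (an assumption for illustration):
# f(x) = exp(x), so f'(x0) = f''(x0) = exp(x0) exactly.
f = math.exp
x0 = 1.0
exact = math.exp(x0)

eps = 2.0 ** -52  # machine epsilon for IEEE 754 double precision

# Estimated optimal step from the bound above:
# h* = 2 * sqrt(eps * |f(x0) / f''(x0)|), which reduces to 2*sqrt(eps) here.
h_opt = 2.0 * math.sqrt(eps * abs(f(x0) / f(x0)))

def err(h):
    """Absolute error of the forward-difference approximation."""
    return abs(forward_diff(f, x0, h) - exact)

# A large step is dominated by truncation error, a tiny step by roundoff
# error; the estimated h* sits near the minimum of the total error.
for h in (1e-3, h_opt, 1e-12):
    print(f"h = {h:.3e}   |error| = {err(h):.3e}")
```

Running this shows the error at the estimated h* (about 3 × 10⁻⁸) is several orders of magnitude smaller than at either the larger or the smaller step.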
===Example===
To demonstrate this difficulty, consider approximating the derivative of the function f(x) = \frac{2x}{1 + \sqrt{x}} at the point x_0 = 9. In this case, we can calculate the derivative exactly: f'(x) = \frac{2 + \sqrt{x}}{(1 + \sqrt{x})^{2}}, which gives f'(9) = \frac{2 + \sqrt{9}}{(1 + \sqrt{9})^{2}} = \frac{2 + 3}{(1 + 3)^{2}} = \frac{5}{16} = 0.3125. Using 64-bit floating-point numbers, approximations are generated with the two-point approximation formula and increasingly smaller step sizes. The smallest absolute error is produced for a step size of 10^{-4}, after which the absolute error steadily increases as the roundoff errors dominate the calculations.

For computer calculations the problems are exacerbated because, although x necessarily holds a representable floating-point number in some precision (32 or 64-bit, etc.), x + h almost certainly will not be exactly representable in that precision. This means that h will be changed (by rounding or truncation) to a nearby machine-representable number, with the consequence that (x + h) - x will not equal h; the two function evaluations will not be exactly h apart. In this regard, since most decimal fractions are recurring sequences in binary (just as 1/3 is in decimal), a seemingly round step such as h = 0.1 will not be a round number in binary; it is 0.000110011001100...₂. A possible approach is as follows:

 h := sqrt(eps) * x;
 xph := x + h;
 dx := xph - x;
 slope := (F(xph) - F(x)) / dx;

However, with computers, compiler optimization facilities may fail to attend to the details of actual computer arithmetic and instead apply the axioms of mathematics to deduce that dx and h are the same. With C and similar languages, a directive that xph is a volatile variable will prevent this.

==Three-point methods==