The IEEE standard does not define the terms machine epsilon and unit roundoff, so differing definitions of these terms are in use, which can cause some confusion. The two definitions differ by a factor of two. The more widely used definition (referred to as the mainstream definition in this article) appears in most modern programming languages: machine epsilon is the difference between 1 and the next larger floating-point number. The formal definition generally yields an epsilon half the size of the mainstream one, although the exact value depends on the form of rounding used. The two definitions are described at length in the next two subsections.
==Formal definition (Rounding machine epsilon)==
The formal definition of machine epsilon is the one used by Prof. James Demmel in lecture scripts, the LAPACK linear algebra package, numerics research papers and some scientific computing software. Most numerical analysts use the words machine epsilon and unit roundoff interchangeably with this meaning, which is explored in depth throughout this subsection.
Rounding is a procedure for choosing the representation of a real number in a floating-point number system. For a given number system and rounding procedure, machine epsilon is the maximum relative error of the chosen rounding procedure.

Some background is needed to determine a value from this definition. A floating-point number system is characterized by a radix, also called the base, b, and by the precision p, i.e. the number of radix-b digits of the significand (including any leading implicit bit). All numbers with the same exponent, e, have the spacing b^{e-(p-1)}. The spacing changes at the numbers that are perfect powers of b: the spacing on the side of larger magnitude is b times larger than the spacing on the side of smaller magnitude.

Since machine epsilon is a bound for relative error, it suffices to consider numbers with exponent e = 0; it also suffices to consider positive numbers. For the usual round-to-nearest kind of rounding, the absolute rounding error is at most half the spacing, i.e. b^{-(p-1)}/2. This value is the largest possible numerator for the relative error. The denominator in the relative error is the number being rounded, which should be as small as possible to make the relative error large. The worst relative error therefore occurs when rounding is applied to numbers of the form 1 + a, where a lies between 0 and b^{-(p-1)}/2. All these numbers round to 1 with relative error a/(1 + a). The maximum occurs when a is at the upper end of its range. The 1 + a in the denominator is negligible compared to the numerator, so it is dropped for expediency, and b^{-(p-1)}/2 is taken as machine epsilon. As has been shown here, the relative error is worst for numbers that round to 1, so machine epsilon is also called unit roundoff, meaning roughly "the maximum error that can occur when rounding to the unit value". Thus, the maximum spacing between a normalised floating-point number, x, and an adjacent normalised number is 2 \varepsilon |x|.
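The spacing and worst-case-error facts above can be verified numerically; a Python sketch for IEEE double precision, where b = 2 and p = 53 (math.nextafter requires Python 3.9 or later):

```python
import math

# IEEE double precision parameters.
b, p = 2, 53
u = b ** -(p - 1) / 2           # formal machine epsilon, 2**-53

# Spacing of numbers with exponent e = 0 (i.e. in [1, 2)) is b**-(p-1);
# it grows by a factor of b at the power of b.
assert math.nextafter(1.0, 2.0) - 1.0 == b ** -(p - 1)
assert math.nextafter(2.0, 3.0) - 2.0 == b * b ** -(p - 1)

# Worst case: 1 + a with a at the top of its range rounds to 1 ...
a = u
assert 1.0 + a == 1.0           # round-to-nearest-even sends 1 + 2**-53 to 1
# ... with relative error a/(1 + a), bounded by the formal machine epsilon.
rel_err = a / (1 + a)
assert rel_err <= u
```

Note that the tie 1 + 2^{-53} rounds down to 1 under the default round-to-nearest-even mode, exactly as the worst-case analysis assumes.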
===Arithmetic model===
Numerical analysis uses machine epsilon to study the effects of rounding error. The actual errors of machine arithmetic are far too complicated to be studied directly, so instead the following simple model is used. The IEEE arithmetic standard says all floating-point operations are done as if it were possible to perform the infinite-precision operation and then round the result to a floating-point number. Suppose (1) x, y are floating-point numbers, (2) \bullet is an arithmetic operation on floating-point numbers such as addition or multiplication, and (3) \circ is the corresponding infinite-precision operation. According to the standard, the computer calculates:
:x \bullet y = \mbox{round}(x \circ y)
By the meaning of machine epsilon, the relative error of the rounding is at most machine epsilon in magnitude, so:
:x \bullet y = (x \circ y)(1 + z)
where z in absolute magnitude is at most \varepsilon or u. The books by Demmel and Higham in the references can be consulted to see how this model is used to analyze the errors of, say, Gaussian elimination.
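The model can be exercised directly; a Python sketch that uses exact rational arithmetic (fractions.Fraction) to stand in for the infinite-precision operation, with u = 2^{-53} for IEEE double precision under round-to-nearest (check_model is a hypothetical helper name chosen for illustration):

```python
from fractions import Fraction

u = Fraction(1, 2 ** 53)        # unit roundoff for IEEE double, round-to-nearest

def check_model(x: float, y: float) -> None:
    # Exact (infinite-precision) sum, computed with rationals.
    exact = Fraction(x) + Fraction(y)
    computed = Fraction(x + y)  # fl(x + y): one rounded machine addition
    # The model: fl(x + y) = (x + y)(1 + z) with |z| <= u.
    z = (computed - exact) / exact
    assert abs(z) <= u

check_model(0.1, 0.2)           # the classic inexact sum
check_model(1.0, 2.0 ** -53)    # a tie that rounds away the small term
check_model(1e16, 1.0)          # absorption: 1.0 vanishes entirely
```

In each case the relative error z of the single rounded operation stays within the unit roundoff, as the model requires (for normal, nonzero results).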
==Mainstream definition (Interval machine epsilon)==
This alternative definition is significantly more widespread: machine epsilon is the difference between 1 and the next larger floating-point number. This definition is used in language constants in Ada, C, C++, Fortran, MATLAB, Mathematica, Octave, Pascal, Python, Rust, etc., and in textbooks such as «Numerical Recipes» by Press et al. By this definition, ε equals the value of the unit in the last place relative to 1, i.e. b^{-(p-1)} (where b is the base of the floating-point system and p is the precision), and the unit roundoff is u = ε/2, assuming round-to-nearest mode, and u = ε, assuming round-by-chop. The prevalence of this definition is rooted in its use in the ISO C Standard for constants relating to floating-point types and corresponding constants in other programming languages. It is also widely used in scientific computing software and in the numerics and computing literature.

==How to determine machine epsilon==
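One classic way to determine the (mainstream) machine epsilon at run time is to halve a trial value until adding it to 1 no longer changes the result; a Python sketch, assuming round-to-nearest binary arithmetic:

```python
import sys

def machine_epsilon() -> float:
    # Halve eps while 1 + eps/2 is still distinguishable from 1;
    # the last eps that made a difference is the machine epsilon.
    eps = 1.0
    while 1.0 + eps / 2 > 1.0:
        eps /= 2
    return eps

assert machine_epsilon() == sys.float_info.epsilon == 2.0 ** -52
```

On IEEE binary64 hardware the loop terminates at 2^{-52}, matching the language constant sys.float_info.epsilon.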