In industrial instrumentation, accuracy is the measurement tolerance, or transmission, of the instrument; it defines the limits of the errors made when the instrument is used in normal operating conditions.

Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the true value. The accuracy and precision of a measurement process are usually established by repeatedly measuring some traceable reference standard. Such standards are defined in the International System of Units (abbreviated SI, from the French ''Système international d'unités'') and maintained by national standards organizations such as the National Institute of Standards and Technology in the United States.

This also applies when measurements are repeated and averaged. In that case, the term standard error is properly applied: the precision of the average is equal to the known standard deviation of the process divided by the square root of the number of measurements averaged. Further, the central limit theorem shows that the probability distribution of the averaged measurements will be closer to a normal distribution than that of individual measurements.
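To make the relationship above concrete, the minimal sketch below estimates the standard error of the mean from a set of repeated measurements, applying the stated rule that the precision of the average equals the standard deviation divided by the square root of the number of measurements. The measurement values are illustrative assumptions, and the spread is estimated from the sample itself rather than taken as a known process value.

<syntaxhighlight lang="python">
import math

# Illustrative repeated measurements of a traceable reference standard (assumed values).
measurements = [100.02, 99.98, 100.05, 99.97, 100.01, 100.03, 99.99, 100.00]

n = len(measurements)
mean = sum(measurements) / n

# Sample standard deviation of the individual measurements.
variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)
std_dev = math.sqrt(variance)

# Standard error of the mean: the precision of the average improves as sqrt(n).
standard_error = std_dev / math.sqrt(n)

print(f"mean = {mean:.4f}")
print(f"standard deviation of individual measurements = {std_dev:.4f}")
print(f"standard error of the average of {n} measurements = {standard_error:.4f}")
</syntaxhighlight>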
With regard to accuracy we can distinguish:
• the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration (illustrated in the sketch after this list).
• the combined effect of that and precision.
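As an illustration of the bias term, the sketch below compares the mean of repeated readings against a reference value and applies the resulting correction, in the spirit of a simple single-point calibration. The reference value, readings, and the helper function name are assumptions made for the example.

<syntaxhighlight lang="python">
# Assumed reference value of a traceable standard and repeated instrument readings.
reference_value = 100.00
readings = [100.12, 100.08, 100.15, 100.09, 100.11]

mean_reading = sum(readings) / len(readings)

# Bias: difference between the mean of the measurements and the reference value.
bias = mean_reading - reference_value
print(f"bias = {bias:+.3f}")

# A simple calibration correction subtracts the established bias from new readings.
def correct(raw_reading, bias=bias):
    return raw_reading - bias

print(correct(100.10))  # corrected reading, with the systematic offset removed
</syntaxhighlight>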
A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Where not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m (the last significant place is the tenths place), while a recording of 843 m would imply a margin of error of 0.5 m (the last significant digits are the units).

A reading of 8,000 m, with trailing zeros and no decimal point, is ambiguous; the trailing zeros may or may not be intended as significant figures. To avoid this ambiguity, the number could be represented in scientific notation: 8.0 × 10³ m indicates that the first zero is significant (hence a margin of 50 m), while 8.000 × 10³ m indicates that all three zeros are significant, giving a margin of 0.5 m. Similarly, one can use a multiple of the basic measurement unit: 8.0 km is equivalent to 8.0 × 10³ m. It indicates a margin of 0.05 km (50 m).
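The convention can be expressed mechanically. The sketch below is an assumed helper (not part of any standard library) that takes a recorded value as a string and returns the implied margin of error: one-half the value of the last significant place.

<syntaxhighlight lang="python">
from decimal import Decimal

def implied_margin(recorded: str) -> float:
    """Return half the value of the last significant place of a recorded number.

    Trailing zeros without a decimal point (e.g. "8000") are treated as
    significant here, even though such a reading is ambiguous in practice.
    """
    d = Decimal(recorded)
    # The exponent of the Decimal gives the position of the last recorded digit.
    return 0.5 * float(Decimal(10) ** d.as_tuple().exponent)

print(implied_margin("843.6"))    # 0.05
print(implied_margin("843"))      # 0.5
print(implied_margin("8.0e3"))    # 50.0  (two significant figures)
print(implied_margin("8.000e3"))  # 0.5   (four significant figures)
</syntaxhighlight>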
However, reliance on this convention can lead to false precision errors when accepting data from sources that do not obey it. For example, a source reporting a number like 153,753 with precision ±5,000 looks like it has precision ±0.5; under the convention it would have been rounded to 150,000.

Alternatively, in a scientific context, if it is desired to indicate the margin of error with more precision, one can use a notation such as 7.54398(23) × 10⁻¹⁰ m, meaning a range of between 7.54375 and 7.54421 × 10⁻¹⁰ m.
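The concise parenthetical notation can be expanded programmatically. The following sketch is an assumed, hypothetical helper that converts a value written as, for example, 7.54398(23) into an explicit (lower, upper) range by applying the quoted uncertainty to the last digits shown.

<syntaxhighlight lang="python">
import re
from decimal import Decimal

def expand_concise_uncertainty(text: str):
    """Expand notation like '7.54398(23)' into (lower, upper) Decimals.

    The digits in parentheses give the uncertainty in the last decimal
    places of the quoted value.
    """
    match = re.fullmatch(r"(\d+)\.(\d+)\((\d+)\)", text.strip())
    if match is None:
        raise ValueError(f"unrecognised format: {text!r}")
    integer, fraction, unc_digits = match.groups()
    value = Decimal(f"{integer}.{fraction}")
    # Scale the parenthesised digits so they line up with the last decimal places.
    uncertainty = Decimal(unc_digits) * Decimal(10) ** -len(fraction)
    return value - uncertainty, value + uncertainty

low, high = expand_concise_uncertainty("7.54398(23)")
print(low, high)  # 7.54375 7.54421  (in units of 10⁻¹⁰ m for the example above)
</syntaxhighlight>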
Precision includes:
• repeatability: the variation arising when all efforts are made to keep conditions constant by using the same instrument and operator, and repeating during a short time period; and
• reproducibility: the variation arising when using the same measurement process among different instruments and operators, and over longer time periods.

In engineering, precision is often taken as three times the standard deviation of the measurements, representing the range within which 99.73% of measurements can be expected to occur. For example, an ergonomist measuring the human body can be confident that 99.73% of their extracted measurements fall within ± 0.7 cm if using the GRYPHON processing system, or within ± 13 cm if using unprocessed data.
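As a rough illustration of the engineering convention just described, the sketch below computes a three-standard-deviation precision band from a sample of measurements and checks what fraction of the sample falls inside it. The data are randomly generated for the example, not taken from any real study.

<syntaxhighlight lang="python">
import random
import statistics

random.seed(0)

# Simulated measurements: a nominal value plus normally distributed noise (assumed).
nominal, noise_sd = 175.0, 0.7  # e.g. a body measurement in cm
measurements = [random.gauss(nominal, noise_sd) for _ in range(10_000)]

mean = statistics.fmean(measurements)
sd = statistics.stdev(measurements)

# Engineering convention: precision stated as +/- 3 standard deviations,
# the band expected to contain about 99.73% of measurements.
precision_band = 3 * sd
inside = sum(abs(x - mean) <= precision_band for x in measurements) / len(measurements)

print(f"precision = +/- {precision_band:.2f} cm")
print(f"fraction of measurements within the band = {inside:.4%}")
</syntaxhighlight>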
==In classification==