The term originated in
astronomy, when it was discovered that numerous observers making simultaneous observations would record slightly different values (for example, in recording the exact time at which a
star crossed the wires of a
reticule in a
telescope), some differing by enough to cause problems in larger calculations. The effect was first noticed in 1796, when the
Astronomer Royal Nevil Maskelyne dismissed his assistant David Kinnebrook because Kinnebrook could not reduce the discrepancy between his observations and Maskelyne's own values. The problem lay forgotten until it was analysed two decades later by
Friedrich Wilhelm Bessel at
Königsberg Observatory in
Prussia. Setting up an experiment to compare their values, Bessel and an assistant measured the times at which several stars crossed the wires of a reticule on different nights. Bessel found that his own timings ran ahead of his assistant's by more than a second. In response, astronomers became increasingly suspicious of the results of other astronomers and of their own assistants, and began systematic programs to remove or lessen the effects. These included attempts to automate observations (appealing to the presumed
objectivity of machines); training observers to avoid certain known errors (such as those caused by lack of
sleep); developing machines that allowed multiple observers to make observations at the same time; taking redundant data and using techniques such as the
method of least squares to derive likely values from them; and trying to quantify the biases of individual observers so that they could be subtracted from the data. It became a major topic in
experimental psychology as well, and was a principal motivation for developing methods to deal with
error in astronomy.

==James and Jung==