Prior to the development of the
magnitude scale, the only measure of an earthquake's strength or "size" was a subjective assessment of the intensity of shaking observed near the
epicenter of the earthquake, categorized by various
seismic intensity scales such as the
Rossi-Forel scale. ("Size" is used in the sense of the quantity of energy released, not the size of the area affected by shaking, though higher-energy earthquakes do tend to affect a wider area, depending on the local geology.) In 1883
John Milne surmised that the shaking of large earthquakes might generate waves detectable around the globe, and in 1889 Ernst von Rebeur-Paschwitz observed in Germany seismic waves attributable to an earthquake in Tokyo. In the 1920s Harry Wood and John Anderson developed the Wood–Anderson seismograph, one of the first practical instruments for recording seismic waves. Wood then built, under the auspices of the
California Institute of Technology and the
Carnegie Institute, a network of seismographs stretching across
Southern California. He also recruited the young and unknown Charles Richter to measure the seismograms and locate the earthquakes generating the seismic waves. In 1931
Kiyoo Wadati showed how he had measured, for several strong earthquakes in Japan, the amplitude of the shaking observed at various distances from the epicenter. He then plotted the logarithm of the amplitude against the distance and found a series of curves that showed a rough correlation with the estimated magnitudes of the earthquakes. Richter resolved some difficulties with this method and then, using data collected by his colleague
Beno Gutenberg, he produced similar curves, confirming that they could be used to compare the relative magnitudes of different earthquakes. To produce a practical method of assigning an absolute measure of magnitude required additional developments. First, to span the wide range of possible values, Richter adopted Gutenberg's suggestion of a
logarithmic scale, in which each whole-number step represents a tenfold increase in measured amplitude, similar to the magnitude scale used by astronomers
for star brightness. Second, he wanted a magnitude of zero to be around the limit of human perceptibility. Third, he specified the Wood–Anderson seismograph as the standard instrument for producing seismograms. Magnitude was then defined as "the logarithm of the maximum trace amplitude, expressed in microns", measured at a distance of 100 kilometers. The scale was calibrated by defining a magnitude 0 shock as one that produces (at a distance of 100 kilometers) a maximum amplitude of 1 micron (1 μm, or 0.001 millimeters) on a seismogram recorded by a Wood–Anderson torsion seismometer. Finally, Richter calculated a table of distance corrections, since for distances less than 200 kilometers the attenuation is strongly affected by the structure and properties of the regional geology. When Richter presented the resulting scale in 1935, he called it (at the suggestion of Harry Wood) simply a "magnitude" scale. "Richter magnitude" appears to have originated when
Perry Byerly told the press that the scale was Richter's and "should be referred to as such." In 1956, Gutenberg and Richter, while still referring to "magnitude scale", labelled it "local magnitude", with the symbol M_\mathrm{L}, to distinguish it from two other scales they had developed, the
surface wave magnitude (MS) and
body wave magnitude (MB) scales. The
Richter magnitude of an earthquake is determined from the
logarithm of the
amplitude of waves recorded by seismographs (adjustments are included to compensate for the variation in the distance between the various seismographs and the
epicenter of the earthquake). The original formula is:

: M_\mathrm{L} = \log_{10} A - \log_{10} A_\mathrm{0}(\delta) = \log_{10} [A / A_\mathrm{0}(\delta)],

where A is the maximum excursion of the Wood–Anderson seismograph and the empirical function A_\mathrm{0} depends only on the epicentral distance of the station, \delta. In practice, readings from all observing stations are averaged after adjustment with station-specific corrections to obtain the M_\mathrm{L} value.
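The defining relation can be sketched numerically. In the snippet below, only the 100 km anchor value follows directly from the calibration described above (a magnitude-0 shock produces 0.001 mm at 100 km, so −log10 A0(100 km) = 3.0 with amplitude in millimeters); the 200 km correction is an illustrative placeholder, not a value from Richter's actual table.

```python
import math

# -log10(A0(delta)) distance corrections, keyed by epicentral distance in km.
# The 100 km entry follows from Richter's calibration: a magnitude-0 shock
# gives A = 0.001 mm at 100 km, so log10(0.001) - log10(A0) = 0, i.e. 3.0.
# The 200 km entry is a hypothetical placeholder for illustration only.
MINUS_LOG_A0 = {100: 3.0, 200: 3.5}

def local_magnitude(amplitude_mm: float, distance_km: int) -> float:
    """ML for one station: log10(A) - log10(A0(delta))."""
    return math.log10(amplitude_mm) + MINUS_LOG_A0[distance_km]

# A 10 mm maximum trace amplitude at 100 km gives ML = 1 + 3 = 4.
print(local_magnitude(10.0, 100))  # -> 4.0

# Per-station estimates are then averaged to obtain the final ML value.
readings = [(10.0, 100), (3.0, 200)]  # (amplitude in mm, distance in km)
ml = sum(local_magnitude(a, d) for a, d in readings) / len(readings)
```

The logarithm makes the tenfold-amplitude steps explicit: multiplying the trace amplitude by 10 at a fixed distance raises ML by exactly one unit.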