Earthquake prediction is an evolving science: it has not yet led to a successful prediction of an earthquake from first physical principles. Research into methods of prediction therefore focuses on empirical analysis, with two general approaches: either identifying distinctive precursors to earthquakes, or identifying some kind of geophysical trend or pattern in seismicity that might precede a large earthquake. Precursor methods are pursued largely because of their potential utility for short-term earthquake prediction or forecasting, while 'trend' methods are generally thought to be useful for forecasting, long-term prediction (10 to 100 years time scale), or intermediate-term prediction (1 to 10 years time scale).
Precursors

An earthquake precursor is an anomalous phenomenon that might give effective warning of an impending earthquake. Reports of these – though generally recognized as such only after the event – number in the thousands, some dating back to antiquity. There have been around 400 reports of possible precursors in
scientific literature, of roughly twenty different types, running the gamut from
aeronomy to zoology. None have been found to be reliable for the purposes of earthquake prediction. In the early 1990s, the
IASPEI solicited nominations for a Preliminary List of Significant Precursors. Forty nominations were made, of which five were selected as possible significant precursors, with two of those based on a single observation each. After a critical review of the scientific literature, the
International Commission on Earthquake Forecasting for Civil Protection (ICEF) concluded in 2011 that there was "considerable room for methodological improvements in this type of research." In particular, many cases of reported precursors are contradictory, lack a measure of amplitude, or are generally unsuitable for rigorous statistical evaluation. Published results are biased towards positive results, and so the rate of false negatives (earthquake but no precursory signal) is unclear.
Animal behavior

After an earthquake has already begun, pressure waves (P waves) travel twice as fast as the more damaging shear waves (S waves). Though these smaller vibrations typically go unnoticed by humans, some animals may notice them arriving a few to a few dozen seconds before the main shaking, and become alarmed or exhibit other unusual behavior.
Seismometers can also detect P waves, and the timing difference is exploited by electronic
earthquake warning systems to provide humans with a few seconds to move to a safer location. A review of scientific studies available as of 2018 covering over 130 species found insufficient evidence to show that animals could provide warning of earthquakes hours, days, or weeks in advance. Statistical correlations suggest some reported unusual animal behavior is due to smaller earthquakes (
foreshocks) that sometimes precede a large quake, which if small enough may go unnoticed by people. Foreshocks may also cause groundwater changes or release gases that can be detected by animals. Moreover, the vast majority of scientific reports in the 2018 review did not include observations showing that animals did not act unusually when there was no earthquake about to happen, meaning the behavior was not established to be predictive.
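The few-seconds window that early warning systems exploit follows directly from the P/S velocity difference. A minimal sketch, using typical crustal wave speeds as assumed illustrative values (not figures from the text):

```python
# Sketch: warning time available between P-wave arrival (detection) and
# S-wave arrival (strong shaking). Velocities are assumed crustal averages.
VP = 6.0  # P-wave speed, km/s (illustrative)
VS = 3.5  # S-wave speed, km/s (illustrative)

def warning_time(distance_km):
    """Seconds between P arrival and S arrival at a given distance."""
    return distance_km / VS - distance_km / VP

for d in (20, 60, 120):
    print(f"{d} km: {warning_time(d):.1f} s of warning")
```

The warning grows linearly with distance, which is why such systems help distant cities more than locations near the epicenter.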
Dilatancy–diffusion

In the 1970s the dilatancy–diffusion hypothesis was highly regarded as providing a physical basis for various phenomena seen as possible earthquake precursors. It was based on "solid and repeatable evidence" from laboratory experiments that highly stressed crystalline rock experienced a change in volume, or dilatancy, which causes changes in other characteristics, such as seismic velocity and electrical resistivity, and even large-scale uplifts of topography. It was believed this happened in a 'preparatory phase' just prior to the earthquake, and that suitable monitoring could therefore warn of an impending quake. Detection of variations in the relative velocities of the primary and secondary seismic waves – expressed as Vp/Vs – as they passed through a certain zone was the basis for predicting the 1973 Blue Mountain Lake (NY) and 1974 Riverside (CA) quakes. Although these predictions were informal and even trivial, their apparent success was seen as confirmation of both dilatancy and the existence of a preparatory process, leading to what were subsequently called "wildly over-optimistic statements". However, many studies questioned these results, and the hypothesis eventually languished. Subsequent study showed it "failed for several reasons, largely associated with the validity of the assumptions on which it was based", including the assumption that laboratory results can be scaled up to the real world. Another factor was the bias of retrospective selection of criteria. Other studies found dilatancy so negligible that one review concluded: "The concept of a large-scale 'preparation zone' indicating the likely magnitude of a future event, remains as ethereal as the ether that went undetected in the
Michelson–Morley experiment."
Changes in Vp/Vs

Vp is the symbol for the velocity of a seismic "P" (primary or pressure) wave passing through rock, while
Vs is the symbol for the velocity of the "S" (secondary or shear) wave. Small-scale laboratory experiments have shown that the ratio of these two velocities – represented as
Vp/
Vs – changes when rock is near the point of fracturing. In the 1970s it was considered a likely breakthrough when Russian seismologists reported observing such changes (later discounted) in the region of a subsequent earthquake. This effect, as well as other possible precursors, has been attributed to dilatancy, where rock stressed to near its breaking point expands (dilates) slightly. Study of this phenomenon near
Blue Mountain Lake in
New York State led to a successful albeit informal prediction in 1973, and it was credited for predicting the 1974 Riverside (CA) quake. A
Vp/
Vs anomaly was the basis of a 1976 prediction of a M 5.5 to 6.5 earthquake near Los Angeles, which failed to occur. Other studies relying on quarry blasts (more precise, and repeatable) found no such variations, while an analysis of two earthquakes in California found that the variations reported were more likely caused by other factors, including retrospective selection of data. It has also been noted that reports of significant velocity changes have ceased since about 1980.
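The Vp/Vs measurement itself is simple arithmetic on arrival times: both waves travel the same path, so the velocity ratio is the inverse ratio of the travel times. A minimal sketch with invented example times (not real observations):

```python
# Sketch: estimating Vp/Vs from P and S arrival times at one station.
# All times are invented example values.
t_origin = 0.0    # earthquake origin time, s (assumed known)
t_p = 10.0        # P-wave arrival, s
t_s = 17.3        # S-wave arrival, s

# Same path length for both waves, so Vp/Vs = (S travel time)/(P travel time).
vp_vs = (t_s - t_origin) / (t_p - t_origin)
print(f"Vp/Vs = {vp_vs:.2f}")
```

A drop in this ratio below the usual crustal value of about 1.73 was interpreted by dilatancy proponents as a sign of an impending quake.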
Radon emissions

Most rock contains small amounts of gases that can be isotopically distinguished from the normal atmospheric gases. There are reports of spikes in the concentrations of such gases prior to a major earthquake; this has been attributed to release due to pre-seismic stress or fracturing of the rock. One of these gases is
radon, produced by
radioactive decay of the trace amounts of uranium present in most rock. Radon is potentially useful as an earthquake predictor because it is radioactive and thus easily detected, and its short
half-life (3.8 days) makes radon levels sensitive to short-term fluctuations. A 2009 compilation listed 125 reports of changes in radon emissions prior to 86 earthquakes since 1966. However, in its 2011 critical review the International Commission on Earthquake Forecasting for Civil Protection (ICEF) found that the earthquakes with which these changes were supposedly linked were up to a thousand kilometers away, months later, and at all magnitudes. In some cases the anomalies were observed at a distant site, but not at closer sites. The ICEF found "no significant correlation".
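Why the 3.8-day half-life makes radon responsive to short-term changes can be shown with the standard decay law; this is a generic half-life calculation, not a method from the radon studies themselves:

```python
# Sketch: radioactive decay with radon-222's 3.8-day half-life.
# Remaining fraction of an emitted pulse: N(t)/N0 = 2**(-t / t_half).
T_HALF = 3.8  # days

def remaining_fraction(days):
    return 2 ** (-days / T_HALF)

# After a week only ~28% of a pulse remains, so measured levels track
# recent emission rather than a long-accumulated background.
print(f"after 7 days: {remaining_fraction(7):.2f}")
```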
Electromagnetic anomalies

Observations of electromagnetic disturbances and their attribution to the earthquake failure process go back as far as the
Great Lisbon earthquake of 1755, but practically all such observations prior to the mid-1960s are invalid because the instruments used were sensitive to physical movement. Since then various anomalous electrical, electric-resistive, and magnetic phenomena have been attributed to precursory stress and strain changes that precede earthquakes, raising hopes for finding a reliable earthquake precursor. While a handful of researchers have gained much attention with either theories of how such phenomena might be generated or claims of having observed such phenomena prior to an earthquake, no such phenomenon has been shown to be an actual precursor for prediction. A 2011 review by the
International Commission on Earthquake Forecasting for Civil Protection (ICEF) found the "most convincing" electromagnetic precursors to be
ultra low frequency magnetic anomalies, such as the Corralitos event (discussed below) recorded before the 1989 Loma Prieta earthquake. However, that observation is now believed to have been a system malfunction. Study of the closely monitored 2004 Parkfield earthquake by the United States Geological Survey found no evidence of precursory electromagnetic signals of any type. Studies up to 2012 showed that earthquakes with magnitudes less than 5 did not produce significant transient signals. The ICEF considered the search for useful precursors to have been unsuccessful as of 2011. However, a ten-year study published in 2022 of magnetic field changes in California from 2010 to 2020 reported a "statistical signal ... of modest size" 24 to 72 hours prior to nineteen earthquakes. The study reported that the signal was sufficient for forecasting, but insufficient for use in earthquake prediction.
VAN seismic electric signals

The most touted, and most criticized, claim of an electromagnetic precursor is the
VAN method of physics professors
Panayiotis Varotsos, Kessar Alexopoulos and Konstantine Nomicos (VAN) of the
University of Athens. In a 1981 paper they claimed that by measuring geoelectric voltages – what they called "seismic electric signals" (SES) – they could predict earthquakes. In 1984, they claimed there was a "one-to-one correspondence" between SES and earthquakes – that is, that "
every sizable EQ is preceded by an SES and inversely
every SES is always followed by an EQ the magnitude and the
epicenter of which can be reliably predicted" – the SES appearing between 6 and 115 hours before the earthquake. As proof of their method they claimed a series of successful predictions. Although their report was "saluted by some as a major breakthrough", among seismologists it was greeted by a "wave of generalized skepticism". In 1996, a paper VAN submitted to the journal
Geophysical Research Letters was given an unprecedented public peer-review by a broad group of reviewers, with the paper and reviews published in a special issue; the majority of reviewers found the methods of VAN to be flawed. Additional criticism was raised the same year in a public debate between some of the principals. A primary criticism was that the method is geophysically implausible and scientifically unsound. Additional objections included the demonstrable falsity of the claimed one-to-one relationship of earthquakes and SES, the unlikelihood of a precursory process generating signals stronger than any observed from the actual earthquakes, and the very strong likelihood that the signals were man-made. Further work in Greece has tracked SES-like "anomalous transient electric signals" back to specific human sources, and found that such signals are not excluded by the criteria used by VAN to identify SES. More recent work, employing modern methods of statistical physics – namely detrended fluctuation analysis (DFA), multifractal DFA, and the wavelet transform – has been reported to distinguish SES clearly from signals produced by man-made sources. The validity of the VAN method, and therefore the predictive significance of SES, was based primarily on the empirical claim of demonstrated predictive success. Numerous weaknesses have been uncovered in the VAN methodology, and in 2011 the International Commission on Earthquake Forecasting for Civil Protection concluded that the prediction capability claimed by VAN could not be validated. Most seismologists consider VAN to have been "resoundingly debunked".
On the other hand, the section "Earthquake Precursors and Prediction" of the Encyclopedia of Solid Earth Geophysics (part of Springer's Encyclopedia of Earth Sciences Series, 2011) ends as follows (just before its summary): "it has recently been shown that by analyzing time-series in a newly introduced time domain "natural time", the approach to the critical state can be clearly identified [Sarlis et al. 2008]. This way, they appear to have succeeded in shortening the lead-time of VAN prediction to only a few days [Uyeda and Kamogawa 2008]. This means, seismic data may play an amazing role in short term precursor when combined with SES data". Since 2001, the VAN group has introduced a concept they call "natural time", applied to the analysis of their precursors. Initially it is applied to SES to distinguish them from noise and relate them to a possible impending earthquake. In case of verification (classification as "SES activity"), natural time analysis is additionally applied to the general subsequent seismicity of the area associated with the SES activity, in order to improve the time parameter of the prediction. The method treats earthquake onset as a critical phenomenon. A 2020 review of the updated VAN method found that it suffers from an abundance of false positives and is therefore not usable as a prediction protocol; the VAN group answered by pointing to what it considered misunderstandings in that review's reasoning.
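The core natural-time quantity can be sketched briefly. In natural time the k-th of N events is assigned χ_k = k/N and a weight p_k proportional to its energy; the variance κ₁ = ⟨χ²⟩ − ⟨χ⟩² is then tracked as the system approaches criticality (the VAN literature reports κ₁ ≈ 0.070 as the critical value). The event energies below are invented placeholders, not real seismic data:

```python
# Sketch of the natural-time variance kappa_1 used in VAN-style analysis.
# Energies are invented placeholders.
def kappa1(energies):
    n = len(energies)
    total = sum(energies)
    p = [q / total for q in energies]        # normalized energies p_k
    chi = [(k + 1) / n for k in range(n)]    # natural time chi_k = k/N
    mean = sum(pk * ck for pk, ck in zip(p, chi))
    mean_sq = sum(pk * ck * ck for pk, ck in zip(p, chi))
    return mean_sq - mean ** 2               # kappa_1 = <chi^2> - <chi>^2

# For many equal-energy events kappa_1 approaches 1/12 (~0.083);
# the VAN group reports values near 0.070 when criticality is approached.
print(kappa1([1.0] * 1000))
```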
Corralitos anomaly

Probably the most celebrated seismo-electromagnetic event ever, and one of the most frequently cited examples of a possible earthquake precursor, is the 1989 Corralitos anomaly. In the month prior to the
1989 Loma Prieta earthquake, measurements of the
Earth's magnetic field at ultra-low frequencies by a
magnetometer in
Corralitos, California, just 7 km from the epicenter of the impending earthquake, started showing anomalous increases in amplitude. Just three hours before the quake, the measurements soared to about thirty times greater than normal, with amplitudes tapering off after the quake. Such amplitudes had not been seen in two years of operation, nor in a similar instrument located 54 km away. To many people such apparent locality in time and space suggested an association with the earthquake. Additional magnetometers were subsequently deployed across northern and southern California, but after ten years and several large earthquakes, similar signals have not been observed. More recent studies have cast doubt on the connection, attributing the Corralitos signals to either unrelated magnetic disturbance or, even more simply, to sensor-system malfunction.
Freund physics

In his investigations of crystalline physics, Friedemann Freund found that water molecules embedded in rock can dissociate into ions if the rock is under intense stress. The resulting charge carriers can generate battery currents under certain conditions. Freund suggested that perhaps these currents could be responsible for earthquake precursors such as electromagnetic radiation, earthquake lights and disturbances of the plasma in the ionosphere. The study of such currents and interactions is known as "Freund physics". Most seismologists reject Freund's suggestion that stress-generated signals can be detected and put to use as precursors, for a number of reasons. First, it is believed that stress does not accumulate rapidly before a major earthquake, and thus there is no reason to expect large currents to be rapidly generated. Secondly, seismologists have extensively searched for statistically reliable electrical precursors, using sophisticated instrumentation, and have not identified any such precursors. And thirdly, water in the Earth's crust would cause any generated currents to be absorbed before reaching the surface.
Disturbance of the daily cycle of the ionosphere

The ionosphere usually develops its lower D layer during the day, while at night this layer disappears as the plasma there turns to gas. During the night, the F layer of the ionosphere remains formed, at a higher altitude than the D layer. A
waveguide for low
HF radio frequencies up to 10
MHz is formed during the night (
skywave propagation) as the F layer reflects these waves back to the Earth. The skywave is lost during the day, as the D layer absorbs these waves. Tectonic stresses in the Earth's crust are claimed to cause waves of electric charges that travel to the surface of the Earth and affect the ionosphere.
ULF recordings of the daily cycle of the ionosphere indicate that the usual cycle could be disturbed a few days before a shallow strong earthquake. When the disturbance occurs, either the D layer is lost during the day, resulting in elevation of the ionosphere and formation of the skywave, or the D layer appears at night, resulting in a lowering of the ionosphere and hence absence of the skywave. Science centers have developed a global network of VLF transmitters and receivers that detect changes in the skywave. Each receiver also serves as a transmitter for distances of 1,000–10,000 kilometers, operating at different frequencies within the network. The general area under excitation can be determined depending on the density of the network. On the other hand, it has been shown that global extreme events, such as magnetic storms or solar flares, and local extreme events in the same VLF path, such as another earthquake or a volcanic eruption occurring close in time to the earthquake under evaluation, make it difficult or impossible to relate changes in the skywave to the earthquake of interest. In 2017, an article in the
Journal of Geophysical Research showed that the relationship between ionospheric anomalies and large seismic events (M≥6.0) occurring globally from 2000 to 2014 depended on the presence of solar weather: when the solar data are removed from the time series, the correlation is no longer statistically significant. A subsequent article in Physics of the Earth and Planetary Interiors in 2020 argues, on the basis of this statistical relationship, that solar weather and ionospheric disturbances are a potential trigger of large earthquakes. The proposed mechanism is
electromagnetic induction from the ionosphere to the fault zone. Fault fluids are conductive, and can produce
telluric currents at depth. The resulting change in the local magnetic field in the fault triggers dissolution of minerals and weakens the rock, while also potentially changing the groundwater chemistry and level. After the seismic event, different minerals may be precipitated, changing groundwater chemistry and level again. This model makes sense of the ionospheric, seismic and groundwater data.
Satellite observation of ground temperature anomalies

One way of detecting the build-up of tectonic stresses is to detect locally elevated
temperatures on the surface of the crust measured by
satellites. During the evaluation process, the background of daily variation and
noise due to atmospheric disturbances and human activities are removed before visualizing the concentration of trends in the wider area of a fault. This method has been experimentally applied since 1995. In a newer approach to explaining the phenomenon, NASA's Friedemann Freund has proposed that the
infrared radiation captured by the satellites is not due to a real increase in the surface temperature of the crust. According to this version, the emission is a result of the quantum excitation that occurs at the chemical re-bonding of
positive charge carriers (
holes) traveling from the deepest layers to the surface of the crust at a speed of 200 meters per second. The electric charge arises as a result of increasing tectonic stresses as the time of the earthquake approaches. This emission extends over a surface area of up to 500 × 500 kilometers for very large events and stops almost immediately after the earthquake.
Trends

Instead of watching for anomalous phenomena that might be precursory signs of an impending earthquake, other approaches to predicting earthquakes look for trends or patterns that lead up to an earthquake. As these trends may be complex and involve many variables, advanced statistical techniques are often needed to understand them; these are therefore sometimes called statistical methods. These approaches also tend to be more probabilistic and to involve longer time periods, and so merge into earthquake forecasting.
Nowcasting

Earthquake nowcasting, suggested in 2016, is the estimation of the current dynamic state of a seismological system, based on natural time, introduced in 2001. It differs from forecasting, which aims to estimate the probability of a future event, but it is also considered a potential basis for forecasting. Nowcasting calculations produce the "earthquake potential score", an estimate of the current level of seismic progress. Typical applications include great global earthquakes and tsunamis, aftershocks and induced seismicity, induced seismicity at gas fields, and seismic risk to global megacities.
Elastic rebound

Even the stiffest of rock is not perfectly rigid. Given a large force (such as between two immense tectonic plates moving past each other) the Earth's crust will bend or deform. According to the elastic rebound theory of Harry Fielding Reid, eventually the deformation (strain) becomes great enough that something breaks, usually at an existing fault. Slippage along the break (an earthquake) allows the rock on each side to rebound to a less deformed state. In the process energy is released in various forms, including seismic waves. The cycle of tectonic force being accumulated in elastic deformation and released in a sudden rebound is then repeated. As the displacement from a single earthquake ranges from less than a meter to around 10 meters (for an M 8 quake), the demonstrated existence of large strike-slip displacements of hundreds of miles shows the existence of a long-running earthquake cycle.
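The cycle arithmetic can be illustrated with a rough worked example. The slip rate below is an assumed illustrative value, not a figure from the text:

```python
# Sketch: elastic-rebound recurrence arithmetic. If strain accumulates at
# the long-term plate rate and is released in ~10 m of slip per large event,
# the implied cycle length follows directly. Slip rate is an assumed value.
slip_per_event_m = 10.0       # ~M 8 earthquake (from the text)
plate_rate_mm_per_yr = 35.0   # assumed long-term fault slip rate

recurrence_yr = slip_per_event_m * 1000 / plate_rate_mm_per_yr
print(f"implied recurrence: ~{recurrence_yr:.0f} years")
```

On these assumed numbers the fault would take roughly three centuries to re-accumulate the strain released by one great earthquake.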
Characteristic earthquakes

The most studied earthquake faults (such as the
Nankai megathrust, the
Wasatch Fault, and the
San Andreas Fault) appear to have distinct segments. The
characteristic earthquake model postulates that earthquakes are generally constrained within these segments. As the lengths and other properties of the segments are fixed, earthquakes that rupture the entire fault should have similar characteristics. These include the maximum magnitude (which is limited by the length of the rupture), and the amount of accumulated strain needed to rupture the fault segment. Since continuous plate motions cause the strain to accumulate steadily, seismic activity on a given segment should be dominated by earthquakes of similar characteristics that recur at somewhat regular intervals. For a given fault segment, identifying these characteristic earthquakes and measuring their recurrence interval (or conversely the return period) should therefore inform us about the next rupture; this is the approach generally used in forecasting seismic hazard.
UCERF3 is a notable example of such a forecast, prepared for the state of California. Return periods are also used for forecasting other rare events, such as cyclones and floods, and assume that future frequency will be similar to observed frequency to date. The idea of characteristic earthquakes was the basis of the
Parkfield prediction: fairly similar earthquakes in 1857, 1881, 1901, 1922, 1934, and 1966 suggested a pattern of breaks every 21.9 years, with a standard deviation of ±3.1 years. Extrapolation from the 1966 event led to a prediction of an earthquake around 1988, or before 1993 at the latest (at the 95% confidence interval). The appeal of such a method is that the prediction is derived entirely from the
trend, which supposedly accounts for the unknown and possibly unknowable earthquake physics and fault parameters. However, in the Parkfield case the predicted earthquake did not occur until 2004, a decade late. This seriously undercuts the claim that earthquakes at Parkfield are quasi-periodic, and suggests the individual events differ sufficiently in other respects to question whether they have distinct characteristics in common. The failure of the
Parkfield prediction has raised doubt as to the validity of the characteristic earthquake model itself. Some studies have questioned its various assumptions, including the key one that earthquakes are constrained within segments, and suggested that the "characteristic earthquakes" may be an artifact of selection bias and the shortness of seismological records (relative to earthquake cycles). Other studies have asked whether additional factors need to be taken into account, such as the age of the fault. Whether earthquake ruptures are more generally constrained within a segment (as is often seen), or break past segment boundaries (also seen), has a direct bearing on the degree of earthquake hazard: earthquakes are larger where multiple segments break, but in relieving more strain they will happen less often.
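The recurrence-interval extrapolation behind predictions like Parkfield's can be sketched naively from the event years given above (the published 21.9 ± 3.1 figure came from a more careful statistical fit, so these numbers differ slightly):

```python
# Sketch: naive recurrence-interval extrapolation from the Parkfield
# event years listed in the text. The published estimate (21.9 +/- 3.1 yr)
# used a more refined fit.
events = [1857, 1881, 1901, 1922, 1934, 1966]
intervals = [b - a for a, b in zip(events, events[1:])]
mean_interval = sum(intervals) / len(intervals)

next_expected = events[-1] + mean_interval
print(f"mean interval: {mean_interval:.1f} yr")
print(f"next expected: ~{next_expected:.0f}")
```

Even this crude mean points to the late 1980s, which is why the decade-late 2004 event was so damaging to the quasi-periodic interpretation.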
Seismic gaps

At the contact where two tectonic plates slip past each other, every section must eventually slip, as (in the long term) none get left behind. But they do not all slip at the same time; different sections will be at different stages in the cycle of strain (deformation) accumulation and sudden rebound. In the
seismic gap model the "next big quake" should be expected not in the segments where recent seismicity has relieved the strain, but in the intervening gaps where the unrelieved strain is the greatest. This model has an intuitive appeal; it is used in long-term forecasting, and was the basis of a series of circum-Pacific (
Pacific Rim) forecasts in 1979 and 1989–1991. However, some underlying assumptions about seismic gaps are now known to be incorrect. A close examination suggests that "there may be no information in seismic gaps about the time of occurrence or the magnitude of the next large event in the region"; statistical tests of the circum-Pacific forecasts show that the seismic gap model "did not forecast large earthquakes well". Another study concluded that a long quiet period did not increase earthquake potential.
Seismicity patterns

Various heuristically derived algorithms have been developed for predicting earthquakes. Probably the most widely known is the M8 family of algorithms (including the RTP method) developed under the leadership of
Vladimir Keilis-Borok. M8 issues a "Time of Increased Probability" (TIP) alarm for a large earthquake of a specified magnitude upon observing certain patterns of smaller earthquakes. TIPs generally cover large areas (up to a thousand kilometers across) for up to five years. Such large parameters have made M8 controversial, as it is hard to determine whether any hits that happened were skillfully predicted, or only the result of chance. M8 gained considerable attention when the 2003 San Simeon and Hokkaido earthquakes occurred within a TIP. In 1999, Keilis-Borok's group published a claim to have achieved statistically significant intermediate-term results using their M8 and MSc models for large earthquakes worldwide. However, Geller et al. are skeptical of prediction claims over any period shorter than 30 years. A widely publicized TIP for an M 6.4 quake in Southern California in 2004 was not fulfilled, nor were two other lesser-known TIPs. An in-depth study of the RTP method in 2008 found that out of some twenty alarms only two could be considered hits (and one of those had a 60% chance of happening anyway). It concluded that "RTP is not significantly different from a naïve method of guessing based on the historical rates [of] seismicity."
Accelerating moment release

Accelerating moment release (AMR; "moment" being a measurement of seismic energy), also known as time-to-failure analysis or accelerating seismic moment release (ASMR), is based on observations that foreshock activity prior to a major earthquake not only increased, but increased at an exponential rate. In other words, a plot of the cumulative number of foreshocks gets steeper just before the main shock. Following its formulation into a testable hypothesis, and a number of positive reports, AMR seemed promising despite several problems. Known issues included not being detected for all locations and events, and the difficulty of projecting an accurate occurrence time when the tail end of the curve gets steep. But rigorous testing has shown that apparent AMR trends likely result from how data fitting is done and from failing to account for spatiotemporal clustering of earthquakes. The AMR trends are therefore statistically insignificant. Interest in AMR (as judged by the number of peer-reviewed papers) has fallen off since 2004.
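The time-to-failure curve commonly fitted in AMR studies has the power-law form Ω(t) = A + B(t_f − t)^m, with B < 0 and 0 < m < 1 so that cumulative release steepens toward the failure time t_f. A minimal sketch in which all constants are invented for illustration:

```python
# Sketch of the AMR time-to-failure relation: cumulative seismic release
# Omega(t) = A + B * (t_f - t)**m, with B < 0 and 0 < m < 1, so the curve
# steepens as t approaches the failure time t_f. All constants invented.
A, B, M, T_F = 100.0, -10.0, 0.3, 10.0  # arbitrary units; T_F in years

def cumulative_release(t):
    return A + B * (T_F - t) ** M

# The increment per year grows as failure approaches (the "acceleration").
increments = [cumulative_release(t + 1) - cumulative_release(t) for t in range(9)]
print([round(x, 2) for x in increments])
```

The critique noted above is that such fits can look convincing on retrospectively selected data even when the underlying trend is not real.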
Machine learning

Rouet-Leduc et al. (2019) reported having successfully trained a regression
random forest on acoustic time series data capable of identifying a signal emitted from fault zones that forecasts fault failure. They suggested that the identified signal, previously assumed to be statistical noise, reflects the increasing emission of energy before its sudden release during a slip event, and further postulated that their approach could bound fault failure times and lead to the identification of other unknown signals. Due to the rarity of the most catastrophic earthquakes, acquiring representative data remains problematic. In response, Rouet-Leduc et al. have conjectured that their model would not need to train on data from catastrophic earthquakes, since further research has shown the seismic patterns of interest to be similar in smaller earthquakes. Deep learning has also been applied to earthquake prediction. Although
Båth's law and
Omori's law describe the magnitude of earthquake aftershocks and their time-varying properties, the prediction of the "spatial distribution of aftershocks" remains an open research problem. Using the
Theano and
TensorFlow software libraries, DeVries et al. (2018) trained a
neural network that achieved higher accuracy in the prediction of spatial distributions of earthquake aftershocks than the previously established methodology of Coulomb failure stress change. Notably, DeVries et al. (2018) reported that their model made no "assumptions about receiver plane orientation or geometry" and heavily weighted the change in
shear stress, the "sum of the absolute values of the independent components of the stress-change tensor," and the von Mises yield criterion. DeVries et al. postulated that their model's reliance on these physical quantities indicated that they might "control earthquake triggering during the most active part of the seismic cycle." For validation testing, they reserved 10% of positive training earthquake data samples and an equal quantity of randomly chosen negative samples. Arnaud Mignan and Marco Broccardo have similarly analyzed the application of artificial neural networks to earthquake prediction. In a review of the literature, they found that earthquake prediction research using artificial neural networks has gravitated towards more sophisticated models amidst increased interest in the area. They also found that neural networks achieving notable success rates were matched in performance by simpler models. They further addressed the difficulty of acquiring appropriate data for training neural networks to predict earthquakes, writing that the "structured, tabulated nature of earthquake catalogues" makes transparent machine learning models more desirable than artificial neural networks.
EMP induced seismicity

It has been reported that high-energy electromagnetic pulses can induce earthquakes within 2–6 days after emission by EMP generators. It has further been proposed that strong EM impacts could control seismicity, as the seismicity dynamics that follow appear to be much more regular than usual.

Notable predictions