
EARTHQUAKES: Models, Statistics, Testable Forecasts

Yan Y. Kagan

Published December 27, 2013, SCEC Contribution #8082

Quantitative prediction is the aim of every science. [T]he ultimate test of every scientific theory worthy of its name is its ability to predict the behavior of a system governed by the laws of said discipline. Accordingly, the most important issue in earthquake seismology is earthquake prediction. This term, however, has been the topic of scientific debate for decades. For example, Wood & Gutenberg (1935) write: ``To have any useful meaning the prediction of an earthquake must indicate accurately, {\it within narrow limits}, the region or district where and the time when it will occur -- and, unless otherwise specified, it must refer to a shock of important size and strength, since small shocks are very frequent in all seismic regions.'' Because earthquake prediction is complicated by a number of factors, Wood & Gutenberg propose the term {\it earthquake forecast} as an alternative, whereby in effect the earthquake occurrence rate is predicted.

Long-term studies, however, indicate that the prediction of individual earthquakes, as suggested in the first definition by Wood & Gutenberg, is impossible (Geller 1997; Geller {\it et al.}\ 1997; Kagan 1997b). Furthermore, as we show, even the notion of individual earthquakes or individual faults cannot be properly defined because of the fractality of the earthquake process. Therefore, below we treat the terms {\it earthquake prediction} and {\it earthquake forecast} as synonyms.

Available books on seismology primarily discuss the problems of elastic wave propagation and the study of the Earth's structure. This book takes a different approach, focusing instead on earthquake seismology defined as the rigorous quantitative study of earthquake occurrence. Even though several books on earthquake seismology and some books on earthquake prediction are available, there are no in-depth monographs that consider the stochastic modeling of fractal multidimensional processes and the rigorous statistical analysis of earthquake occurrence. In this book the results of modeling and statistical analysis are applied to evaluate the short- and long-term occurrence rates of future earthquakes, both regionally and globally, and, most importantly, to test these forecasts according to stringent criteria.

The subject of this book could therefore be roughly defined as ``Statistical Seismology'' (Vere-Jones 2009, 2010). There has been significant interest in the problems of statistical seismology in recent years: since 1998, the International Workshops on Statistical Seismology (Statsei2--Statsei7) have provided researchers with an opportunity to evaluate recent developments in statistical seismology, as well as to define future directions of research (see http://www.gein.noa.gr/statsei7/). Problems explored in these meetings include the statistical behavior of earthquake occurrence and patterns, time-dependent earthquake forecasting, and forecast evaluation. In addition, in this book we investigate the geometrical properties of the earthquake fault system and the interrelations of earthquake focal mechanisms.

Thus, this book is a comprehensive and methodologically rigorous analysis of earthquake occurrence. Earthquake processes are inherently multidimensional: in addition to the origin time, 3-D location, and measure of size for each earthquake, the orientation of the rupture surface and its displacement require for their representation either second-rank symmetric tensors or quaternions. Models based on the theory of stochastic multidimensional point processes are employed here to approximate the earthquake occurrence pattern and evaluate its parameters. The terms ``moment'' and ``moment tensor'', used in seismology to signify ``the seismic moment'' and ``the seismic moment tensor'', are throughout this book distinguished from the moments used in statistics.
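As a small illustration of the quaternion representation mentioned above, the sketch below (Python with scipy; the strike/dip/rake values and the Euler-angle convention are hypothetical choices made only for illustration) encodes two rupture-plane orientations as quaternions and computes the rotation angle between them. A full treatment of focal-mechanism disorientation would also account for the symmetry of the double-couple source, which is omitted here.

```python
# Minimal sketch: rupture-plane orientations as quaternions and the
# rotation angle between two focal mechanisms.  The strike/dip/rake
# values and the Euler convention are hypothetical, for illustration only.
import numpy as np
from scipy.spatial.transform import Rotation as R

def mechanism_rotation(strike, dip, rake):
    """Map (strike, dip, rake) in degrees to a 3-D rotation (one possible convention)."""
    return R.from_euler("zxz", [strike, dip, rake], degrees=True)

m1 = mechanism_rotation(30.0, 60.0, 90.0)   # hypothetical mechanism 1
m2 = mechanism_rotation(45.0, 55.0, 80.0)   # hypothetical mechanism 2

print("quaternion of mechanism 1 (x, y, z, w):", m1.as_quat())

# Relative rotation and its angle ("disorientation"); a full treatment
# would minimize this angle over the double-couple symmetry group.
delta = m1.inv() * m2
print("rotation angle between mechanisms: %.1f deg" % np.degrees(delta.magnitude()))
```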

Adequate mathematical and statistical techniques have only recently become available for analyzing fractal temporal, spatial, and tensor patterns of point process data in general and earthquake data in particular. Furthermore, only in the past twenty to thirty years have the processing power of modern computers and the quality, precision, and completeness of earthquake datasets become sufficient to allow a detailed, full-scale investigation of earthquake occurrence.

Since the early nineteenth century, the Gaussian (normal) distribution has been used almost exclusively for the statistical analysis of data. However, the Gaussian distribution is a special, limiting case of a broad class of {\it stable} probability distributions. These distributions, which, with the exception of the Gaussian law, have a power-law (heavy) tail, have recently become an object of intense mathematical investigation and are now applied in physics, finance, and other disciplines. One can argue that they are more useful in explaining natural phenomena than the Gaussian law. For stable distributions with power-law tail exponent $1.0 < \beta < 2.0$ the variance is infinite; if $\beta \le 1.0$ the mean is infinite. The application of these distributions to the analysis of seismicity and other geophysical phenomena would significantly increase our quantitative understanding of their fractal patterns.
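As a rough numerical illustration of these heavy tails, the following sketch (assuming Python with scipy; note that scipy's stability parameter alpha plays the role of the tail exponent written as $\beta$ above) compares how the sample variance of a stable sample behaves against that of a Gaussian sample.

```python
# Minimal sketch: heavy tail of a stable distribution vs. the Gaussian.
# scipy's stability parameter 'alpha' corresponds to the tail exponent
# called beta in the text; 1 < alpha < 2 implies infinite variance.
import numpy as np
from scipy.stats import levy_stable, norm

rng = np.random.default_rng(0)
n = 100_000
stable_sample = levy_stable.rvs(alpha=1.5, beta=0.0, size=n, random_state=rng)
gauss_sample = norm.rvs(size=n, random_state=rng)

# The running sample variance of the stable draw keeps jumping as n grows,
# while the Gaussian sample variance settles near 1.
for m in (1_000, 10_000, 100_000):
    print(f"n={m:>7}: stable var = {stable_sample[:m].var():12.1f}, "
          f"Gaussian var = {gauss_sample[:m].var():.3f}")
```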

After a careful analysis of systematic and random effects in earthquake registration and in the interpretation of seismograms, we show that most of these statistical distribution parameters have universal values. These results help explain such classical distributions as Omori's law and the Gutenberg-Richter relation, which have been used in earthquake seismology for many decades. We show that the parameters of these distributions are universal constants defined by simple mathematical models. We derive a negative binomial distribution for earthquake numbers as a substitute for the Poisson distribution, and we determine the fractal correlation dimension for the spatial distribution of earthquake hypocenters. We also investigate the disorientation of earthquake focal mechanisms and show that it follows the rotational Cauchy distribution. Finally, we evaluate the parameters of these distributions in various earthquake zones and estimate their systematic and random errors.
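To make the negative-binomial point concrete, here is a minimal sketch with synthetic, hypothetical counts: it fits a negative binomial by the method of moments to over-dispersed event counts and shows how strongly a Poisson model with the same mean under-predicts large counts.

```python
# Minimal sketch: over-dispersed earthquake counts and a method-of-moments
# negative-binomial fit.  The synthetic counts are illustrative only.
import numpy as np
from scipy.stats import nbinom, poisson

rng = np.random.default_rng(1)
counts = nbinom.rvs(n=2.0, p=0.2, size=5000, random_state=rng)  # clustered counts

mean, var = counts.mean(), counts.var()
print(f"sample mean = {mean:.2f}, sample variance = {var:.2f}")  # var >> mean

# Method-of-moments negative-binomial parameters (valid when var > mean):
p_hat = mean / var
n_hat = mean**2 / (var - mean)
print(f"fitted NB: n = {n_hat:.2f}, p = {p_hat:.2f}")

# A Poisson law with the same mean badly under-predicts large counts:
k = 30
print(f"P(N >= {k}): NB = {nbinom.sf(k - 1, n_hat, p_hat):.4f}, "
      f"Poisson = {poisson.sf(k - 1, mean):.2e}")
```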

These statistical and mathematical advances made it possible to produce quantitative forecasts of earthquake occurrence. The theoretical foundations for such forecasts, based on multidimensional stochastic point processes, were first proposed by Kagan (1973). We later showed how long- and short-term forecasts can be computed in practice and how their efficiency can be estimated. Since 1999, daily forecasts have been produced, initially for several seismically active regions and more recently for the whole Earth. The recent Tohoku, Japan, mega-earthquake (2011), which caused many deaths and very significant economic losses, demonstrates the importance of forecasting the possible size of an earthquake, its recurrence time, and its temporal clustering properties.
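In its simplest form, such a rate forecast can be thought of as a smoothing of past seismicity. The sketch below is only a schematic illustration of that idea (hypothetical catalogue, grid, and Gaussian kernel width), not the operational forecast algorithm described in the book.

```python
# Minimal sketch: a long-term rate map obtained by smoothing past epicentres
# with an isotropic Gaussian kernel on a flat (x, y) grid in kilometres.
# Catalogue, grid, and kernel width are hypothetical.
import numpy as np

def smoothed_rate(epicentres, grid_x, grid_y, sigma_km=30.0):
    """Relative rate density on the grid, normalised to sum to one."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    rate = np.zeros_like(gx, dtype=float)
    for ex, ey in epicentres:
        d2 = (gx - ex) ** 2 + (gy - ey) ** 2
        rate += np.exp(-d2 / (2.0 * sigma_km ** 2))
    return rate / rate.sum()

past_events = np.array([[10.0, 20.0], [12.0, 22.0], [80.0, 75.0]])  # hypothetical epicentres
grid = np.linspace(0.0, 100.0, 101)
forecast = smoothed_rate(past_events, grid, grid)
print("cell with the highest forecast rate:",
      np.unravel_index(forecast.argmax(), forecast.shape))
```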

An important issue in the study of earthquake occurrence and seismic hazard is the verification of seismicity models. Until recently, seismic event models and predictions were based exclusively on case histories. It was widely believed that long-term earthquake occurrence, at least for large earthquakes, is quasi-periodic or cyclic (the seismic gap and characteristic earthquake hypotheses). The Parkfield earthquake prediction experiment and many other forecasts were therefore based on these models. However, when we tested the seismic gap model against the earthquake record, its performance turned out to be worse than that of a null-hypothesis forecast based on a random choice (a temporal Poisson model). Instead of being quasi-periodic, large earthquakes are clustered in time and space. The consequences of the Tohoku event underscore that all statistical properties of earthquake occurrence need to be known for correct prediction: the extent of the losses was to a large degree due to the use of a faulty characteristic-earthquake model to evaluate the maximum possible earthquake size.
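Forecast testing of the kind described above can be illustrated by comparing the Poisson log-likelihood of observed bin counts under two candidate rate models; the numbers below are hypothetical, and the comparison is a simplified sketch in the spirit of such likelihood tests, not the actual gap-hypothesis evaluation.

```python
# Minimal sketch: scoring two rate forecasts by the Poisson log-likelihood
# of the counts observed in each space-time bin.  All numbers are hypothetical.
import numpy as np
from scipy.stats import poisson

observed = np.array([0, 3, 1, 0, 5, 0])                 # observed counts per bin

forecast_a = np.array([0.2, 2.5, 1.0, 0.1, 4.0, 0.2])   # clustering-style forecast
forecast_b = np.full(6, observed.sum() / 6.0)            # uniform null with the same total

loglik_a = poisson.logpmf(observed, forecast_a).sum()
loglik_b = poisson.logpmf(observed, forecast_b).sum()
print(f"log-likelihood: A = {loglik_a:.2f}, B = {loglik_b:.2f}")
print(("forecast A preferred" if loglik_a > loglik_b else "forecast B preferred")
      + f" (log-likelihood ratio = {loglik_a - loglik_b:.2f})")
```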

Earthquake occurrence models that are too vague to be testable, or that are rejected by rigorous objective statistical tests (see above), are not discussed in detail here. In our opinion, the only models worthy of analysis are those which produce testable earthquake forecasts.

Since this book is an initial attempt to thoroughly and rigorously analyze earthquake occurrence, many unresolved issues remain. In the final section we list some challenging questions that can now be addressed with thorough theoretical studies and observational statistical analysis. There is, of course, the possibility that some of these problems have been solved in other scientific disciplines; in that case, we will need to find out how to implement these solutions in earthquake science.

Key Words
Probabilistic forecasting; Probability distributions; Earthquake interaction, forecasting, and prediction; Seismicity and tectonics; Statistical seismology; Dynamics: seismotectonics

Citation
Kagan, Y. Y. (2013). EARTHQUAKES: Models, Statistics, Testable Forecasts. Hoboken, USA: Wiley/AGU. doi: 10.1002/9781118637913.