Much of our work was devoted to the Phase 3 report ("Probabilistic Seismic Hazard in Southern California: Uncertainties due to Assumptions and Models," to be submitted, 1998), now in draft form. Jackson wrote the text for Chapter 2, Seismic Sources, while Jackson, Kagan, Shen, Ge, and Potter contributed data and analysis. The purpose of our work on Phase 3 is to provide a suite of alternative source models that explore the range of seismic hazard estimates consistent with available data. This work grew out of the Phase 2 Report, which was based on the best available methods for estimating earthquake probabilities from geological, seismological, and geodetic data. The methods used in Phase 2 predicted a higher rate of earthquakes than has been observed since 1850, suggesting either that the model over-predicts or that the earthquake rate since 1850 has been anomalously low. In Phase 3 we developed the Phase 2 model more carefully and provided alternative models that are consistent with both the historical catalog and the total seismic moment rate for southern California.
One model in the Phase 3 Report is essentially identical to that used by the CDMG and USGS for the national seismic hazard map (Petersen et al., 1996; Frankel et al., 1996); in southern California it is based largely on the Phase 2 report (Jackson et al., 1995). This model satisfies the total moment rate estimate, but like the Phase 2 model it predicts a larger rate of earthquakes than observed. In a modification, we kept the same slip rates and segment geometry as the CDMG/USGS model but introduced a cascade strategy that replaces many intermediate-sized earthquakes with fewer, larger ones. The modified model satisfies the total moment rate with fewer earthquakes, but it still over-predicts relative to the historical catalog.
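The cascade strategy rests on simple moment accounting: because seismic moment grows as 10^(1.5m), a single large earthquake can absorb the moment budget of many intermediate-sized ones. A minimal sketch of that arithmetic (an illustration only, not the report's actual cascade algorithm):

```python
def moment_ratio(m_large, m_small):
    """Number of m_small events whose combined seismic moment equals one
    m_large event, using moment proportional to 10**(1.5*magnitude)."""
    return 10.0 ** (1.5 * (m_large - m_small))

# One M7.5 rupture carries the moment of roughly 178 M6.0 events, so
# trading many intermediate events for one cascade-built large event
# preserves the moment budget while lowering the total event count.
n = moment_ratio(7.5, 6.0)
print(f"{n:.0f} M6.0 events per M7.5 event")  # ~178
```

This is why a cascade model can match the same total moment rate while predicting fewer earthquakes overall.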
Two alternative models satisfy both the observed seismic moment rate and the historical catalog. Both assume a simple "truncated Gutenberg-Richter" magnitude-frequency distribution with three parameters: the rate of small earthquakes (a-value), the logarithmic slope (b-value) of the magnitude density function, and the maximum magnitude. We assumed that the b-value and maximum magnitude are the same everywhere and that the a-value varies geographically. In one model the a-value is proportional to smoothed seismicity from the historical catalog; in the other it is proportional to the maximum shear strain rate determined from geodetic data. We take the b-value to be 0.9, estimated from the earthquake catalog. The maximum magnitude can then be estimated from the total seismic moment rate and the total rate of small earthquakes. For a moment rate of 1.2*10^19 Nm/yr and a rate of magnitude 6 and larger earthquakes of 0.47/yr, the implied maximum magnitude is about 8.2. While this value exceeds the magnitude of any historical earthquake, it is consistent with empirical relationships between magnitude and fault length (for example, Wells and Coppersmith, 1994; Pegler and Das, 1996) and with the lengths of potential ruptures on the San Andreas and other faults in southern California (up to 600 km). Figure 1 shows the magnitude distributions implied by the various models and by the earthquake catalog.
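The maximum-magnitude calculation can be sketched numerically. Assuming the standard moment-magnitude relation log10(M0) = 1.5*Mw + 9.05 (M0 in Nm) and a Pareto moment distribution with exponent beta = b/1.5 truncated at the maximum moment, solving the moment budget for the given rates yields a maximum magnitude near 8, broadly consistent with the 8.2 quoted above (the exact value depends on the moment-magnitude constant and the form of truncation used):

```python
import math

b = 0.9               # b-value estimated from the catalog
beta = b / 1.5        # exponent of the moment (Pareto) distribution
moment_rate = 1.2e19  # total seismic moment rate, Nm/yr
rate_m6 = 0.47        # rate of magnitude >= 6 earthquakes, per year

def moment(mw):
    """Seismic moment (Nm) for moment magnitude mw (Hanks-Kanamori constants)."""
    return 10.0 ** (1.5 * mw + 9.05)

m_min = moment(6.0)
# Mean moment of a Pareto distribution with lower bound M_min, truncated at M_max:
#   E[M] = beta/(1-beta) * M_min**beta * (M_max**(1-beta) - M_min**(1-beta))
# Solve moment_rate = rate_m6 * E[M] for M_max, then convert back to magnitude.
x = moment_rate / (rate_m6 * beta / (1.0 - beta) * m_min ** beta) + m_min ** (1.0 - beta)
max_moment = x ** (1.0 / (1.0 - beta))
m_max = (math.log10(max_moment) - 9.05) / 1.5
print(f"implied maximum magnitude: {m_max:.2f}")
```

With these particular constants the result comes out close to 8.0; the small difference from the report's 8.2 illustrates how sensitive the implied maximum magnitude is to the assumed truncation details.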
The two models that are consistent with the historical catalog are not based on presumed rupture of particular fault segments, nor do they assume that future earthquakes are necessarily associated with known faults. Instead, earthquakes are assumed to be spatially distributed, with focal mechanisms similar to those of nearby historical earthquakes. In future studies we plan to construct a detailed source model that matches the historical earthquake rate and the total moment rate while associating at least half of the earthquakes with known faults.
So far, all of the earthquake recurrence models for southern California that satisfy both the moment rate and the catalog within their uncertainties predict rare earthquakes of magnitude 8 or larger. We have begun to explore the consequences that such earthquakes would have for seismic hazard estimation (Jackson and Kagan, 1997a) and for earthquake insurance (Kagan, 1997c).
As part of our work on the Phase 3 report, we also produced a revised earthquake catalog for southern California. The catalog is believed to be complete since 1850 for magnitude 7.0 and above; since 1870 for magnitude 6.8; since 1880 for magnitude 6.5; since 1890 for magnitude 6.0; since 1910 for magnitude 5.3; and since 1925 for magnitude 5.0. For each earthquake in the catalog we estimated a focal mechanism, and for the larger earthquakes we estimated "sub-quakes" with distributed slip along the causative fault and a combined moment equal to that of the cataloged earthquake.
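The completeness thresholds above amount to a step function of time, so a catalog can be screened against them with a simple lookup. The thresholds are those quoted in the text; the function itself is only an illustrative sketch:

```python
# (start year, minimum complete magnitude) pairs from the text, most recent first
COMPLETENESS = [
    (1925, 5.0),
    (1910, 5.3),
    (1890, 6.0),
    (1880, 6.5),
    (1870, 6.8),
    (1850, 7.0),
]

def is_complete(year, magnitude):
    """True if an event of this year and magnitude falls within the part of
    the catalog believed to be complete."""
    for start, min_mag in COMPLETENESS:
        if year >= start:
            return magnitude >= min_mag
    return False  # before 1850 no completeness is claimed at any magnitude

print(is_complete(1900, 6.2))  # True: M >= 6.0 is complete since 1890
print(is_complete(1860, 6.5))  # False: only M >= 7.0 is complete in 1860
```

Filtering a catalog this way before estimating rates avoids biasing the a-value downward with under-reported early seismicity.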
Jackson and Kagan have joined with Geller and Mulargia (Geller et al., 1997a,b; Kagan 1997d,e; Kagan and Jackson, 1997; Jackson and Kagan, 1997b) to initiate a public debate on the predictability of earthquakes. In a widely publicized Perspective in Science magazine, we argued that efforts to predict individual earthquakes within useful time and distance windows have not succeeded, and that there are no recognized earthquake precursors with predictive power. The fact that earthquakes result from instabilities in a highly nonlinear system makes useful prediction of individual earthquakes extremely unlikely for the foreseeable future. Some indicators of unusual earthquake potential may eventually be discovered, but they will most likely yield only very modest conditional earthquake probabilities. At present, it remains a challenge even to estimate and test unconditional probabilities of earthquakes within regions the size of southern California.
The failure of the predicted Parkfield earthquake to occur on time presents a challenge to the characteristic earthquake model, one of the cornerstones of recent seismic hazard evaluations (including the Phase 2 report and the CDMG/USGS hazard map). We proposed an explanation, and an estimate of the yearly probability of a moderate earthquake at Parkfield (Kagan, 1997b; Jackson and Kagan, 1997c). Our model is that moderate earthquakes can occur almost anywhere on the San Andreas fault, and that Parkfield was by chance the location of more than the expected number of earthquakes. According to this model, the expected rate of earthquakes is best estimated using a regional seismicity model, rather than by extrapolating from a selected list of earthquakes. We estimate the annual probability of a moderate earthquake at Parkfield to be a few percent.
Frankel, A., Mueller, C., Barnhard, T., Perkins, D., Leyendecker, E. V., Dickman, N., Hanson, S., and Hopper, M., 1996. National seismic-hazard maps: documentation June 1996, U. S. Geological Survey Open-File Report 96-532.
Jackson, D. D., Aki, K., Cornell, C. A., Dieterich, J. H., Henyey, T. L., Mahdyiar, M., Schwartz, D., and Ward, S. N. (Working Group on the Probabilities of Future Large Earthquakes in Southern California), 1995. Seismic hazards in southern California: probable earthquakes, 1994-2024, Bull. Seism. Soc. Am., 85, 379-439.
Pegler, G., and Das, S., 1996. Analysis of the relationship between seismic moment and fault length for large crustal strike-slip earthquakes between 1977-92, Geophys. Res. Lett., 23, 905-908.
Petersen, M. D., Bryant, W. A., Cramer, C. H., Cao, T., Reichle, M. S., Frankel, A. D., Lienkaemper, J. J., McCrory, P. A., and Schwartz, D. P., 1996. Probabilistic seismic hazard assessment for the state of California, U. S. Geological Survey Open-File Report 96-706.
Wells, D. L., and Coppersmith, K. J., 1994. New empirical relationships among magnitude, rupture length, rupture width, rupture area, and surface displacement, Bull. Seism. Soc. Am., 84, 974-1002.
Publications resulting from this project:
Kagan, Y. Y., 1997a. Comment (Review 25.1) on "Application of the concentration parameter of seismoactive faults to Southern California" by A. Zavyalov and R. E. Habermann, Pure Appl. Geoph. (PAGEOPH), 149, 137-144, (SCEC #183).
Geller, R. J., D. D. Jackson, Y. Y. Kagan, and F. Mulargia, 1997a. Earthquakes cannot be predicted, Science, 275, 1616-1617, (SCEC #404).
Kagan, Y. Y., 1997b. Statistical aspects of Parkfield earthquake sequence and Parkfield prediction experiment, Tectonophysics, 270, 207-219, (SCEC #291).
Geller, R. J., D. D. Jackson, Y. Y. Kagan, and F. Mulargia, 1997b. Response -- Cannot earthquakes be predicted?, Science, 278, 488-490, (SCEC #405).
Kagan, Y. Y., 1997c. Earthquake size distribution and earthquake insurance, Communications in Statistics: Stochastic Models, 13(4), 775-797, (SCEC #289).
Kagan, Y. Y., 1997d. Are earthquakes predictable?, Geophys. J. Int., 131, in press, (SCEC #367).
Abstracts and reports resulting from this project:
Kagan, Y. Y., 1997e. Are earthquakes predictable?, Seismological Research Letters, 68(2), p. 296.
Jackson, D. D., and Y. Y. Kagan, 1997a. Earthquake recurrence and huge earthquakes, Seismological Research Letters, 68(2), p. 299.
Kagan, Y. Y., and D. D. Jackson, 1997. Earthquake precursors: are they useful?, Eos Trans. AGU, 78(17), Spring AGU Meet. Suppl., p. S213 (invited).
Jackson, D. D., and Y. Y. Kagan, 1997b. Credibility standards for candidate quake precursors, Eos Trans. AGU, 78(17), Spring AGU Meet. Suppl., p. S207.
Jackson, D. D., and Y. Y. Kagan, 1997c. The Parkfield probability problem, Eos Trans. AGU, 78(17), Spring AGU Meet. Suppl., p. S218 (invited).
List of captions
Figure 1. Magnitude-frequency distributions for models in the Phase 3 Report. SR and SS refer to the models in which earthquake probability is proportional to strain rate and smoothed seismicity, respectively. CDMG refers to the joint CDMG/USGS hazard model. CP is adapted from the CDMG model by including more cascades.
Figure 2. Map showing distribution of sources, with their focal mechanisms, used to assess the hazard from historical seismicity in the Phase 3 Report. Larger earthquakes are represented as sums of smaller earthquakes having the same total moment.