2012 SCEC Source Inversion Validation Workshop
Conveners: P. Martin Mai, Danijel Schorlemmer, and Morgan Page
Date: September 9, 2012 (08:00-12:00)
Location: Hilton Palm Springs Resort, Palm Springs, CA
SCEC Award and Report: 12230
The Source Inversion Validation (SIV) group conducted its 2012 workshop in conjunction with the Annual SCEC Meeting in Palm Springs (Sept 9-12, 2012). This was the 7th SIV workshop since 2008, gathering scientists working on earthquake source inversion to discuss inversion methods and approaches, to share the latest results of the SIV exercises, and to discuss the continuation of the SIV project.
There were approximately 50 participants in attendance during the 4-hr workshop. The detailed program of the workshop is given below, and the “Notes” below summarize the presentations and subsequent discussions. The main outcome of the 2012 SIV workshop was a detailed plan for attracting external funding (for both US and non-US scientists) to support and motivate SIV-participating researchers and their teams. It was also decided to rapidly write a joint collaborative paper on the SIV project, both to create visibility and attract further research groups to participate, and to raise awareness in the seismology and earthquake-engineering communities regarding uncertainty quantification in earthquake source inversion.
The SIV project is making continuous progress. We hope that the activities above (seeking funding; publishing a paper) will help to accelerate the pace of the SIV effort.
Notes from 2012 SIV workshop: presentations and discussions
First, the latest results from the crack-like dynamic rupture benchmark were presented. Some models did not achieve an overall good visual fit to the input model. While the models had rupture times similar to the input model, rise times were poorly determined and the slip patterns were markedly different. One modeler’s synthetics showed reversed phases on the vertical seismograms, due to modeler error. The slip-rate functions in the dynamic input model are more complex than is typically assumed by modelers, so this is one potential area for improvement. This in fact suggests including the complete space-time history of the rupture models in the quantitative comparison: because the inversion approaches make different assumptions about the local slip-rate function, quantitative comparisons suffer from the differing parameterizations used to capture it, which can be avoided by comparing the full space-time rupture models.
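Comparing full space-time rupture histories could be done, for example, with a direct grid-based misfit that requires no assumption about the local slip-rate parameterization. The sketch below is illustrative only (it is not the benchmark's actual metric): it assumes both models have been resampled onto a common (time × along-strike × down-dip) grid, and all arrays are hypothetical.

```python
import numpy as np

def spacetime_misfit(stf_ref, stf_inv):
    """Normalized L2 misfit between two space-time slip-rate histories.

    Both arrays are sampled on the same (time, along-strike, down-dip)
    grid, so no assumption about the local slip-rate function's
    parameterization is needed.
    """
    return np.linalg.norm(stf_inv - stf_ref) / np.linalg.norm(stf_ref)

# Hypothetical example: a reference model, and a version of it whose
# rise time is twice too long (a common inversion error discussed above)
t = np.linspace(0.0, 4.0, 81)
ref = np.exp(-((t - 1.0) / 0.3) ** 2)[:, None, None] * np.ones((81, 10, 5))
inv = np.exp(-((t - 1.0) / 0.6) ** 2)[:, None, None] * np.ones((81, 10, 5))

print(round(spacetime_misfit(ref, ref), 3))  # identical models -> 0.0
print(spacetime_misfit(ref, inv) > 0.0)      # rise-time error -> positive misfit
```

A metric of this form penalizes rise-time and rupture-time errors jointly, rather than comparing fitted slip-rate parameters that differ between inversion codes.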
Specific comments to this presentation:
- Ruth Harris thought that the dynamic model was too complex and heterogeneous for an early blind test, and that we should start with simpler models, an approach that proved helpful in the dynamic code validation exercise.
- Yoshi Kaneko disagreed; he thought that peak slip can never be resolved because it is a high-frequency feature, and that even a homogeneous initial stress would not help the modelers resolve the high slip rates produced by a dynamic model.
- There was much discussion about the frequency bands in which modelers are truly obtaining a good fit. Metadata could be added to modeler submissions, including information on how modelers invert the data, such as which frequency band of the data they use. The misfit metrics presented should also be available in different frequency bands, so that we can distinguish which modelers are resolving details at higher frequencies.
Next, the new website was presented (available at equake-rc.info). It contains improved and flexible online tools for model submission and comparison. Martin Mai also presented a proposal for the next SIV benchmark exercise: a kinematic rupture within a heterogeneous structure that produces seismic scattering. The velocity model in this exercise would be a layered structure with 5% random density fluctuations. Slip and rise time would be variable, and there would be minor rupture-velocity variations.
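The proposed benchmark's velocity-structure perturbations might be generated along the following lines. This is a minimal sketch: the layer depths, densities, and random seed are illustrative assumptions, and applying the same 5% scale to the vertical interface perturbation (as later suggested for a separate test) is also an assumption; only the 5% density fluctuation comes from the proposal itself.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 1-D background model: layer-top depths (km) and densities (g/cm^3)
layer_tops = np.array([0.0, 1.0, 5.0, 16.0])
densities  = np.array([2.2, 2.5, 2.7, 3.0])

# 5% random density fluctuations, as in the proposed benchmark
rho_pert = densities * (1.0 + 0.05 * rng.standard_normal(densities.size))

# Vertical perturbation of the interface depths (the free surface stays fixed);
# the 5% scale here is an assumption for illustration only
tops_pert = layer_tops.copy()
tops_pert[1:] *= 1.0 + 0.05 * rng.standard_normal(layer_tops.size - 1)

print(np.all(np.abs(rho_pert / densities - 1.0) < 0.25))  # fluctuations stay small
print(tops_pert[0] == 0.0)                                # free surface unperturbed
```

In a real benchmark the density fluctuations would be a spatially correlated random field within each layer rather than one value per layer, but the bookkeeping is the same.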
Specific comments to this presentation:
- It was suggested to include a link from the SCEC SIV website to this new web page.
- Bill Ellsworth suggested adding a second stage of the exercise that would give the exact Green’s functions to the modelers, thus allowing us to isolate the effect of the velocity-structure uncertainty.
- Participants thought adding noise to the velocity structure was a good idea, and that it more adequately represents the noise sources present in real inversions than adding random noise to seismograms.
- Chen Ji pointed out the difference between random noise and coherent noise.
- Jean-Paul Ampuero thought the 3D velocity model was an important improvement, and that the same amount of other information should be provided to the modelers to isolate this effect.
- It was suggested that the velocity-structure noise should be brought to a physically realistic level.
- Ralph Archuleta suggested adding velocity-structure variations with depth by perturbing the layers vertically, perhaps in another test, as this is the more important variation for low-frequency inversions up to 1 Hz.
Subsequently, Zacharie Duputel presented recent developments using W-phases in rapid-response source inversions. They can achieve good fits to focal mechanism, moment, and location within 6-12 minutes of the event for regional solutions and within 20-35 minutes for global solutions. Their inversions are sensitive only to first-order features, but are useful for evaluating tsunami hazard quickly after an event. They have validated their algorithm on over 800 M≥6.5 earthquakes since 1990, and for 99% of the events they are within 0.2 magnitude units of the Harvard CMT solution. For the Tohoku earthquake, retrospectively, they could get a M9.0 solution within 20 minutes. Real-time implementations of their algorithm are currently running at NEIC and the Pacific Tsunami Warning Center; more locations are in the works. In their algorithm they take into account observational errors from station noise and the oversampling of the waveform (that is, the fact that nearby points on the waveform are not independent data). They use a non-deterministic theory that allows for “fuzziness” in predictions. Uncertainties are added to point-source locations; by taking these uncertainties into account, they can recover the correct moment tensor even if the location is incorrect. In summary, they have found that a more realistic uncertainty analysis leads to an improvement of the solution itself.
Next, Martin Vallée presented a method for rapid extraction of seismic source properties based on teleseismic body-wave data. The resolution of teleseismic inversions is particularly poor for shallow earthquakes, where two events with very different moments can have similar waveforms. This resolution problem is well understood for the teleseismic case and is discussed in Menke (1985). Multiple events can also be a problem, since body waves from one earthquake will be hidden within the surface waves of another. Given these resolution problems, it is best either to add other types of data or to recover only the gross features of the rupture. In their inversions, they focus on recovering the relative source time functions, which are better resolved and can be used to determine the number of isolated asperities in the rupture.
In the following talk, Chen Ji presented results from his method of quickly imaging large earthquakes, emphasizing the trade-offs between speed and accuracy. He said it was very important to finely sample the Green’s functions (more finely than, say, the model parameters are discretized on the fault plane). His team is also working on a multiple double-couple (MDC) inversion technique, to include uncertainty analyses with their solutions. In the MDC inversions, geometry errors tend to make the solution appear non-double-couple. The next step is to go from the MDC inversions to a traditional inversion with several defined faults. Chen emphasized that high variance reductions do not imply a match to moment, peak slip, or high-frequency data. Predictions of surface motions, PGA, and PGV can be more reliable than the source details of the models.
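The caution about variance reduction can be illustrated with a toy calculation (the waveforms below are hypothetical, not from any actual inversion): a synthetic that misses all of the high-frequency energy in the data can still score a very high VR, because that energy carries only a small fraction of the total variance.

```python
import numpy as np

def variance_reduction(d, s):
    """VR = 1 - ||d - s||^2 / ||d||^2, expressed in percent."""
    return 100.0 * (1.0 - np.sum((d - s) ** 2) / np.sum(d ** 2))

# Hypothetical waveforms: the synthetic matches the dominant long-period
# signal but entirely misses a small high-frequency component
t = np.linspace(0.0, 20.0, 2001)
data  = np.sin(2 * np.pi * 0.1 * t) + 0.1 * np.sin(2 * np.pi * 2.0 * t)
synth = np.sin(2 * np.pi * 0.1 * t)

print(variance_reduction(data, synth) > 95.0)  # high VR despite missing all high-f energy
```

Since the high-frequency term has one-tenth the amplitude, it contributes only about 1% of the variance, so the VR stays near 99% even though features such as peak slip that depend on that band are unconstrained.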
Finally, Henriette Sudhaus gave a presentation on rigorous testing and automatization of inversion models. The idea is to set up a testing center which would run the codes automatically with minimal tuning from the modelers. Many tests could be run to develop statistics on how the station distribution and source details affect inversion results. One goal of the testing center would be transparency, both of the models and the testing methods.
Subsequently, the participants engaged in an open discussion on all aspects of the SIV project. Some of the comments and suggestions are listed below:
- John Anderson said that predicting waveform amplitudes is a good test for engineering applications: peak ground amplitudes are not sensitive to rupture details, but predicting them is what engineers need.
- David Wald discussed the resolution of inversion models as a function of the time available to develop them. Models take time to stabilize, and early models are typically only used for basic rupture information (e.g., overall dimensions, whether the source is tsunamigenic). More sophisticated uses (seismic coupling, stress-change models) require more development. Is it our purpose to determine the ultimate resolving power of inversions, and by focusing on automated models are we under-estimating it?
- There was much discussion as to if and when the SIV project should attempt to move to a testing-center platform. Currently, inversion methods are not fully automated. The testing center could help to determine how well we can do with the data we currently have – we could look, for example, at statistics of test earthquakes similar to a given real earthquake situation (i.e., the amount of data, station geometry, fault geometry).
- Frantisek Gallovic suggested that inverters provide multiple solutions and/or uncertainty analyses with their solutions.
- Martin Mai responded that the uncertainty analysis would automatically be part of the tests – with enough statistics from the testing-center runs we would have a better idea of the resolving power of the inversions. The testing center would also help us to determine what causes the differences between the models.
- Moving to the testing center would make the current individual benchmark approach obsolete. Several participants thought, however, that moving to the testing center was premature.
To close the meeting, small groups were formed to brainstorm on funding, the testing center, and testing metrics and future benchmarks. It was decided to continue the benchmark approach for a few years, with plans to work towards a testing center in the long term. A possible SIV workshop in conjunction with the SSA meeting in Utah was discussed. Paul Segall gave a closing remark on the rupture models used in the ground motion simulation validation exercise. They are using the SCEC broadband simulation platform to generate a suite of models; the model that fits known ground motions best will be used. This points to a possible future collaboration for the SIV group.
- Bill Ellsworth said we should carefully frame the questions we expect the SIV exercise to answer. He suggested such questions should be: 1) prediction of strong ground motions, 2) static offset on a fault, 3) the rupture history (e.g., if the rupture is supershear), and 4) source physics (e.g., the source-velocity function at a point).
- Dave Jackson added that we should focus both on the practical things inversions can tell us (like seismic moment, rupture endpoints, and tsunamigenic potential) and articulate practical goals like investigating segmentation and elastic rebound.
- Chen Ji spoke against models that use extremely simple (e.g., blocky) sources. A coarse parameterization, he said, changes the structure of the null space, and that error propagates to the model parameters. He prefers a more realistic representation with tiny subfaults, which reduces the error imposed by the parameterization itself.
- Before moving to a testing center, Yoshi Kaneko said we need to address resolution questions from existing benchmarks. We don’t understand why the models for the current benchmarks are getting such different results, and to what extent these differences are due to fundamental resolution problems vs. modeler (for example, Green’s function) errors. The big challenge is that we need more modelers; currently there are only four modelers who have submitted results for the dynamic benchmark. The main problem is that we need funding to incentivize researchers to participate (unfortunately, the testing center would require even more financial and logistic support).
- There was much discussion of the amount of funding required to obtain more participation, and possible funding sources. Ruth Harris, who has obtained very good levels of participation in the dynamic code validation exercise, said that half (the non-US portion) of her modelers are working for free, and the remaining have received approximately $5000 each.
- Guangfu Shao said that publications are needed to encourage modeler involvement. Co-authorship on a group paper would encourage grad students, for example, to participate. Ralph Archuleta countered that to have a student work on the SIV project, he would need enough funds to pay a grad-student salary for one academic quarter.
- Terry Tullis said that we really need a critical mass of researchers participating; the remaining modelers will then have to participate in order for their results to be believed. He also estimated that ~$150K is needed to fund 5-7 researchers, which is too large for SCEC or NEHRP alone. An NSF proposal might be the way to go.
- Ruth Harris emphasized how important SIV results are to many subfields, such as ground motion, earthquake source physics, and earthquake recurrence – these can be used as selling points in order to obtain funding.
- Martin Mai mentioned a possible funding source that he has discussed with Norm Abrahamson.
Conclusion and decisions
The following main points emerged from the 2012 SIV workshop during the SCEC Annual Meeting:
- Seek funding to support research teams: the SIV team will apply for SCEC funding for a 2013 workshop, and try to obtain financial support for individual research teams through SCEC (depending on the availability of new grants to SCEC). Additionally, a small team from California will seek NSF funding to support US scientists and to hold a dedicated SIV workshop.
- Collaborative paper: the SIV team will try to quickly write a first paper on the current and past findings of the SIV tests, to alert the seismological community to this project and attract more participating groups, but also to communicate the need for rigorous uncertainty quantification in earthquake source inversion.
- Future benchmarks: the SIV team will generate additional benchmark tests, for dynamic and kinematic ruptures, embedded in simple and complex velocity structures. Additional quantitative measures for evaluating the quality/reliability of earthquake rupture models will be developed and used to compare solutions with the input rupture model.
- Introduction and overview of workshop goals [pdf]
- Current SIV benchmarks and results [pdf]
- Recent developments in source inversion using the W-phase [pdf]
- Rapid extraction of seismic source properties - strengths and limitations of teleseismic body-wave data [pdf]
- On rapid automated finite-fault inversions (Guangfu Shao, Chen Ji) [pdf]
- Quantifying the quality of kinematic source optimizations through rigorous testing and automatization [pdf]
- Current and future SIV benchmarks
- Towards an SIV testing center
- General SIV strategy and funding