
Working Group: Computational Science


Leader: Yifeng Cui
Co-Leader: Eric Dunham

Research Objectives

The Computational Science group promotes the use of advanced numerical modeling techniques and high performance computing (HPC) to address the emerging needs of the SCEC user and application community on HPC platforms. The group works with SCEC scientists across a wide range of topics to take advantage of rapidly changing computer architectures and algorithms. It also engages and coordinates with national HPC labs/centers and vendors in crosscutting efforts that enable large-scale computing milestones. The group encourages research using national supercomputing resources, and supports students from both geoscience and computer science backgrounds in developing their skills in this area. Projects listing Computational Science as their primary area should involve significant software-based processing or high performance computing in some way; research utilizing standard desktop computing should list the most relevant non-Computational Science disciplinary or focus group as the primary area.

Research Strategies

  • Reengineering and optimization of HPC codes required to reach SCEC research goals, targeting parallel systems with multi-core processors, GPU accelerators, and/or Xeon Phi coprocessors, with emphasis on issues such as performance, portability, interoperability, power efficiency, and reliability.
  • Novel algorithms for earthquake simulation, particularly those that either improve efficiency and accuracy, or expand the class of problems that can be solved (e.g., adaptive mesh refinement).
  • Optimization of earthquake-cycle simulators that can resolve faulting processes across the range of scales required to investigate stress-mediated fault interactions (including those caused by dynamic wave propagation), generate synthetic seismicity catalogs, and assess the viability of earthquake rupture forecasts.
  • Tools and algorithms for uncertainty quantification in large-scale inversion and forward-modeling studies, and for managing I/O, data repositories, workflows, advanced seismic data formats, visualization, and end-to-end approaches.
  • Data-intensive computing tools, including but not limited to InSAR and geodesy, 3D tomography, cross-correlation algorithms used in ambient noise seismology, and other signal processing techniques used, for example, to search for tectonic tremor (a minimal cross-correlation kernel is sketched after this list).
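
The cross-correlation kernels mentioned in the last strategy are representative of the data-intensive inner loops these efforts target. The sketch below is a minimal, illustrative frequency-domain cross-correlation of two station records using NumPy; the array names and one-day example are assumptions, not code from any SCEC project.

```python
import numpy as np

def xcorr_fft(trace_a, trace_b):
    """Cross-correlate two equal-length, detrended station records in the
    frequency domain (a minimal sketch of the inner kernel used in
    ambient-noise processing; not any specific SCEC code)."""
    n = len(trace_a)
    nfft = 2 * n  # zero-pad to avoid circular wrap-around
    spec_a = np.fft.rfft(trace_a, nfft)
    spec_b = np.fft.rfft(trace_b, nfft)
    cc = np.fft.irfft(spec_a * np.conj(spec_b), nfft)
    # Reorder so zero lag sits in the middle of the output
    return np.concatenate((cc[-(n - 1):], cc[:n]))

# Example: correlate one day of 1-Hz-sampled noise from two stations
rng = np.random.default_rng(0)
day_a, day_b = rng.standard_normal(86400), rng.standard_normal(86400)
cc = xcorr_fft(day_a, day_b)
lag_of_peak = np.argmax(np.abs(cc)) - (len(day_a) - 1)
```

In production ambient-noise workflows this operation is repeated over millions of station pairs and days, which is what motivates the HPC-oriented optimizations listed above.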

Recent Results

This past year’s accomplishments include:

  1. Both static and adaptive mesh refinement have been used to efficiently obtain highly accurate solutions to rupture dynamics problems. An initial FD discontinuous mesh implementation is being tested for stability.
  2. Several GPU-based codes accelerated High-F, CyberShake, and high-order discontinuous Galerkin (DG) simulations, helping to generate CyberShake 15.4.
  3. Simulations showed that both scattering and intrinsic attenuation reduce seismic wave amplitudes.
  4. Inelastic material response, in both the near-fault and near-surface regions, was demonstrated to substantially decrease ground motion.
  5. A rotationally invariant, 3D version of the stress relaxation equations was developed, based on the rate-state evolution equation, for the earthquake simulator RSQSim.
  6. A new finite difference method was introduced to study earthquake sequences in heterogeneous media, accounting for both viscoelastic and plastic off-fault material response.
  7. Multiple HPC systems were used for user-driven validation studies, bringing predicted ground motions into closer agreement with observations.
  8. Data-intensive HPC techniques were applied for earthquake detection in continuous waveform data.
Figure 1. Summary results and analysis of simulations for the Mw 5.4 2008 Chino Hills earthquake using different velocity models (CVM-S and CVM-H) and showcasing the connection from geoscience modeling to engineering applications. The top row shows results corresponding to the simulation done using CVM-S, while the bottom row shows those corresponding to CVM-H. Each column from left to right shows: (1) The surface shear wave velocity for each model. 3D meshes built for these simulations consist of up to 15 billion finite elements. (2) The simulation results for the surface horizontal peak ground velocity. The star indicates the epicenter location. (3) Validation results using goodness-of-fit metrics to compare synthetics to data. In this study we used over 300 recording stations. GOF scores closer to 10 (lighter colors) indicate a better fit with the data. (4) Comparison with attenuation relationships used in engineering to estimate peak ground velocity. The red line corresponds to the actual trend from earthquake data, the two black lines indicate an upper and lower bound based on empirical relationships, and the green line shows the trend of the surface results from the simulation, which are shown as a gray cloud of points on the background. Simulations were done using Hercules by Taborda and Bielak (2013, 2014).
Figure 2. Fourier amplitude as a function of distance centered at 2.25 Hz using 100+ strong motion stations for the 2008 Mw5.4 Chino Hills, CA, earthquake. Dots depict values for individual stations and lines depict a 5-point moving average. Rrup indicates the closest distance to the ruptured surface of the fault plane. From Withers et al., 2015.
Figure 3. (left) The histograms of RWMs for ambient noise Green's functions for the initial model (CCA00) and the updated model (CCA05). (right) The histograms of frequency-dependent group delay measurements (dtg) for ambient noise Green's functions for the initial model (CCA00) and the updated model (CCA05).
Figure 4. Peak ground velocities for the Los Angeles region obtained from dynamic simulations of a M 7.8 earthquake rupturing the southern San Andreas fault from SE to NW. Simulations were done on OLCF Titan for (a) a viscoelastic medium and (b) an elasto-plastic (nonlinear) medium. As shown in the ratio map (c), the reduction in peak ground velocity by nonlinearity is up to 50% for the ShakeOut scenario (Roten et al., 2014, 2015).

Dynamic Rupture Simulations

Advances in HPC center on source physics, in particular fault geometric complexity as the origin of the variability in slip and rupture velocity that contributes to the generation of incoherent high-frequency radiation from earthquakes. Ensemble dynamic rupture simulations, involving thousands of realizations of the stochastic fault geometry, were introduced to quantify the range of stress levels at which earthquakes will occur, with contributions to resistance coming from both friction and fault geometric complexity. Correlations between fluctuations in slip and rupture velocity were linked to the local fault geometry, offering a new procedure for generating pseudo-dynamic rupture histories (Trugman and Dunham, 2014) for use in more efficient reciprocity-based ground motion simulations. Additionally, the short spatial and temporal scales over which fault strength and slip rate vary near the rupture front motivate the introduction of a highly refined mesh that tracks the rupture front (and other sharp features such as wavefronts). Both static and adaptive mesh refinement were first applied to rupture dynamics problems during SCEC4 (Kozdon and Wilcox, 2014; Pelties et al., 2014; Kozdon and Dunham, 2015), and show great potential for future high-resolution modeling studies.

The Dynamic Rupture Code Comparison Group has tested the participating codes against benchmark exercises that incorporate a range of features, including single and multiple planar faults, single rough faults, slip-weakening, rate-state, and thermal pressurization friction, elastic and visco-plastic off-fault behavior, complete stress drops that lead to extreme ground motion, heterogeneous initial stresses, and heterogeneous material (rock) structure. The group's goal is to verify that the participating codes operate as expected when simulating these types of earthquake scenarios and the resulting strong ground shaking. This year's benchmarks focused on ruptures in layered and depth-dependent material structures, and on ruptures on nonplanar faults with and without off-fault plasticity.
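
To make one of the benchmark ingredients concrete, the sketch below evaluates a linear slip-weakening friction law of the kind exercised in the code-comparison benchmarks. It is a minimal NumPy illustration; the parameter values are assumptions chosen to resemble typical benchmark settings, not values taken from any specific exercise.

```python
import numpy as np

def slip_weakening_strength(slip, sigma_n, mu_s=0.677, mu_d=0.525, d_c=0.40):
    """Fault shear strength under linear slip weakening.

    Strength falls linearly from mu_s*sigma_n to mu_d*sigma_n as slip grows
    from 0 to d_c, and stays at the dynamic level afterwards. Parameter
    values are illustrative, not taken from any specific benchmark.
    """
    slip = np.asarray(slip, dtype=float)
    weakened = np.minimum(slip, d_c) / d_c      # fraction of weakening completed
    friction = mu_s - (mu_s - mu_d) * weakened  # current friction coefficient
    return friction * sigma_n                   # shear strength (units of sigma_n)

# Example: strength on a fault patch with 120 MPa effective normal stress
slips = np.linspace(0.0, 1.0, 6)                # metres of accumulated slip
print(slip_weakening_strength(slips, sigma_n=120e6))  # Pa
```

Rate-state and thermal pressurization laws add state variables and their own evolution equations, which is one reason the benchmark exercises test them separately.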

OpenSHA/UCERF3 Development

Kevin Milner and Thomas Jordan continue to develop OpenSHA, an open-source, Java-based platform for conducting seismic hazard analysis (SHA). This development transforms the results of SCEC science into practical products like UCERF3. Recently, supercycles and synchronization signatures were analyzed in synthetic seismic sequences; synchronization is a key concept in nonlinear dynamics. UCERF3 does not explicitly model supercycles, but they emerge from long runs of physics-based rupture simulators, such as the RSQSim and ALLCAL models. In these models, the synchronization of large events on different fault sections leads to variations in seismic energy release of ±50% on time scales of about 200 years. Spectral analysis of a million-year RSQSim catalog shows synchronization harmonics with a fundamental period of 200 years and a corresponding depletion at longer event periods. This synchronization signature is absent in UCERF3 and in randomized versions of the RSQSim catalog. Further investigation of synchronization and its time dependence using two-dimensional “recurrence plots” has been conducted to map the temporal recurrence of proximate RSQSim states. The results are used to speculate on the hazard implications of the supercycle hypothesis.
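
As a rough illustration of the recurrence-plot construction mentioned above, the sketch below bins a synthetic catalog into an annual moment-release series and marks pairs of years whose release is nearly equal. It is a minimal NumPy example with made-up inputs; the published analysis operates on RSQSim state information rather than this simple scalar series.

```python
import numpy as np

def annual_moment_series(event_times_yr, event_moments, n_years):
    """Sum the seismic moment released in each 1-year bin of a catalog."""
    bins = np.arange(0, n_years + 1)
    series, _ = np.histogram(event_times_yr, bins=bins, weights=event_moments)
    return series

def recurrence_plot(series, tolerance):
    """Boolean matrix R[i, j] = True when the 'states' in years i and j are
    closer than `tolerance` (here a state is just annual moment release)."""
    diff = np.abs(series[:, None] - series[None, :])
    return diff < tolerance

# Synthetic example: 1000 years of random event times and moments
rng = np.random.default_rng(1)
times = rng.uniform(0, 1000, size=5000)          # event times in years
moments = 10 ** rng.uniform(16, 19, size=5000)   # seismic moments in N*m
series = annual_moment_series(times, moments, n_years=1000)
R = recurrence_plot(series, tolerance=0.2 * series.std())
# Periodic banding in R would indicate the kind of ~200-year
# synchronization signature discussed in the text.
```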

Accelerating Dynamic Rupture and Wave Propagation Simulations

Progress has been made in accelerating dynamic rupture and wave propagation simulations using GPUs. Hercules-GPU is a CUDA-based implementation in which the stiffness contributions, the attenuation contributions of the BKT model, and the displacement updates are computed entirely on the accelerator using the CUDA SDK. This GPU code was used for the La Habra validation exercise on OLCF Titan and achieved a speedup of about 2.5x with respect to the CPU code. The GPU version of AWP-ODC was used in the recent CyberShake 15.4 study, the first 1-Hz seismic hazard map for the Los Angeles region, where it reduced strain Green tensor (SGT) calculation time by nearly 80%. Jeremy Kozdon and his team have developed a GPU-enabled, high-order discontinuous Galerkin finite element code for earthquake rupture dynamics based on quadrilateral and hexahedral elements. This approach is capable of handling both adaptivity in order (p-adaptivity) and adaptivity in element size (h-adaptivity). The extension of the numerical approach is enabled through the OCCA library, an abstraction over several offloading paradigms for fine-grained, on-node parallelism. The CPU+GPU+MPI implementation currently includes elastodynamics with slip-weakening friction and has shown nearly ideal weak scaling across 32 NVIDIA Titan Black GPUs. The implementation is being validated, and dynamic mesh adaptivity is being added.

Computational Developments of Earthquake Simulators

A form of off-fault stress relaxation, based on rate-state seismicity equations, has been developed by James Dieterich’s team at UCR to resolve several problems associated with geometrically complex faults in elastic media. Slip on geometrically complex faults in elastic media produces fault interaction stresses that non-physically grow without limit. These stresses in turn suppress fault slip, break the linear slip vs. length scaling for ruptures, and result in non-convergent solutions as model resolution increases. They developed a rotationally invariant, 3D version of the stress relaxation equations based on the rate-state evolution equation. This involves calculating the inner product of 3D stress tensors with reference stress tensors (set by steady-state stability conditions), and employing the scalar results in the stress relaxation equations. This generates results similar to, but more general than, previous work that used shear and normal stresses resolved onto a reference plane for the equations. Earthquake simulators typically use the boundary element method to compute static elastic stress changes due to slip on faults. Faults can be discretized using either rectangular or triangular elements, and there was previously a widespread view that triangular elements, which can cover a fault surface without gaps, would be more accurate. However, an extensive set of quantitative tests by Barall and Tullis has demonstrated that this is not always true; there are many cases, depending on fault curvature, where rectangular elements are more accurate. Their work will help guide the development of more accurate earthquake simulation tools.
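
The scalar at the heart of the rotationally invariant formulation is the inner product of the local 3D stress tensor with a reference stress tensor. The sketch below shows that projection in NumPy; the numerical values and the reference tensor are illustrative assumptions, not the steady-state stability conditions used in RSQSim.

```python
import numpy as np

def stress_projection(stress, reference):
    """Frobenius (component-wise) inner product of two symmetric 3x3 stress
    tensors: the scalar fed into the stress relaxation equations."""
    return np.tensordot(stress, reference, axes=2)  # sum_ij s_ij * r_ij

# Example: a local stress state and an arbitrary unit-norm reference tensor
# (both made up for illustration; units of MPa)
stress = np.array([[-50.0,  10.0,   0.0],
                   [ 10.0, -80.0,   5.0],
                   [  0.0,   5.0, -60.0]])
reference = np.array([[0.0, 1.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0]]) / np.sqrt(2.0)
scalar = stress_projection(stress, reference)
# This scalar plays the role that shear and normal stresses resolved onto a
# reference plane played in the earlier, plane-based formulation.
```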

FD Discontinuous Mesh Implementation

Finite-difference discontinuous grid implementations suffer inherently from stability problems arising from the exchange of wavefield information between the fine and coarse grids. Staggered grids, for which analytical stability conditions are less tractable, are particularly challenging. The instability is likely related to down-sampling of the wavefield from the fine grid onto the coarse grid, and possibly to the interpolation used when transferring the wavefield from the coarse grid to the fine grid. Preliminary analysis by Kim Olsen and his group at SDSU suggests that stability is affected by several factors, including media properties, spatial dimension, the presence of absorbing boundaries, and anelastic attenuation.
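
The two grid-transfer operations implicated in the instability can be sketched in one dimension as follows. This is a minimal NumPy illustration assuming a factor-of-3 refinement ratio; it is not code from the SDSU implementation.

```python
import numpy as np

REFINE = 3  # fine grid has REFINE times the resolution of the coarse grid

def fine_to_coarse(fine_field):
    """Down-sample the overlap-zone wavefield from the fine grid onto the
    coarse grid by taking every REFINE-th sample (the step suspected of
    injecting instability when done without adequate filtering)."""
    return fine_field[::REFINE]

def coarse_to_fine(coarse_field):
    """Interpolate the coarse-grid wavefield onto fine-grid points by linear
    interpolation (the other transfer step mentioned in the text)."""
    coarse_x = np.arange(coarse_field.size) * REFINE
    fine_x = np.arange((coarse_field.size - 1) * REFINE + 1)
    return np.interp(fine_x, coarse_x, coarse_field)

# Round trip on a smooth test wavefield
x = np.linspace(0.0, 1.0, 301)
fine = np.sin(2 * np.pi * 3 * x)
coarse = fine_to_coarse(fine)
back_on_fine = coarse_to_fine(coarse)  # differs from `fine` by interpolation error
```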

SEISM Tools

SEISM-IO is designed with highly condensed, easy-to-understand APIs for users to choose from. The library simplifies the programming of parallel I/O, with an interface that hides complex low-level operations. To accommodate the generalized interface, the earlier SEISM-IO library was modified to integrate the different initialization/open/write processes of MPI-IO, HDF5, PnetCDF, and ADIOS. The generalized interface has been tested with the AWP-ODC wave propagation solver on the NSF TACC Stampede system, and the library has been used in the latest ShakeOut simulations by Daniel Roten. Scott Callaghan et al. have optimized the CyberShake workflow, which automates and manages I/O and enables remote job execution on HPC systems. The enhanced workflow execution is efficiently split across multiple HPC systems, and the previously heavy I/O load on HPC parallel file systems is significantly reduced to achieve optimal performance. Charles Williams and Laura Wallace have developed a workflow for using PyLith-generated Green's functions with the Defnode geodetic inversion code. The workflow allows users to perform the necessary tasks for both SSE inversions and interseismic coupling inversions in a semi-automated way.
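
As an illustration of the low-level detail such an interface hides, the sketch below writes one rank's block of a distributed wavefield with a collective MPI-IO call via mpi4py. The file name and array size are assumptions; SEISM-IO's actual API differs and also wraps HDF5, PnetCDF, and ADIOS back ends.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank owns a contiguous block of the global wavefield snapshot
# (size and file name are made up for this sketch).
local_n = 1_000_000
local_block = np.full(local_n, float(rank), dtype=np.float32)

fh = MPI.File.Open(comm, "snapshot.bin",
                   MPI.MODE_WRONLY | MPI.MODE_CREATE)
offset = rank * local_block.nbytes   # byte offset of this rank's block
fh.Write_at_all(offset, local_block)  # collective write across all ranks
fh.Close()
```

A condensed I/O interface replaces this open/offset/write bookkeeping with a single call, and can route the same request to whichever back end (MPI-IO, HDF5, PnetCDF, ADIOS) performs best on a given system.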

Efficient Similarity Search for Continuous Waveform Data

Continuous seismic waveform data offers a wealth of information, but many events go undetected with current methods. Template matching requires prior selection of event waveforms, and alternative cross-correlation methods are extremely computationally expensive. Yoon et al. (2015) have applied similarity search techniques developed by computational scientists to massive earthquake data sets for the first time. The method distills waveforms into sparse, binary fingerprints, enabling a hierarchical search across these fingerprints. In most cases, the method has detection capabilities comparable to cross-correlation, but with vastly smaller computational cost. This new approach will enable study of data sets that are simply impossible to analyze with current methods, opening a new era of seismic monitoring.
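
A toy version of the fingerprint-and-compare idea is sketched below: each waveform window is reduced to a sparse binary fingerprint by keeping its strongest short-time spectral amplitudes, and candidate pairs are scored by Jaccard similarity. The window length, threshold, and similarity measure are illustrative assumptions, not the parameters or the locality-sensitive hashing used by Yoon et al. (2015).

```python
import numpy as np

def binary_fingerprint(window, n_fft=64, keep_frac=0.05):
    """Reduce a waveform window to a sparse binary fingerprint by keeping
    only the strongest short-time spectral amplitudes (toy illustration of
    the sparse fingerprints described in the text)."""
    n_frames = len(window) // n_fft
    frames = window[:n_frames * n_fft].reshape(n_frames, n_fft)
    spec = np.abs(np.fft.rfft(frames, axis=1))   # crude spectrogram
    flat = spec.ravel()
    k = max(1, int(keep_frac * flat.size))
    threshold = np.partition(flat, -k)[-k]       # k-th largest value
    return flat >= threshold                     # sparse boolean fingerprint

def jaccard(fp_a, fp_b):
    """Similarity between two binary fingerprints."""
    inter = np.logical_and(fp_a, fp_b).sum()
    union = np.logical_or(fp_a, fp_b).sum()
    return inter / union if union else 0.0

# Example: compare two windows of continuous data
rng = np.random.default_rng(2)
noise = rng.standard_normal(4096)
event = noise + np.r_[np.zeros(2000), 5 * rng.standard_normal(96), np.zeros(2000)]
score = jaccard(binary_fingerprint(noise), binary_fingerprint(event))
```

In the published method the pairwise comparison is replaced by hashing the fingerprints, which is what makes the search over months of continuous data tractable.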

Select Publications

  • Erickson, B. A. and Day, S. M. (2015), Bimaterial Effects in an Earthquake Cycle Model using Rate-and-State Friction, submitted to Journal of Geophysical Research. SCEC Contribution 6043
  • Erickson, B. A. and E. M. Dunham (2014), An efficient numerical method for earthquake cycles in heterogeneous media: Alternating sub-basin and surface-rupturing events on faults crossing a sedimentary basin, Journal of Geophysical Research, 119(4), 3290-3316, doi:10.1002/2013JB010614. SCEC Contribution 1796
  • Isbiliroglu, Y., Taborda, R., and Bielak, J. (2015). Coupled soil-structure interaction effects of building clusters during earthquakes. Earthquake Spectra. Vol. 31, No. 1, 463-500, Feb 2015. SCEC Contribution 1676
  • Kozdon, J. E., and L. C. Wilcox (2014) Provably Stable, General Purpose Projection Operators For High-Order Finite Difference Methods, submitted.
  • Kozdon, J. E., and E. M. Dunham (2015) Adaptive mesh refinement for earthquake rupture simulations, in preparation for Geophys. Res. Lett. SCEC Contribution 6047
  • Meng, X. and Z. Peng (2014), Seismicity rate changes in the San Jacinto Fault Zone and the Salton Sea Geothermal Field following the 2010 Mw7.2 El Mayor-Cucapah Earthquake, Geophys. J. Int., 197(3), 1750-1762, doi: 10.1093/gji/ggu085. SCEC Contribution 1814
  • Pelties, C., A. A. Gabriel and J.-P. Ampuero (2014), Verification of an ADER-DG method for complex dynamic rupture problems, Geosci. Model Dev., 7, 847-866, doi:10.5194/gmd-7-847-2014. SCEC Contribution 6046
  • Poyraz, E., Xu, H. and Cui, Y., Application-specific I/O Optimizations on Petascale Supercomputers, Proceedings of International Conference on Computational Science, Elsevier, Vol. 29, 910-923, Cairns, June 10-12, 2014. SCEC Contribution 2036
  • Roten, D., K.B. Olsen, S.M. Day, Y. Cui and D. Faeh, Expected seismic shaking in Los Angeles reduced by San Andreas fault zone plasticity, Geophysical Research Letters, 41, No. 8, 2769-2777, doi:10.1002/2014GL059411, 2014. SCEC Contribution 2102
  • Small, P., R. Taborda, J. Bielak & T. Jordan (2014), GPU Acceleration of Hercules, SCEC Annual Meeting, Palm Springs, Sept 6-10.
  • Taborda, R. and Bielak, J. (2014). Ground-Motion Simulation and Validation of the 2008 Chino Hills, California, Earthquake Using Different Velocity Models. Bulletin of the Seismological Society of America. Submitted for publication. SCEC Contribution 6048
  • Trugman, D. T. and E. M. Dunham (2014), A 2D pseudo-dynamic rupture model generator for earthquakes on geometrically complex faults, Bulletin of the Seismological Society of America, 104(1), 95-112, doi:10.1785/0120130138. SCEC Contribution 1763
  • Withers, K., K. Olsen & S. Day (2014). High-complexity Deterministic Q(f) Simulation of the 1994 Northridge Mw 6.7 Earthquake, SCEC Annual meeting, Palm Springs, Sept 6-10. SCEC Contribution 6049
  • Wang, F. & T. Jordan (2014), Comparison of Probabilistic Seismic Hazards Models Using Averaging Based Factorization, Bulletin of the Seismological Society of America, 104(3), 1230-1257, June 2014. SCEC Contribution 1807
  • Yoon, C. E., O. O’Reilly, K. Bergen, and G. C. Beroza (2015), Earthquake detection through computationally efficient similarity search, Science Advances, (submitted). SCEC Contribution 6016