
Two Decades of High-Performance Computing at SCEC

SCEC’s supercomputer allocations from the Department of Energy (DOE) and the National Science Foundation (NSF) over the last twenty years. While early allocations were large for the time, they are dwarfed by more recent awards.

SCEC began its journey into high performance computing in 2001, when Thomas Jordan, former Director of SCEC, received a $10 million grant from the National Science Foundation (NSF) to use advanced computing methods for earthquake science and prediction.

“Instead of continuing on as a science center, I proposed that we make SCEC into an earthquake system science center that uses advanced computation to model earthquake processes,” said Jordan.

The NSF grant allowed SCEC and affiliated scientists to collaborate on simulations as a community. Some seismologists had been using supercomputers at the time, but not in a systematic way. SCEC paved the way for the collaborative use of supercomputers that has greatly advanced earthquake research.

Earlier this year, the San Diego Supercomputer Center (SDSC) became one of the core institutions of SCEC, and the two organizations are now also collaborating as a new oneAPI Center of Excellence focused on earthquake research.

“SCEC has partnered with SDSC for nearly 20 years, creating increasingly realistic simulations of earthquake phenomena that provide new insights to scientists, guide risk reduction and motivate the public to prepare,” said SCEC Director Yehuda Ben-Zion in an interview with UC San Diego Today.

SCEC has also collaborated with organizations like the Argonne Leadership Computing Facility (ALCF), the Oak Ridge Leadership Computing Facility (OLCF), the National Center for Supercomputing Applications (NCSA), the Texas Advanced Computing Center (TACC), and others to analyze the potential effects of major earthquakes through virtual simulations and to predict what outcomes those events would have in the real world.

Christine Goulet, SCEC’s former executive director for applied science, said in an interview with Hewlett-Packard, “It’s not a prediction of an earthquake––it’s a predictive model of how an earthquake could impact a region in terms of shaking.”

Over the years, SCEC’s HPC work has been recognized through several awards, including Department of Energy Scientific Visualization awards in 2009 and 2011, recognition as a Gordon Bell Prize finalist in 2010, an IDC HPC Innovation Excellence Award in 2013, and an HPCwire Editors’ Choice Award for Best Use of HPC in Physical Sciences in 2021.

What is High Performance Computing (HPC)?

High performance computing is much like regular computing, except that it uses supercomputers or computer clusters, either of which can generally process large datasets much faster than an ordinary computer.

Supercomputers are systems of multiple interconnected computers or processors that can solve problems in parallel. Parallel processing divides a large problem into several smaller parts that are independent of each other. This way, each processor can run a part of the problem, which greatly speeds up the process.
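
To make the idea concrete, here is a minimal Python sketch of parallel processing (an illustration only, not SCEC's production simulation code): a large workload is split into independent chunks, each chunk runs on its own worker process, and the partial results are combined at the end.

```python
# Minimal illustration of parallel processing (not SCEC code): split an
# embarrassingly parallel workload across CPU cores using Python's
# standard-library multiprocessing module.
from multiprocessing import Pool

def simulate_chunk(chunk_id):
    """Stand-in for one independent piece of a large computation."""
    # Each chunk needs no data from the others, so chunks can run
    # at the same time on different processors.
    start = chunk_id * 1_000_000
    return sum(i * i for i in range(start, start + 1_000_000))

if __name__ == "__main__":
    with Pool(processes=8) as pool:      # 8 workers here; a supercomputer has many thousands
        partial_results = pool.map(simulate_chunk, range(32))
    print(sum(partial_results))          # combine the independent pieces
```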

“It allows us to tackle large problems in reasonable amounts of time,” said Kevin Milner, a computer scientist and geophysicist at SCEC. “Big problems that would take 10 years to solve on your individual machine could be done in a day on a really large computer.”

SCEC’s use of HPC

Large earthquakes, like the 1906 San Francisco earthquake, do not happen frequently enough to be observed directly, yet they cause billions of dollars in damage and put countless lives at risk. Paleoseismic studies and historical records preserve evidence of past earthquakes, but that evidence is limited, which makes it difficult to forecast future events.

“We’d like seismology to be an observational science, but particularly damaging earthquakes are so rare and we have so few observations, so modeling and simulations are valuable to help us build an understanding,” said Philip Maechling, Associate Director for Information Technology at SCEC.

“The next best thing we can do is run that experiment on a supercomputer where we can simulate lots of earthquakes that we think are likely to happen in the future,” said Scott Callaghan, a computer scientist at SCEC.

SCEC researchers developed the Rate-State Earthquake Simulator (RSQSim), a regional-scale earthquake simulator that uses the physics of fault systems to simulate earthquakes, accounting for the processes that regulate the time, place and extent of fault slip. These simulations are used to create earthquake rupture forecasts that give the probability of earthquakes in a given region or on a given fault.
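
As a much-simplified, hypothetical illustration of that last step (the fault names, catalog length, and event counts below are invented, not RSQSim output), turning a long simulated catalog into a rupture forecast amounts to counting events and converting their rates into probabilities:

```python
# Toy rupture-forecast arithmetic (illustrative only; values are invented,
# not RSQSim output).
import math

CATALOG_YEARS = 100_000          # length of a hypothetical simulated catalog
events_per_fault = {             # number of large ruptures seen in the catalog
    "Fault A": 520,
    "Fault B": 180,
}

for fault, count in events_per_fault.items():
    annual_rate = count / CATALOG_YEARS
    # Poisson assumption: P(at least one event in t years) = 1 - exp(-rate * t)
    p_30yr = 1.0 - math.exp(-annual_rate * 30)
    print(f"{fault}: rate {annual_rate:.2e}/yr, 30-year probability {p_30yr:.1%}")
```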

“The only way we can achieve these type of simulations is by using high-performance computers,” said Goulet in an interview with Dell Technologies. “Most of our computational research at SCEC is driven by lots of researchers working together, and we need a fast turnaround to be able to discuss the results and improve them.”

“We deal with processes that are happening inside the ground that we can’t observe, so we have to make a lot of inferences about what’s going on in the earth,” said Milner. “Setting up physics based simulations is a way for us to test hypotheses about how earthquakes nucleate and spread.”

Leader in supercomputer allocations

SCEC has been one of the most sought-after research centers in the natural sciences among supercomputing centers. These centers are always looking for research groups that can perform important and innovative work on their machines, and SCEC is an ideal candidate because of its highly skilled computational researchers and the socially significant earthquake research they perform.

Computing time on Department of Energy (DOE) and NSF supercomputers is awarded through a competitive process based on supercomputer allocation requests. SCEC writes allocation requests that describe its research goals and show that its research software will run efficiently on the available supercomputers.

To facilitate comparisons of computing time on different types of computers, supercomputer allocations are often described in Service Units (SUs). Service Units are a standardized measure of computing time that is approximately equal to one hour of computing time on one modern central processing unit (CPU).

In recent years, SCEC has been awarded, and used, hundreds of millions of SUs for its seismological research program, as shown in the chart at the top of this article. For example, DOE’s INCITE Program awarded SCEC 141 million supercomputing hours in 2017 for “Quantification of Uncertainty in Seismic Hazard Using Physics-Based Simulations.” 
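
For a rough sense of scale, and assuming one SU is roughly one CPU-hour as described above, that single 141-million-SU award works out to about 16,000 CPUs running nonstop for a full year:

```python
# Back-of-the-envelope scale of a 141-million-SU award,
# assuming 1 SU ≈ 1 CPU-hour.
allocation_su = 141_000_000
hours_per_year = 24 * 365                    # 8,760 hours
cpu_years = allocation_su / hours_per_year
print(f"{cpu_years:,.0f} CPU-years")         # ≈ 16,000 CPUs busy for a year
```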

HPC and Seismic Hazard Analysis

Kevin Milner and Scott Callaghan were part of the team that created a new framework for probabilistic seismic hazard analysis (PSHA) that uses simulations from RSQSim to produce 3D ground motion simulations. These ground motion models are used to create likely scenarios of the shaking a given region would experience during an earthquake.
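
A heavily simplified sketch of the hazard-curve arithmetic behind PSHA is shown below (the rupture rates, ground-motion values, and lognormal model are illustrative assumptions, not the CyberShake workflow, which uses full 3D ground motion simulations): each rupture contributes its annual rate times the probability that it produces shaking above a given level, and the summed annual rate is converted into a probability over a time window such as 50 years.

```python
# Toy probabilistic seismic hazard calculation (illustrative only).
import math

def p_exceed(level_g, median_g, sigma_ln):
    """P(ground motion > level) for an assumed lognormal ground-motion distribution."""
    z = (math.log(level_g) - math.log(median_g)) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Each rupture: (annual rate, median spectral acceleration in g at the site)
ruptures = [(1.0e-3, 0.35), (5.0e-4, 0.60), (2.0e-3, 0.15)]
SIGMA_LN = 0.6        # assumed ground-motion variability
level = 0.4           # shaking level of interest, in g

# Annual rate of exceeding `level`, summed over all ruptures
annual_rate = sum(rate * p_exceed(level, median, SIGMA_LN)
                  for rate, median in ruptures)

# Poisson assumption: probability of exceedance in a 50-year window
p_50yr = 1.0 - math.exp(-annual_rate * 50)
print(f"annual rate {annual_rate:.2e}, 50-year probability {p_50yr:.1%}")
```

Repeating this over a range of shaking levels produces a hazard curve; the level with, for example, a 2% chance of exceedance in 50 years is what maps like the CyberShake study shown below display.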

Seismic hazard calculations are used in California building codes to determine the strength of shaking that buildings need to withstand.

“Complex systems like gas lines, roads, bridges, electrical distribution systems and aqueducts––those are the ones we are especially concerned about,” said Goulet in her interview with Hewlett-Packard. “If [Southern California] is suddenly cut off from its water supply, that’s an event that would cause a lot of suffering, far beyond the shaking and displacement in the area of the fault itself.”

“We create seismic hazard maps which give you an idea of where shaking is likely to be the strongest during an earthquake and where things may be damaged by the shaking,” said Maechling.

 

CyberShake Study 21.12 Southern California hazard map for 2-second period and RotD50 ground motion, with a 2% chance of exceedance in 50 years. Warm colors represent areas of higher hazard.

 

Perhaps one of SCEC’s biggest achievements was bringing the seismology community together to run simulations and seismic hazard analyses, much as a supercomputer brings many processors together to work on a problem in parallel.

An early and well-known example of this collaboration was the 2008 ShakeOut Scenario, a USGS-led project in which hundreds of scientists simulated a large earthquake on the southern San Andreas Fault to visualize its consequences in Southern California. The scenario incorporated many of SCEC’s HPC simulations to calculate potential ground motion from the earthquake.

“That was an important exercise to compare the results from all their different codes and understand the assumptions that affect the outputs,” said Milner.

 

Snapshot of ground velocity 90 seconds after the rupture begins, according to three independent physics-based models (courtesy of G. Ely, R. Graves, J. Bielak, and K. Olsen; Southern California Earthquake Center)

 

About the Author

Shreya Agrawal is an earth scientist and a journalist focusing on climate change, environmental and social issues, and politics. She graduates from USC in 2023 with a dual bachelor's degree in Geological Sciences and English and a master's degree in journalism. She hopes to better communicate science to the public and bridge gaps in science communication.