The aim of this project is to develop new spectral computational methods for fully inertial, continuum elastodynamic rupture analyses. The methods are needed for studies of earthquake nucleation, rupture mode, event sequences, and population statistics in ways that can resolve current debates in earthquake source theory. They are employed for some of the studies reported in the companion project, Elastodynamic simulations of rupture propagation and earthquake sequences along complex fault systems. The focus here is on the new computational methodology.
(1) Algorithm developments in the procedure for calculating the elastodynamic response of planar fault zones subjected to slow loading processes of long duration: Our aim is to carry out fully elastodynamic treatments of earthquake sequences, not only for the dynamic propagation of rupture but also for the vastly longer loading processes leading up to instability, so that nucleation is properly treated and no artifacts are introduced in the transition from static to dynamic calculation procedures. Rice and Ben-Zion (PNAS, 1996) and Ben-Zion and Rice (JGR, 1997) have shown how such an approach can be devised to allow the dynamic treatment of a series of earthquakes, each taking a few seconds, during the simulation of hundreds to a thousand years of loading of a depth-variable fault model discretized into 64 to 128 elements. Our aim has been to improve the algorithms vastly so that, with the use of fast parallel computers (like the CM-5 or SGI Origin 2000), we could address similar problems for far larger 2D problems and for 3D problems.
Much progress towards this goal has been achieved. In our simulations we represent failure as the development of a displacement discontinuity along a plane separating homogeneous elastic half-spaces and use a spectral representation of the relation between stress and displacement on the rupture plane, exactly solving the equations of elastodynamics in the adjoining solids. The contribution of prior displacement to stress in such a representation can be separated into a static elastic term and a dynamic term accounting for the wave-mediated components of stress transfer. The dynamic term takes the form of a sum of time convolutions for the various spectral (Fourier) modes. To make long-time simulations possible, truncation of the convolution integrals has been used. Two time domains have been introduced into the procedure: (1) a long, open-ended "evolution" domain in which the field values are calculated using variable time steps governed by the constitutive laws, and (2) a short, fixed-size "elastodynamic" time window in which the dynamic effects are calculated using constant "elastodynamic" time stepping dictated by scaling requirements and the wave traverse time for a cell. Variable time stepping in the evolution time domain allows us to capture the details of rapid episodes by taking tiny time steps during failure, as well as to span the long periods between failure episodes by taking large time steps there.
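As a schematic illustration of this split (a minimal Python sketch, not the project's actual code; all names, and the placeholder coefficient, are ours), the stress update for one Fourier mode combines the static term with the truncated convolution over the stored velocity history:

```python
import numpy as np

def modal_stress(delta_k, Vk_history, K, static_coeff):
    """Schematic split of the stress-transfer functional for one Fourier
    mode: a static elastic term set by the current slip plus a truncated
    time convolution over the stored velocity history.
    Vk_history holds V_{n-Tw}, ..., V_{n-1}; K holds K_1, ..., K_Tw."""
    static_term = -static_coeff * delta_k       # static elastic transfer
    dynamic_term = np.dot(K, Vk_history[::-1])  # sum_i K_i * V_{n-i}
    return static_term + dynamic_term
```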
We have learned that for the variability of the evolution time stepping to be efficient, in the sense that evolution time steps in the essentially static phases of the calculations can be chosen to be vastly (up to $10^{10}$ times) longer than elastodynamic time steps or the times for waves to traverse elements, the constitutive law has to include an instantaneous positive dependence on slip velocity, just as the lab-derived rate- and state-dependent friction laws do. Without the instantaneous dependence, which happily is present for real slipping surfaces, the stability of the algorithm requires all evolution time steps to be of the order of elastodynamic time steps, making processes of long duration impossible to simulate with the current procedure.
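For reference, a minimal sketch of the Dieterich-Ruina rate- and state-dependent strength referred to here (parameter values are illustrative only, not those used in the project); the $a\,\ln(V/V_0)$ term supplies the instantaneous positive ("direct") velocity dependence that permits the large evolution time steps:

```python
import numpy as np

def rs_strength(V, theta, sigma, a=0.015, b=0.019, f0=0.6, V0=1e-6, L=1e-4):
    """Dieterich-Ruina rate- and state-dependent frictional strength:
    tau = sigma * (f0 + a*ln(V/V0) + b*ln(V0*theta/L)).
    The a-term responds instantaneously to the slip velocity V; the b-term
    evolves through the state variable theta over the slip distance L."""
    return sigma * (f0 + a * np.log(V / V0) + b * np.log(V0 * theta / L))
```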
Even with the truncation procedure, calculating the dynamic term contribution is very challenging computationally (it may take more than 99% of the CPU time) and has to be done as efficiently as possible to allow for simulations with physically justifiable values of parameters. Utilization of two time domains brings the problem of mapping between them, which is crucial for the performance of the procedure, since any mapping scheme requires CPU time proportional to $N_{elem} N_w$, where $N_{elem}$ is the number of elements in the space discretization and $N_w$ is the number of elastodynamic time steps in the convolution time window. The straightforward searching-and-interpolating scheme at each evolution time step makes the mapping much more costly (ten times more in one of our examples) than the convolution itself in terms of computational time, and such mapping was the major bottleneck in the implementation of the methodology by Rice and Ben-Zion (PNAS, 1996) and Ben-Zion and Rice (JGR, 1997). Fortunately, we have arrived at a very efficient solution to the mapping problem by taking all the time steps, including the elastodynamic time step, to be integer multiples of the minimum possible value $\Delta t_{min}$ of the evolution time stepping. We store the relevant field values on a uniform grid of spacing $\Delta t_{min}$, linearly interpolating values between the ones actually occurring in the evolutionary set, so that the interpolation has to be done just once. This approach reduces the mapping to simply taking field values for the convolution from an equally spaced subset of the array of stored values, making the required CPU time many times smaller. It also greatly simplifies the updating of the field values array.
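A minimal sketch of this bookkeeping (illustrative names throughout; `dt_min` stands for $\Delta t_{min}$, and the elastodynamic step is assumed to be `m * dt_min`): interpolation onto the uniform grid happens once per evolution step, and the mapping then reduces to a strided slice:

```python
import numpy as np

def store_on_uniform_grid(grid, t_prev, v_prev, t_now, v_now, dt_min):
    """Linearly interpolate a field value between two evolutionary samples
    and store it on a uniform grid of spacing dt_min, so that the
    interpolation is done exactly once per evolution step."""
    k0 = int(round(t_prev / dt_min))
    k1 = int(round(t_now / dt_min))
    for k in range(k0 + 1, k1 + 1):
        w = (k * dt_min - t_prev) / (t_now - t_prev)
        grid[k] = (1.0 - w) * v_prev + w * v_now

def convolution_window(grid, k_now, m, Nw):
    """The mapping itself: the Nw values entering the convolution are just
    an equally spaced subset of the stored array (elastodynamic step =
    m * dt_min), with no searching or re-interpolation. Assumes the full
    window of past values is available in `grid`."""
    return grid[k_now - m * (Nw - 1) : k_now + 1 : m]
```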
A much more efficient scheme has been developed for the truncation itself, in which the "elastodynamic" time window no longer has to be the same for every Fourier mode. In this new scheme, the size of the "elastodynamic" time window can be specified for each Fourier mode individually, which allows great flexibility in truncating the convolutions. This approach turns out to be extremely advantageous for the physical problems under consideration, in which, due to the dependence of the kernel argument on the mode number, much smaller time windows can be taken to compute the contribution of higher Fourier modes, so that much less field data has to be saved and convolved at each time step. The special management of field data computation and storage that we developed complicates the algorithm, but results in enormous savings in the CPU time and memory required. In one of the examples studied, switching from the previously adopted scheme to the new one resulted in storing and convolving arrays of size (65536, 37), rather than (65536, 65536) as was needed previously.
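A sketch of the per-mode truncation (the 1/k window-length rule below is an illustrative choice, not the project's actual criterion): since the kernel argument grows with the mode number, higher modes can keep proportionally shorter histories, and storage becomes a ragged, mode-indexed structure instead of one rectangular array:

```python
import numpy as np

def mode_windows(n_modes, Nw_max, Nw_min=8):
    """Per-mode elastodynamic window lengths: mode k keeps a history of
    roughly Nw_max/k steps (floored at Nw_min) instead of every mode
    keeping the full Nw_max steps."""
    k = np.arange(1, n_modes + 1)
    return np.maximum(Nw_min, Nw_max // k).astype(int)

# Ragged storage: one history buffer per mode, sized to that mode's window.
windows = mode_windows(n_modes=65536, Nw_max=65536)
history = [np.zeros(w) for w in windows]
print(sum(windows) / (65536.0 * 65536.0))  # fraction of rectangular storage
```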
The algorithm advances described have been successfully implemented in studies of model earthquake sequences in the 2D crustal plane model discussed in the report on a companion project, Elastodynamic simulations of rupture propagation and earthquake sequences along complex fault systems (Figure 1). The improved technique will allow us to meet the computational challenges of investigating event populations in 3D problems, as well as of introducing a much broader range of 2D problem features, such as smaller slip distances of the constitutive law, longer fault lengths, enhanced velocity weakening, and several weakening mechanisms, the consideration of which was not computationally feasible with the previously employed scheme. The manuscript on the methodology is being completely redrafted to incorporate the current understanding of the procedure's applicability and efficient implementation.
(2) A new algorithm for the numerical evaluation of the convolution integral: In order to further decrease the computer time spent in the evaluation of the time convolution, a new algorithm has been developed (Cochard, to be submitted to BSSA, 1998), which can be used in conjunction with the methodology explained above. At each time step, and for each Fourier mode, one must evaluate the expression
$$F_n = \sum_{i=1}^{T_w} K_i\, V_{n-i}\,,$$

where $T_w$ is the length of the truncated time window, $V$ is the past velocity history, and $K$ is the relevant elastodynamic kernel. This can be visualized in Figure 2 for the case $T_w = 16$; in this table, each horizontal line represents one time step, starting from time step 38, and the numerical value of $F_n$ for each time step is given as the summation of all terms on the corresponding line.

Figure 2: A new algorithm for the numerical evaluation of the convolution integral (see text).
Hence, the total number of operations for a single Fourier mode is of the order of $N\,T_w$, where $N$ is the total number of time steps.
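A direct implementation of this sliding sum might look as follows (a minimal Python sketch with illustrative names; `K` holds $K_1, \dots, K_{T_w}$ for one Fourier mode and `V` its velocity history):

```python
import numpy as np

def past_stress_direct(K, V, n):
    """F_n = sum_{i=1}^{Tw} K_i * V_{n-i}, evaluated directly in O(Tw)
    operations per step; K[i-1] stores K_i, and velocities before
    step 0 are taken as zero."""
    Tw = len(K)
    i = np.arange(1, min(Tw, n) + 1)   # only past steps that actually exist
    return np.dot(K[i - 1], V[n - i])
```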
Let us first note that $F_n$ at some time step $n$ is one element of the convolution of the two arrays $K$ and $V$. Hence, at first sight, computing $F_n$ by FFT methods using the convolution theorem would not be advantageous, since it would require about $T_w \log_2 T_w$ operations (and would provide all the other elements of the convolution, which are not needed), instead of only about $T_w$ operations.
A more careful analysis shows, however, that the convolution theorem can nevertheless be used advantageously, since a vast number of these apparently useless convolution elements can be used to compute the values of the necessary elements for the next time steps. For our illustrative example, the computation of the full convolution (done using FFT methods) of $K$ and $V$ at time step 38 provides not only $F_{38}$, but also all the partial summations that are located above the marked diagonal-like line in the table of Figure 2. Thus, for the next time step, the evaluation of $F_{39}$ only requires adding the new term $K_1 V_{38}$ to the partial summation of all the terms to its left, which has just been evaluated, and so on.
In fact, the process can be used recursively. That is, after computing explicitly a few values of $F_n$ for the next time steps, one can compute another convolution by FFT methods, which provides additional "sections" of future values of $F_n$. In the end, in Figure 2, the only parts that have to be computed explicitly are located inside the four three-by-three triangles that appear on the right part of the table. All the other terms are thus computed by FFT methods, each full convolution providing all the partial summations located in the other areas of the table.
A detailed study shows that the total number of operations involved is of the order of $N \log_2 T_w$ (instead of $N\,T_w$). In actual cases, the computational gain can be a factor of 30 or more.
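The sketch below shows a simplified, single-level variant of this idea (our own illustration, not the actual recursive algorithm): each time a block of B velocities is completed, one FFT convolution with the kernel banks that block's contribution to all affected future steps, and only the few velocities of the still-incomplete current block are summed directly; `V_update(n, F_n)` is a stand-in for the constitutive update at step n:

```python
import numpy as np
from numpy.fft import rfft, irfft

def run_blocked(K, V_update, N, B=16):
    """Causal, single-level block-FFT evaluation of
    F_n = sum_{i=1}^{Tw} K_i * V_{n-i} (K[i-1] stores K_i)."""
    Tw = len(K)
    V = np.zeros(N)
    F = np.zeros(N + Tw + B)        # banked partial sums for future steps
    for n in range(N):
        b0 = (n // B) * B           # start of the current, incomplete block
        j = np.arange(max(b0, n - Tw), n)        # recent past, not yet banked
        Fn = F[n] + np.dot(K[n - j - 1], V[j])   # banked part + direct part
        F[n] = Fn
        V[n] = V_update(n, Fn)
        if n + 1 - b0 == B:         # block complete: bank it with one FFT
            L = B + Tw - 1
            C = irfft(rfft(K, L) * rfft(V[b0:b0 + B], L), L)
            F[b0 + B : b0 + B + Tw] += C[B - 1:]  # effect on future steps
    return V
```

In this single-level form, the amortized cost per step is roughly $B + (T_w/B)\log_2 T_w$ operations; applying the same idea recursively, as described above, drives the count down to the logarithmic estimate just quoted.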
(3) Spectral method without replication: a new way of evaluating the kernels: Cochard and Rice (J. Mech. Phys. Solids, 45, p. 1393, 1997) have developed a reformulation of the spectral methodology that rigorously removes a major drawback of the method as initially introduced by Perrin et al. and by Geubelle and Rice, namely the infinite and periodic replication of the rupture events on the fracture plane, which required enlarging the computational domain in order to delay the effects of the replications. The two formulations are identical in structure but, within the new formulation, the expressions of the convolution kernels (which are involved in the computation of the convolution integrals and must be derived only once) are obtained through the numerical evaluation of an integral of highly oscillatory and badly behaved functions, whereas they are expressed in terms of Bessel functions for the classical spectral method. For realistically large simulations, these numerical evaluations turned out to be challenging and, until recently, we had been able to develop an efficient way of computing these integrals only for the mode III case.
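For reference, the classical method's mode III kernel has the simple Bessel-function form $C(T) = J_1(T)/T$ (Perrin et al.; Geubelle and Rice); a minimal evaluation sketch, assuming SciPy's `j1`:

```python
import numpy as np
from scipy.special import j1

def mode_III_kernel(T):
    """Classical (replicated) spectral-method convolution kernel for
    mode III, C(T) = J1(T)/T, with the T -> 0 limit of 1/2 (since
    J1(T) ~ T/2 for small T) handled explicitly."""
    T = np.atleast_1d(np.asarray(T, dtype=float))
    out = np.full_like(T, 0.5)
    nz = T != 0.0
    out[nz] = j1(T[nz]) / T[nz]
    return out
```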
Recently, however, by revisiting an as yet unexploited part of our own work (Cochard and Rice, 1997), we found what seems to be a very efficient method (so far tested only for a few cases) to compute these integrals. We simply use alternative expressions for the integrals, still involving highly oscillatory functions, but ones that now oscillate in a conventional way, allowing treatment with standard numerical procedures.
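Integrands that oscillate in this conventional (trigonometric) way can be handled by standard weighted quadrature, e.g. QUADPACK's cosine-weight rule as exposed by SciPy; the smooth factor below is a placeholder, not the actual kernel expression:

```python
from scipy.integrate import quad

# Integrates f(x) * cos(w*x) with a rule designed for oscillatory weights,
# which stays robust even when w is large.
value, error = quad(lambda x: 1.0 / (1.0 + x * x), 0.0, 50.0,
                    weight='cos', wvar=40.0)
print(value, error)
```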
Publications Fully or Partially Supported by 1997 SCEC Studies
Papers:
Ben-Zion, Y. and J. R. Rice, Dynamic simulations of slip on a smooth fault in an elastic solid, J. Geophys. Res., vol. 102, pp. 17,771-17,784, 1997.
Cochard, A., and J. R. Rice, A spectral method for numerical elastodynamic fracture analysis without spatial replication of the rupture event, J. Mech. Phys. Solids, vol. 45, pp. 1393-1418, 1997.
Cochard, A., A new, faster numerical method to evaluate the stress contribution of past slip history on a fault plane, involved in earthquake simulations, to be submitted to Bull. Seismol. Soc. Am., 1998.
Lapusta, N., G. Zheng, Y. Ben-Zion, J. Morrissey and J. R. Rice, Elastodynamic analysis for slow loading processes with episodes of rapid rupture on rate and state dependent faults, to be submitted to Bull. Seismol. Soc. Am., 1998.
Zheng, G., Dynamics of the earthquake source: An investigation of conditions under which velocity-weakening friction allows a self-healing versus crack-like mode of rupture, Ph.D. thesis, Harvard University, 1997.
Abstracts:
Cochard, A., and J. R. Rice, Mode of rupture, self-healing slip pulse versus enlarging shear crack, for a velocity-weakening fault in a 3D solid, EOS Trans. Amer. Geophys. Union, vol. 78, Fall Meeting Supplement, in press, 1997.
Lapusta, N., and J. R. Rice, Elastodynamic simulations of earthquake sequences in a 2D model of a faulted plate coupled to a moving substrate, EOS Trans. Amer. Geophys. Union, vol. 78, Fall Meeting Supplement, in press, 1997.
Rice, J. R., Slip pulse at low driving stress along a frictional fault between dissimilar media, EOS Trans. Amer. Geophys. Union, vol. 78, Fall Meeting Supplement, in press, 1997.