S51C-06:
Bayesian estimation of slip distribution based on von Karman autocorrelation

Friday, 19 December 2014: 9:15 AM
Andrew J Hooper, University of Leeds, COMET, School of Earth and Environment, Leeds, United Kingdom and David P Bekaert, University of Leeds, Leeds, LS2, United Kingdom
Abstract:
Geodetic observations from techniques such as InSAR and GNSS are routinely used to invert for earthquake fault slip distributions. However, regularizing these inversions usually requires additional, somewhat arbitrary assumptions about the smoothness of the slip distribution. In previous work we explored a new approach to constraining the slip distribution, based on a random-vector model following a von Karman autocorrelation function, which has empirical support from stochastic analysis of seismic finite-source slip inversions.
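
As an illustration of the constraint itself, the following minimal sketch (Python with NumPy/SciPy; the function name and normalisation convention are ours, not taken from any particular package) evaluates the normalised von Karman autocorrelation for a given correlation length a and Hurst number H:

    import numpy as np
    from scipy.special import kv, gamma

    def von_karman_acf(r, a, H):
        """Normalised von Karman autocorrelation at lag r, for correlation
        length a and Hurst number H, scaled so that C(0) = 1."""
        r = np.atleast_1d(np.asarray(r, dtype=float))
        c = np.ones_like(r)                      # limit value C(0) = 1 at zero lag
        nz = r > 0
        x = r[nz] / a
        c[nz] = (x ** H) * kv(H, x) / (2.0 ** (H - 1.0) * gamma(H))
        return c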

We implemented the random-vector constraint in a Bayesian fashion and used a Markov chain Monte Carlo (MCMC) algorithm to derive the joint posterior probability distribution for the slip on each patch. The von Karman function depends on two parameters: correlation length and Hurst number (related to fractal dimension). In our inversions we used the empirically derived maximum likelihood values for these two parameters, which differ between the along-strike and down-dip directions and with fault mechanism.
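
As a sketch of how such a constraint can enter the sampler (reusing von_karman_acf above; the variable names and the Cholesky-based evaluation are illustrative assumptions, not a description of our exact implementation), a prior covariance between patches can be built with separate along-strike and down-dip correlation lengths, and the random-vector constraint then acts as a zero-mean Gaussian log-prior on the slip vector:

    def von_karman_cov(xs, zs, ax, az, H, sigma2):
        """Prior covariance between patch centres (arrays xs along strike,
        zs down dip), with correlation lengths ax, az and marginal variance sigma2."""
        dx = xs[:, None] - xs[None, :]
        dz = zs[:, None] - zs[None, :]
        r = np.sqrt((dx / ax) ** 2 + (dz / az) ** 2)   # dimensionless anisotropic lag
        return sigma2 * von_karman_acf(r.ravel(), 1.0, H).reshape(r.shape)

    def log_prior_slip(s, C):
        """Zero-mean Gaussian (random-vector) log-prior for the slip vector s."""
        L = np.linalg.cholesky(C + 1e-10 * np.eye(len(s)))   # small jitter for stability
        y = np.linalg.solve(L, s)
        return -0.5 * y @ y - np.log(np.diag(L)).sum()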

However, the inversion results depend strongly on the chosen values of correlation length and Hurst number, and the empirically derived histograms show that there is in fact considerable variation between earthquakes with the same mechanism. In our extended approach we therefore treat these two parameters as hyperparameters, with their prior probability distributions constrained by the empirical histograms. Their values are thus also allowed to vary within our Bayesian inversion scheme. In this way, the uncertainty in the parameters that define the autocorrelation function is also propagated into the posterior probability distribution for the slipping patches.
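
One possible way to assemble the resulting joint posterior is sketched below (the data vector d, Green's function matrix G, inverse data covariance Cd_inv, marginal slip variance sigma2 and the histogram-based hyperprior are placeholders for illustration, not our actual code):

    def log_posterior(theta, d, G, Cd_inv, log_hyperprior, xs, zs, sigma2):
        """Joint log-posterior for the slip values and the von Karman
        hyperparameters; theta = [s_1, ..., s_n, ax, az, H]."""
        s, (ax, az, H) = theta[:-3], theta[-3:]
        if min(ax, az, H) <= 0:
            return -np.inf                            # reject non-physical hyperparameters
        C = von_karman_cov(xs, zs, ax, az, H, sigma2)
        res = d - G @ s                               # data residual
        return (-0.5 * res @ Cd_inv @ res             # Gaussian data likelihood
                + log_prior_slip(s, C)                # von Karman slip prior
                + log_hyperprior(ax, az, H))          # empirical histogram hyperprior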

To ensure that our MCMC algorithm converges rapidly, we have implemented a variation on the usual MCMC approach, in which the maximum step size for each model parameter is updated regularly during an initial tuning phase, until near-optimal values are reached.
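
One simple way to realise such adaptation (a sketch only; the target acceptance rate and scaling factor shown are illustrative choices) is to rescale each parameter's maximum step size during burn-in and then freeze it:

    def tune_step_sizes(step, n_accept, n_propose, target=0.25, factor=1.1):
        """Rescale per-parameter maximum step sizes towards a target acceptance
        rate; adaptation is applied only during burn-in and then frozen, so the
        final chain still satisfies detailed balance."""
        rate = n_accept / np.maximum(n_propose, 1)
        return np.where(rate > target, step * factor, step / factor)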

In comparisons between our new approach and a more standard smoothing-based approach, we find that the maximum a posteriori solutions sometimes differ markedly. Even more significant, however, is the difference in uncertainty: our new method yields much tighter constraints on the solution.