S53D-4548: Can We Detect Clustered Megaquakes?

Friday, 19 December 2014
Eric G. Daub, Center for Earthquake Research and Information, Memphis, TN, United States; Daniel Trugman, Los Alamos National Laboratory, Los Alamos, NM, United States; and Paul A. Johnson, Los Alamos National Laboratory, Earth and Environmental Sciences, Los Alamos, NM, United States
Abstract:
We study the ability of statistical tests to identify nonrandom features of synthetic global earthquake records. We construct four synthetic catalogs containing different types of clustering, each with 10,000 events over 100 years with magnitudes above M = 6. We apply a suite of statistical tests drawn from the literature to each catalog to evaluate each test's ability to identify the catalog as nonrandom. Our results show that detection ability depends on the quantity of data, the type of clustering in the catalog, and the specific signal used in the statistical test. Catalogs that exhibit stronger time variation in the seismicity rate are generally easier to identify as nonrandom for a given background rate. We also show that, in some cases, a test that fails to identify a catalog as nonrandom can still constrain the range of possible clustering strengths of a given type. These results aid in interpreting statistical tests applied to the global earthquake record since 1900.
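
The abstract does not specify which clustering models or statistical tests were used, so the following is only a minimal illustrative sketch of the general approach: generate one random (Poissonian) and one clustered synthetic catalog of ~10,000 events over 100 years, then test the interevent times against an exponential distribution with a Kolmogorov-Smirnov test. The aftershock-style clustering model, the 20% cluster fraction, the ~10-day cluster time scale, and the choice of KS test are all assumptions made for illustration, not the authors' method.

```python
# Illustrative sketch (assumptions noted above, not the authors' code).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_events = 10_000
duration_yr = 100.0

# Poissonian (random) catalog: event times uniform over the catalog duration.
t_random = np.sort(rng.uniform(0.0, duration_yr, n_events))

# Clustered catalog (assumed model): 20% of events are "aftershocks" placed
# within ~10 days (0.03 yr) of randomly chosen mainshocks.
n_aftershocks = n_events // 5
mainshocks = rng.uniform(0.0, duration_yr, n_events - n_aftershocks)
parents = rng.choice(mainshocks, n_aftershocks)
aftershocks = parents + rng.exponential(0.03, n_aftershocks)
t_clustered = np.sort(np.concatenate([mainshocks, aftershocks]))
t_clustered = t_clustered[t_clustered <= duration_yr]

def ks_test_interevent(times):
    """KS test of interevent times against an exponential distribution with
    the catalog's mean rate; small p-values suggest a nonrandom catalog."""
    dt = np.diff(times)
    return stats.kstest(dt, "expon", args=(0.0, dt.mean()))

print("random catalog:   ", ks_test_interevent(t_random))
print("clustered catalog:", ks_test_interevent(t_clustered))
```

In this sketch the random catalog should yield a large p-value (consistent with Poissonian behavior) while the clustered catalog should yield a very small one; how reliably such a test flags clustering depends, as the abstract notes, on the amount of data, the clustering type, and the signal the test examines.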