S11G-04:
Global Adjoint Tomography: Combining Big Data with HPC Simulations

Monday, 15 December 2014: 8:45 AM
Ebru Bozdag1, Matthieu Philippe Lefebvre2, Wenjie Lei2, Daniel B Peter3, James A Smith2, Dimitri Komatitsch4 and Jeroen Tromp2,5, (1)University of Nice Sophia Antipolis, Geoazur, Valbonne, France, (2)Princeton University, Geosciences, Princeton, NJ, United States, (3)ETH Zurich, Institute of Geophysics, Zurich, Switzerland, (4)CNRS/University of Aix-Marseille, Laboratory of Mechanics and Acoustics, Marseille, France, (5)Princeton University, Applied and Computational Mathematics, Princeton, NJ, United States
Abstract:
Steady increases in data quality and in the number of global seismographic stations have substantially grown the amount of data available for constructing Earth models. Meanwhile, developments in the theory of wave propagation, numerical methods, and HPC systems have enabled unprecedented simulations of seismic wave propagation in realistic 3D Earth models, which allow more information to be extracted from data, ultimately culminating in the use of entire three-component seismograms.
Our aim is to take adjoint tomography a step further and image the entire planet, one of the extreme cases in seismology owing to its intense computational requirements and the vast amount of high-quality seismic data that can potentially be assimilated in inversions. We have started low-resolution (T > 27 s, soon T > 17 s) global inversions with 253 earthquakes for a transversely isotropic crust and mantle model on Oak Ridge National Laboratory’s Cray XK7 "Titan" system. Recent improvements in our 3D solvers, such as the GPU version of the SPECFEM3D_GLOBE package, will allow us to perform higher-resolution (T > 9 s) and longer-duration (~180 min) simulations, taking advantage of high-frequency body waves and major-arc surface waves to improve the imbalanced ray coverage that results from the uneven distribution of sources and receivers on the globe. Our initial results after 10 iterations already reveal several prominent features reported in high-resolution continental studies, such as major slabs (Hellenic, Japan, Bismarck, Sandwich, etc.) and enhanced plume structures (the Pacific superplume, the Hawaii hot spot, etc.). Our ultimate goal is to assimilate seismic data from more than 6,000 earthquakes within the magnitude range 5.5 ≤ Mw ≤ 7.0. To take full advantage of this data set on ORNL’s computational resources, we need a solid framework for managing big data sets during pre-processing (e.g., data requests and quality checks), gradient calculations, and post-processing (e.g., pre-conditioning and smoothing gradients); we address these bottlenecks in our global seismic workflow using ORNL’s ADIOS libraries. We will present our “first generation” model and discuss challenges and future directions in global seismology.
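The iterative workflow described above — forward simulation, misfit measurement, gradient calculation via an adjoint computation, then pre-conditioning/smoothing the gradient before updating the model — can be sketched schematically. The toy below is a hypothetical, greatly simplified stand-in: the "Earth model" is a small vector, the forward solver is a linear operator `G` (in the actual workflow this role is played by SPECFEM3D_GLOBE simulations), and the update is plain steepest descent; all function names and parameters here are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of an adjoint-tomography iteration loop (hypothetical,
# greatly simplified). A real global inversion replaces the matrix G with
# 3D spectral-element wave simulations and uses far more elaborate
# misfits, pre-conditioners, and optimization schemes.
import numpy as np

def forward(G, m):
    """Forward simulation: synthetic 'seismograms' for model m."""
    return G @ m

def misfit(d_syn, d_obs):
    """Least-squares waveform misfit between synthetics and data."""
    return 0.5 * np.sum((d_syn - d_obs) ** 2)

def adjoint_gradient(G, d_syn, d_obs):
    """Gradient of the misfit, obtained via the adjoint of the forward
    operator acting on the data residuals."""
    return G.T @ (d_syn - d_obs)

def smooth(g, w=0.25):
    """Three-point smoothing, standing in for the pre-conditioning and
    smoothing applied to gradients during post-processing."""
    gs = g.copy()
    gs[1:-1] = w * g[:-2] + (1 - 2 * w) * g[1:-1] + w * g[2:]
    return gs

def invert(G, d_obs, m0, step=0.1, n_iter=10):
    """Run n_iter steepest-descent model updates from starting model m0."""
    m = m0.copy()
    for _ in range(n_iter):
        d_syn = forward(G, m)                          # forward simulation
        g = smooth(adjoint_gradient(G, d_syn, d_obs))  # adjoint + smoothing
        m -= step * g                                  # model update
    return m
```

After 10 such iterations the misfit for a synthetic test should drop well below its starting value, mirroring the role of the 10 iterations mentioned in the abstract (the toy says nothing, of course, about the geological features a real inversion recovers).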