Improving the RTM imaging workflow with lossy compression on modern architectures

Monday, 15 December 2014
Eileen Rose Martin, Stanford University, Stanford, CA, United States
Reverse time migration (RTM) is one of the simplest seismic imaging algorithms to describe: a sum of correlations between forward-propagated source wavefields and backward-propagated receiver wavefields. To calculate these correlations, we store many wavefield snapshots during propagation, sometimes as often as once every four time steps. This generates so much data that the snapshots must be written to disk. On modern architectures the wave propagation itself runs quickly on a GPU, but the interface between the GPU and its host CPU, together with the much slower interface between the CPU and disk, becomes a bottleneck in the overall imaging algorithm.
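The imaging condition described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the production GPU code: the array shapes, snapshot counts, and function names are hypothetical, and the image is simply the zero-lag cross-correlation of the two wavefields summed over the stored snapshots.

```python
import numpy as np

def rtm_image(source_snapshots, receiver_snapshots):
    """Zero-lag cross-correlation imaging condition:
    I(x) = sum over stored times t of S(x, t) * R(x, t)."""
    image = np.zeros_like(source_snapshots[0])
    for s, r in zip(source_snapshots, receiver_snapshots):
        image += s * r  # pointwise product, accumulated over snapshots
    return image

# Toy usage: 10 stored snapshots of a 64 x 64 wavefield (random stand-ins
# for the forward- and backward-propagated wavefields).
rng = np.random.default_rng(0)
src = [rng.standard_normal((64, 64)) for _ in range(10)]
rcv = [rng.standard_normal((64, 64)) for _ in range(10)]
img = rtm_image(src, rcv)
```

In the real workflow each snapshot in `src` would be written to disk during forward propagation and read back during the backward pass, which is exactly the I/O traffic the compression below targets.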

We show that the bottleneck when writing to disk can be alleviated by compressing wavefield snapshots on the CPU before writing them out. Higher compression ratios can be achieved with lossy compression, so we must balance the information lost against the reduced data-movement time. We compare three compression strategies: fpzip, a general-purpose floating-point compressor; the wavelets used in the JPEG 2000 standard; and curvelets, a wavelet-like transform designed to fit contours such as wave fronts. Our experiments verify that the errors introduced by compressing wavefields by a significant factor as they are written to disk can be tolerated, presumably because RTM sums many correlated wavefields whose compression-induced errors are uncorrelated. Our experiments also show that lossy compression before writing to disk can reduce the time to produce an image with the RTM algorithm. These results are architecture dependent, so we further provide tools to help other researchers predict the compression timing needs of the RTM workflow on other computers.
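To make the trade-off concrete, the sketch below shows one simple form of lossy snapshot compression: quantizing float64 samples to 16-bit integers and then applying lossless zlib. This is a hedged stand-in for the codecs compared in the talk (it is not fpzip, JPEG 2000, or a curvelet transform), but it exhibits the same structure: a bounded per-sample error in exchange for fewer bytes moved to disk.

```python
import zlib
import numpy as np

LEVELS = 2**15 - 1  # quantization levels for int16

def compress_snapshot(snap):
    """Lossy step: scale to [-1, 1] and quantize to int16;
    lossless step: zlib on the quantized bytes."""
    scale = float(np.abs(snap).max()) or 1.0
    q = np.round(snap / scale * LEVELS).astype(np.int16)
    return zlib.compress(q.tobytes()), scale, snap.shape

def decompress_snapshot(blob, scale, shape):
    """Invert the lossless step exactly; the quantization error remains."""
    q = np.frombuffer(zlib.decompress(blob), dtype=np.int16).reshape(shape)
    return q.astype(np.float64) / LEVELS * scale

# Toy usage on one 128 x 128 snapshot.
rng = np.random.default_rng(1)
snap = rng.standard_normal((128, 128))
blob, scale, shape = compress_snapshot(snap)
recon = decompress_snapshot(blob, scale, shape)

ratio = snap.nbytes / len(blob)        # bytes saved vs. raw float64
err = float(np.abs(recon - snap).max())  # bounded quantization error
```

The maximum per-sample error is about half a quantization step (roughly `scale / LEVELS / 2`), and because these errors are effectively uncorrelated across snapshots, they tend to average down when RTM sums the correlations, consistent with the tolerance observed in the experiments.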