A33A-0122
Enabling Efficient Climate Science Workflows in High Performance Computing Environments

Wednesday, 16 December 2015
Poster Hall (Moscone South)
Hari Krishnan1, Surendra Byna1, Michael F Wehner1, Junmin Gu1, Travis Allen O'Brien1, Burlen Loring1, Dáithí A Stone1, William Collins1, Mr Prabhat1, Yunjie Liu1, Jeffrey N Johnson1 and Christopher J Paciorek2, (1)Lawrence Berkeley National Laboratory, Berkeley, CA, United States, (2)University of California, Berkeley, CA, United States
Abstract:
A typical climate science workflow involves a combination of data acquisition, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks presents a myriad of challenges when running in a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require considerable forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well-tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally serve results and tools to the greater scientific community.

As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project, we highlight a stack of tools our team utilizes and has developed to make large-scale simulation and analysis work routine. These tools assist in everything from generation and procurement of data (HTAR/Globus) to automated publication of results to portals such as the Earth System Grid Federation (ESGF), while executing everything in between in a scalable, task-parallel manner (MPI). We highlight the use and benefit of these tools through several climate science analysis use cases to which they have been applied.
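As a rough illustration of the task-parallel execution pattern mentioned above (a sketch only, not the actual CASCADE tooling), the following mpi4py example distributes per-file analysis across MPI ranks; the file paths and the analyze_file routine are hypothetical placeholders.

```python
# Hypothetical sketch of task-parallel analysis with MPI: each rank claims
# a subset of input files and processes them independently.
from mpi4py import MPI
import glob

def analyze_file(path):
    # Placeholder for a per-file analysis routine (e.g., computing a
    # statistic from one simulation output file).
    print(f"analyzing {path}")

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Hypothetical input location; in practice files might be staged from
# HPSS via HTAR or transferred with Globus.
files = sorted(glob.glob("/scratch/cascade/run01/*.nc"))

# Simple static round-robin decomposition: rank r handles files r, r+size, ...
for path in files[rank::size]:
    analyze_file(path)

comm.Barrier()
if rank == 0:
    print("all ranks finished; results could now be published, e.g., to ESGF")
```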