Evolving the Data Management Workflow of GLODAP

Kevin O'Brien, University of Washington Seattle Campus, JISAO, Seattle, WA, United States, Benjamin Pfeil, Bjerknes Climate Data Centre, University of Bergen, Bergen, Norway, Steve Jones, University of Exeter, Exeter, United Kingdom, Eugene F Burger, NOAA, Boulder, United States, Stephen C Diggs, Univ. of California San Diego, La Jolla, United States, Jonas F Henriksen, University of Bergen, Geophysical Institute, Bergen, Norway, Karl Matthew Smith, JISAO, Univ. of Washington, Seattle, WA, United States and Linus Kamb, UW/Cooperative Institute for Climate, Ocean, & Ecosystem Studies NOAA/PMEL, Seattle, United States
The Global Ocean Data Analysis Project (GLODAP) is a cooperative effort to coordinate global synthesis projects funded through various funding agencies. Cruises conducted as part of the World Ocean Circulation Experiment (WOCE), the Joint Global Ocean Flux Study (JGOFS), and the NOAA Ocean-Atmosphere Exchange Study (OACES) over the last five decades have created an oceanographic database of unparalleled quality and quantity. The central objective of this project is to generate a unified data set to help determine the global distributions of both natural and anthropogenic inorganic carbon, including radiocarbon. These measurements provide an important benchmark against which future observational studies will be compared. They also provide tools for the direct evaluation of numerical ocean carbon models.

However, as data volumes increase and key personnel in GLODAP data management retire, it is crucial to modernize the workflow that underpins the scientifically important GLODAP dataset. The time between the GLODAP v1 and v2 releases was 12 years. Recognizing that such a gap is not acceptable, the GLODAP data team, a collaboration between NOAA, the University of Bergen, and the CLIVAR and Carbon Hydrographic Data Office (CCHDO), is working to provide more frequent releases. To do this, GLODAP is looking to implement its own version of the successful SOCAT data automation workflow.

In this presentation, we will describe in detail how the GLODAP process will build on the SOCAT framework to provide data ingestion, verification, access, and archival in support of FAIR data principles. We will discuss our plan to provide a simple method for data and metadata submission to data assembly centers in a way that reduces the burden of data management on scientists. Metadata and data will be interactively and programmatically validated during the submission process and transformed into standards-compliant formats suitable for user access, integration into synthesis products, and archival.
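As a rough illustration of the kind of programmatic validation described above, the sketch below checks a hypothetical cruise metadata record for required fields and a well-formed expocode before it would enter the workflow. The field names, the required-field set, and the expocode rule are illustrative assumptions, not the actual GLODAP or SOCAT schema.

```python
# Minimal sketch of automated metadata validation at submission time.
# All field names and rules here are hypothetical, for illustration only.

REQUIRED_FIELDS = {"expocode", "platform_name", "start_date", "pi_name"}

def validate_metadata(metadata: dict) -> list:
    """Return a list of human-readable problems; an empty list means the record passes."""
    problems = []
    # Check that every required field is present.
    for field in sorted(REQUIRED_FIELDS - metadata.keys()):
        problems.append(f"missing required field: {field}")
    # Expocodes conventionally combine a platform code with a date stamp
    # (e.g. '33RO20150410'); here we only check length and alphanumeric form.
    expocode = metadata.get("expocode", "")
    if expocode and not (expocode.isalnum() and len(expocode) == 12):
        problems.append(f"expocode {expocode!r} is not a 12-character alphanumeric code")
    return problems

# Example: a submission missing one field and carrying a malformed expocode.
record = {"expocode": "33RO-2015",
          "platform_name": "Ronald H. Brown",
          "start_date": "2015-04-10"}
issues = validate_metadata(record)
```

In a production workflow such checks would run both interactively (reporting problems back to the submitter in the web form) and programmatically (gating automated ingestion), so the same validation logic serves both paths.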