Diagnostic Evaluation of NMME Precipitation and Temperature Forecasts for the Continental United States

Thursday, 18 December 2014
Gregory S Karlovits1, Gabriele Villarini1, Allen Bradley1 and Gabriel Andres Vecchi2, (1)IIHR—Hydroscience and Engineering, Iowa City, IA, United States, (2)Geophysical Fluid Dynamics Laboratory, Princeton, NJ, United States
Forecasts of seasonal precipitation and temperature can provide information in advance of potentially costly disruptions caused by flood and drought conditions. The consequences of these adverse hydrometeorological conditions may be mitigated through informed planning and response, given useful and skillful forecasts of these conditions. However, the potential value and applicability of these forecasts are unavoidably linked to their forecast quality.

In this work we evaluate the skill of four general circulation models (GCMs) that are part of the North American Multi-Model Ensemble (NMME) project in forecasting seasonal precipitation and temperature over the continental United States. The GCMs we consider are the Geophysical Fluid Dynamics Laboratory (GFDL) CM2.1, the NASA Global Modeling and Assimilation Office (NASA-GMAO) GEOS-5, the Center for Ocean-Land-Atmosphere Studies – Rosenstiel School of Marine & Atmospheric Science (COLA-RSMAS) CCSM3, and the Canadian Centre for Climate Modelling and Analysis (CCCma) CanCM4. These models are available at a 1-degree spatial resolution and a monthly time step, with forecast lead times extending at least nine months and up to one year. These model ensembles are compared against gridded monthly temperature and precipitation data produced by the PRISM Climate Group, which serve as the reference observational dataset in this work.

Aspects of forecast quality are quantified using a diagnostic skill score decomposition that separates the potential skill from the conditional and unconditional biases of these forecasts. Evaluating the decomposed GCM forecast skill over the continental United States, by season and by lead time, allows a better understanding of the utility of these models for flood and drought prediction. It also represents a diagnostic tool that can provide model developers with feedback about the strengths and weaknesses of their models.
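The abstract does not specify which decomposition is used, but a common choice for this kind of diagnostic verification is the Murphy (1988) decomposition of the mean squared error skill score against climatology, SS = ρ² − (ρ − σ_f/σ_o)² − ((μ_f − μ_o)/σ_o)², where the first term is the potential skill and the second and third are penalties for conditional and unconditional bias. The sketch below illustrates that decomposition; it is a hypothetical example for paired forecast and observation series, not the authors' code.

```python
import numpy as np

def skill_score_decomposition(fcst, obs):
    """Murphy (1988) decomposition of the MSE skill score vs. climatology:
        SS = rho^2 - (rho - s_f/s_o)^2 - ((m_f - m_o)/s_o)^2
    rho^2 is the potential skill; the squared terms subtracted from it
    are the conditional- and unconditional-bias penalties."""
    fcst = np.asarray(fcst, dtype=float)
    obs = np.asarray(obs, dtype=float)
    rho = np.corrcoef(fcst, obs)[0, 1]      # forecast-observation correlation
    s_f, s_o = fcst.std(), obs.std()        # population (ddof=0) std. devs.
    potential = rho ** 2
    cond_bias = (rho - s_f / s_o) ** 2      # penalty for miscalibrated amplitude
    uncond_bias = ((fcst.mean() - obs.mean()) / s_o) ** 2  # mean-bias penalty
    return {
        "skill_score": potential - cond_bias - uncond_bias,
        "potential_skill": potential,
        "conditional_bias": cond_bias,
        "unconditional_bias": uncond_bias,
    }
```

With sample (ddof=0) statistics the decomposition is exact: the returned skill score equals 1 − MSE(forecast, obs) / MSE(climatology, obs), so the terms can be mapped over each grid cell, season, and lead time to build the kind of diagnostic maps described above.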