H33D-1649
Just How Accurate are Your Probabilistic Forecasts? Improving Forecast Quality Assessment in the Presence of Sampling Uncertainty

Wednesday, 16 December 2015
Poster Hall (Moscone South)
Tae-Ho Kang1, Ashish Sharma2 and Lucy Amanda Marshall1, (1)University of New South Wales, Sydney, NSW, Australia, (2)University of New South Wales, School of Civil and Environmental Engineering, Sydney, NSW, Australia
Abstract:
Use of ensemble forecasts as a means of characterising predictive uncertainty has become increasingly common in hydrological and meteorological forecasting. The need to characterise ensemble forecast quality has encouraged the development of reliable verification tools. Most metrics in current use are related to the Brier score, first proposed in 1950. However, the Brier score and its variants, including its decompositions and the Ranked Probability Score, pay little attention to differences between the characteristics of the forecast and sampled probability distributions. This difference, or error in the probability distribution, can bias all existing metrics derived from the Brier score. Similar biases arise when the second moment of the forecast differs from that of the observations, or when observations are scarce and hence difficult to characterise. This study therefore proposes simple and reliable measures of the first- and second-moment bias of the forecast ensemble and, in addition, approaches to analytically estimate the sampling uncertainty of the proposed measures. The proposed approaches are tested on synthetically generated hydrologic forecasts and observations, as well as on seasonal forecasts of the El Niño Southern Oscillation issued by the International Research Institute for Climate and Society (IRI-ENSO). The results show that the estimated uncertainty range of the first- and second-moment bias accurately represents the sampling error under most circumstances in a real forecasting system.
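The quantities discussed above can be illustrated with a short sketch. The snippet below computes the Brier score of an ensemble forecast for a binary exceedance event, together with one plausible formulation of first- and second-moment bias and a bootstrap estimate of the sampling uncertainty of the mean bias. All data are synthetic, and the bootstrap stands in for the analytic uncertainty estimates developed in the poster, whose exact form is not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example (illustrative only, not the authors' data):
# 200 forecast dates, 50-member ensemble; the binary event is
# "variable exceeds a fixed threshold".
n_times, n_members = 200, 50
threshold = 1.0
ensembles = rng.normal(loc=1.0, scale=0.5, size=(n_times, n_members))
observations = rng.normal(loc=1.1, scale=0.6, size=n_times)

# Brier score: mean squared difference between the forecast event
# probability (here, the ensemble exceedance fraction) and the
# 0/1 observed outcome.
forecast_prob = (ensembles > threshold).mean(axis=1)
outcome = (observations > threshold).astype(float)
brier = np.mean((forecast_prob - outcome) ** 2)

# Simple first- and second-moment biases of the ensemble relative to
# the observations (an assumed formulation for illustration).
mean_bias = ensembles.mean() - observations.mean()
var_bias = ensembles.var(ddof=1) - observations.var(ddof=1)

# Bootstrap the sampling uncertainty of the mean bias by resampling
# forecast dates, as a stand-in for the analytic estimates.
boot = np.empty(1000)
for b in range(boot.size):
    idx = rng.integers(0, n_times, size=n_times)
    boot[b] = ensembles[idx].mean() - observations[idx].mean()
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"Brier score: {brier:.3f}")
print(f"Mean bias: {mean_bias:.3f}  (95% bootstrap CI: [{lo:.3f}, {hi:.3f}])")
print(f"Variance bias: {var_bias:.3f}")
```

Because both the Brier score and the moment biases are computed from a finite sample of forecast dates, the width of the bootstrap interval shrinks as more dates are added, which is the sampling effect the abstract addresses.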