H51J-0736:
Uncertainty in Measured Data and Model Predictions: Essential Components for Mobilizing Environmental Data and Modeling

Friday, 19 December 2014
Daren Harmel, USDA-ARS, Temple, TX, United States
Abstract:
In spite of pleas for uncertainty analysis, such as Beven's (2006) question, "Should it not be required that every paper in both field and modeling studies attempt to evaluate the uncertainty in the results?", the uncertainty associated with hydrology and water quality data is rarely quantified and rarely considered in model evaluation. This oversight, justified in the past mainly by tenuous philosophical concerns, diminishes the value of measured data and ignores the environmental and socio-economic benefits of improved decisions and policies based on data with estimated uncertainty. The oversight extends to researchers, who typically fail to estimate uncertainty in measured discharge and water quality data because of the additional effort required, a lack of adequate scientific understanding of the subject, and fear of negative perception if data with "high" uncertainty are reported; the benefits of doing so, however, are certain. Furthermore, researchers have a responsibility for scientific integrity in reporting what is known and what is unknown, including the quality of measured data.

In response, we produced an uncertainty estimation framework and the first cumulative uncertainty estimates for measured water quality data (Harmel et al., 2006). From that framework, the Data Uncertainty Estimation Tool for Hydrology and Water Quality (DUET-H/WQ) was developed (Harmel et al., 2009). Application to several real-world data sets indicated that each data collection procedural category can contribute substantial uncertainty and that uncertainties typically increase in the order: discharge < sediment < dissolved N and P < total N and P.
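The cumulative estimates in that framework combine the uncertainties contributed by each procedural category (e.g., discharge measurement, sample collection, sample preservation/storage, laboratory analysis) through root-mean-square error propagation. The sketch below illustrates the idea only; the function name and the component values are hypothetical, not taken from DUET-H/WQ.

```python
import math

def cumulative_uncertainty(component_uncertainties):
    """Combine per-category uncertainties (each expressed as +/- %)
    into a cumulative probable uncertainty via root-sum-of-squares
    error propagation."""
    return math.sqrt(sum(e ** 2 for e in component_uncertainties))

# Hypothetical component uncertainties (+/- %) for a single constituent:
# discharge measurement, sample collection, preservation/storage,
# and laboratory analysis.
components = [10.0, 15.0, 5.0, 12.0]
print(f"Cumulative probable uncertainty: +/-{cumulative_uncertainty(components):.1f}%")
```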

Similarly, modelers address certain aspects of model uncertainty but ignore others, such as the impact of uncertainty in measured discharge and water quality data. Thus, we developed methods to incorporate both prediction uncertainty and calibration/validation data uncertainty into model goodness-of-fit evaluation (Harmel and Smith, 2007; Harmel et al., 2010). These methods enhance model evaluation by appropriately sharing the burden with "data providers," facilitating more realistic assessment of model performance, better identifying model deficiencies (e.g., where simulations do not fall within the uncertainty range of measured data), and communicating model performance more accurately.
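One way calibration/validation data uncertainty can be folded into goodness-of-fit terms is to treat any prediction that falls within the uncertainty range of the measured value as a perfect match, and to measure other deviations from the nearest uncertainty boundary. The following is a minimal sketch of that idea applied to a Nash-Sutcliffe-type indicator, assuming symmetric percentage uncertainty bounds; the function names and data values are hypothetical and do not reproduce the published methods exactly.

```python
import numpy as np

def deviations_with_uncertainty(obs, pred, uncertainty_pct):
    """Return deviations adjusted for measurement uncertainty: zero when
    a prediction lies within the observation's uncertainty range,
    otherwise the distance to the nearest uncertainty boundary."""
    half_range = obs * uncertainty_pct / 100.0
    lower, upper = obs - half_range, obs + half_range
    return np.where(pred < lower, lower - pred,
           np.where(pred > upper, pred - upper, 0.0))

def modified_nse(obs, pred, uncertainty_pct):
    """Nash-Sutcliffe efficiency computed from the uncertainty-adjusted
    deviations instead of raw residuals."""
    dev = deviations_with_uncertainty(obs, pred, uncertainty_pct)
    return 1.0 - np.sum(dev ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Hypothetical measured and simulated constituent loads.
obs = np.array([12.0, 30.0, 18.0, 45.0])
pred = np.array([10.0, 33.0, 25.0, 40.0])
print(f"Modified NSE: {modified_nse(obs, pred, uncertainty_pct=15.0):.3f}")
```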