A Philosopher’s Look at Model-Tuning
Thursday, 18 December 2014: 2:40 PM
Model tuning is unavoidable in climate modeling. This raises the question of whether data used in tuning or calibration can also be used to evaluate a model’s performance or skill. In the philosophical literature this question is discussed as the problem of old evidence: is a model more highly confirmed by novel evidence that it predicts, or is evidence accommodated during model construction equally confirmatory? In this paper I present several conditions under which a weak predictivism holds—conditions under which predictive success is more highly confirmatory of a model’s empirical performance than mere accommodation—and argue that these conditions are met in the case of climate modeling. In particular, I argue that predictive success can be evidence that a model has certain ‘good-making’ features that are ‘epistemically opaque’—that is, features whose presence is difficult to detect by other means. I also propose a Bayesian formulation of the predictivist thesis.
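One way such a weak predictivist thesis might be sketched in Bayesian terms is via the epistemically opaque good-making feature; the notation below is an illustrative reconstruction, not necessarily the formulation proposed in the paper:

```latex
% Illustrative sketch (assumed notation, not the paper's own):
%   M = the climate model, E = a body of evidence,
%   F = an epistemically opaque 'good-making' feature of M,
%   S = the model's future empirical success,
%   Pred(E) = M predicted E in advance,
%   Acc(E)  = M accommodated E during construction.

% (1) Predictive success is stronger evidence than accommodation
%     that the model possesses the opaque feature:
\[ P(F \mid \mathrm{Pred}(E)) > P(F \mid \mathrm{Acc}(E)) \]

% (2) The feature raises expected empirical performance:
\[ P(S \mid F) > P(S \mid \lnot F) \]

% Hence, under suitable screening-off assumptions, predictive
% success is better evidence of future performance:
\[ P(S \mid \mathrm{Pred}(E)) > P(S \mid \mathrm{Acc}(E)) \]
```

On this reading, prediction and accommodation differ in confirmatory force not directly, but by way of what each reveals about features of the model that are otherwise difficult to detect.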