MesoVICT focuses on the application, capability, and enhancement of spatial verification methods for deterministic and ensemble forecasts of precipitation, wind, and temperature over complex terrain, and includes assessment of observation uncertainty.
As numerical weather prediction (NWP) models began to increase considerably in resolution, it became clear that traditional gridpoint-by-gridpoint verification methods did not provide adequate diagnostic information about forecast performance for some users (e.g., Mass et al. 2002). Double penalties arising from spatial displacement (or timing) errors, together with the more rapid growth of small-scale errors, often result in poorer performance scores for higher-resolution forecasts than for their coarser counterparts, even when subjective evaluation would judge the higher-resolution models to be better. Subsequently, a host of new verification methods was developed in rapid succession (Ebert and McBride 2000; Harris et al. 2001; Casati et al. 2004; Nachamkin 2004; Davis et al. 2006; Keil and Craig 2007; Roberts and Lean 2008; Marzban et al. 2009; Gilleland et al. 2010b), which we will refer to as spatial methods. There are still gaps in our understanding when it comes to interpreting what all the new spatial methods tell us; gaining an in-depth understanding of forecast performance depends on grasping the full meaning of the verification results. Furthermore, the investment required to implement a new spatial method is relatively high compared to traditional verification methods, making it important to have criteria for deciding which methods best suit a particular user's need(s). Therefore, a spatial methods meta-verification, or intercomparison, project was created to try to answer some of these questions.
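The double-penalty effect can be illustrated with a minimal sketch (hypothetical one-dimensional data, not from the project): a high-resolution forecast that reproduces an observed rain band exactly but displaces it by a few grid cells is penalized twice under gridpoint matching, once for the misses where rain was observed and once for the false alarms where it was placed, so a blurry coarse forecast can score higher.

```python
import numpy as np

# Hypothetical 1-D grid: a "rain band" observed at cells 10-14 of 30.
n = 30
obs = np.zeros(n)
obs[10:15] = 1.0

# High-resolution forecast: correct shape and intensity, displaced 3 cells.
sharp = np.zeros(n)
sharp[13:18] = 1.0

# Coarse forecast: rain smeared over a broad area, no sharp structure.
coarse = np.zeros(n)
coarse[8:20] = 1.0

def csi(fcst, obs):
    """Critical Success Index from gridpoint-by-gridpoint matching."""
    hits = np.sum((fcst == 1) & (obs == 1))
    misses = np.sum((fcst == 0) & (obs == 1))
    false_alarms = np.sum((fcst == 1) & (obs == 0))
    return hits / (hits + misses + false_alarms)

# The displaced sharp forecast incurs both misses and false alarms
# (the double penalty) and scores below the smeared coarse forecast.
print(round(csi(sharp, obs), 3))   # 0.25
print(round(csi(coarse, obs), 3))  # 0.417
```

Spatial methods are designed precisely to separate such displacement error from genuine amplitude or structure error, rather than conflating them as the gridpoint score does here.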
The first spatial verification methods intercomparison project (ICP; Gilleland et al. 2009, 2010a) was initiated in 2007 with the aim of better understanding the rapidly increasing literature concerning new spatial verification methods, and several questions were addressed:
* How does each method inform about forecast performance overall?
* Which aspects of forecast error does each method best identify (e.g., location error, scale dependence of the skill, etc.)?
* Which methods yield identical information to each other, and which methods provide complementary information?
The aim of the ICP was to analyze the behavior and provide a structured and systematic cataloging of the existing spatial verification methods, with the final goal of providing guidance to the users on...