Maia, A. de H. N., Meinke, H., Lennox, S. and Stone, R. (2007) Inferential, non-parametric statistics to assess the quality of probabilistic forecast systems. Monthly Weather Review, 135 (2), pp. 351-362.
Article Link(s): http://dx.doi.org/10.1175/MWR3291.1
Many statistical forecast systems are available to interested users. To be useful for decision-making, these systems must be based on evidence of underlying mechanisms. Once causal connections between the mechanisms and their statistical manifestation have been firmly established, the forecasts must also provide some quantitative evidence of 'quality'. However, the quality of statistical climate forecast systems (forecast quality) is an ill-defined and frequently misunderstood property. Often, providers and users of such forecast systems are unclear about what 'quality' entails and how to measure it, leading to confusion and misinformation. Here we present a generic framework to quantify aspects of forecast quality using an inferential approach to calculate nominal significance levels (p-values) that can be obtained either by directly applying non-parametric statistical tests such as Kruskal-Wallis (KW) or Kolmogorov-Smirnov (KS) or by using Monte Carlo methods (in the case of forecast skill scores). Once converted to p-values, these forecast quality measures provide a means to objectively evaluate and compare temporal and spatial patterns of forecast quality across datasets and forecast systems. Our analysis demonstrates the importance of providing p-values rather than adopting some arbitrarily chosen significance level such as p < 0.05 or p < 0.01, which is still common practice. This is illustrated by applying non-parametric tests (such as KW and KS) and skill scoring methods (LEPS and RPSS) to the 5-phase Southern Oscillation Index classification system using historical rainfall data from Australia, the Republic of South Africa and India. The selection of quality measures is based solely on their common use and does not constitute endorsement. We found that non-parametric statistical tests can be adequate proxies for skill measures such as LEPS or RPSS.
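The abstract's first route to a p-value is the direct application of non-parametric tests to outcome data grouped by forecast category. The sketch below illustrates that idea with SciPy's standard KW and KS tests on synthetic rainfall totals grouped by a hypothetical 5-phase classification; the data, group sizes, and distribution parameters are invented for illustration and are not from the study.

```python
# Illustrative sketch only: synthetic rainfall grouped by a hypothetical
# 5-phase SOI classification (not the study's data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic seasonal rainfall totals (mm), one sample per phase;
# higher phases are drawn slightly wetter so the tests have signal.
phases = {p: rng.gamma(shape=2.0 + 0.3 * p, scale=40.0, size=30)
          for p in range(1, 6)}

# Kruskal-Wallis: do rainfall distributions differ across the 5 phases?
kw_stat, kw_p = stats.kruskal(*phases.values())

# Kolmogorov-Smirnov: compare two contrasting phases directly.
ks_stat, ks_p = stats.ks_2samp(phases[1], phases[5])

# Report the p-values themselves, as the paper advocates, rather than
# a pass/fail verdict at an arbitrary threshold such as 0.05.
print(f"KW p-value: {kw_p:.4f}")
print(f"KS p-value: {ks_p:.4f}")
```

Reporting the p-values directly, rather than a binary significant/non-significant verdict, is what allows quality to be compared across stations, seasons, and forecast systems.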
The framework can be implemented anywhere, regardless of dataset, forecast system or quality measure. Eventually such inferential evidence should be complemented by descriptive statistical methods in order to fully assist in operational risk management.
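The abstract's second route to a p-value, for skill scores, is Monte Carlo resampling: compare the observed score against a null distribution obtained by shuffling the forecast categories. The sketch below uses a generic stand-in score (the spread of phase-conditional means), not LEPS or RPSS; the data and labels are synthetic assumptions.

```python
# Illustrative sketch: Monte Carlo (permutation) p-value for a generic
# forecast skill score. The score here is a stand-in, not LEPS or RPSS.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rainfall totals and hypothetical 5-phase labels.
rain = rng.gamma(2.0, 40.0, size=150)
labels = rng.integers(1, 6, size=150)

def score(values, groups):
    """Spread of group means: larger when phases discriminate rainfall."""
    means = [values[groups == g].mean() for g in np.unique(groups)]
    return float(np.var(means))

observed = score(rain, labels)

# Null distribution: shuffle the phase labels, recompute the score.
n_perm = 2000
null = np.array([score(rain, rng.permutation(labels))
                 for _ in range(n_perm)])

# One-sided Monte Carlo p-value (with the standard +1 correction).
p_value = (1 + np.sum(null >= observed)) / (n_perm + 1)
print(f"Monte Carlo p-value: {p_value:.4f}")
```

The same recipe applies to any scalar quality measure: only the `score` function changes, so LEPS or RPSS could be dropped in without altering the resampling machinery.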
Additional Information: Author version © Queensland Department of Primary Industries and Fisheries. Reproduced in accordance with the copyright policy of the publisher. © American Meteorological Society. Access to the published version may be available via the publisher's website.
Keywords: statistical; forecast; climate; rainfall; temperature.
Subjects: Science > Science (General); Science > Statistics > Statistical data analysis
Deposited On: 22 Jan 2008
Last Modified: 08 Jun 2011 23:36