Published in

American Meteorological Society, Monthly Weather Review, 136(12), pp. 5162–5182, 2008

DOI: 10.1175/2008mwr2551.1

Probabilistic Verification of Monthly Temperature Forecasts

This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving restricted

Data provided by SHERPA/RoMEO

Abstract

Monthly forecasting bridges the gap between medium-range weather forecasting and seasonal predictions. While such forecasts in the prediction range of 1–4 weeks are vital to many applications in the context of weather and climate risk management, surprisingly little has been published on the actual monthly prediction skill of existing global circulation models. Since 2004, the European Centre for Medium-Range Weather Forecasts has operationally run a dynamical monthly forecasting system (MOFC). The aim of this study is to provide a systematic and fully probabilistic evaluation of MOFC prediction skill for weekly averaged forecasts of surface temperature as a function of lead time, region, and season. This requires the careful setup of an appropriate verification context, given that the verification period is short and the ensemble sizes are small. This study considers the annual cycle of operational temperature forecasts issued in 2006, as well as the corresponding 12 yr of reforecasts (hindcasts). The debiased ranked probability skill score (RPSS_D) is applied for verification. This probabilistic skill metric has the advantage of being insensitive to the intrinsic unreliability due to small ensemble sizes, an issue that is relevant in the present context since the MOFC hindcasts have only five ensemble members. The formulation of the RPSS_D is generalized here such that the small hindcast ensembles and the large operational forecast ensembles can be jointly considered in the verification. A bootstrap method is applied to estimate confidence intervals. The results show that (i) MOFC forecasts are generally not worse than climatology and do outperform persistence, (ii) MOFC forecasts are skillful beyond a lead time of 18 days over some ocean regions and to a small degree also over tropical South America and Africa, (iii) extratropical continental predictability essentially vanishes after 18 days of integration, and (iv) even when the average predictability is low there can nevertheless be climatic conditions under which the forecasts contain useful information. With the present model, a significant skill improvement beyond 18 days of integration can only be achieved by increasing the averaging interval. Recalibration methods are expected to be without effect since the forecasts are essentially reliable.
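For illustration only (this is not the authors' code): below is a minimal Python sketch of the debiased ranked probability skill score and a percentile bootstrap for its confidence interval, assuming K equiprobable climatological categories and a single ensemble size M. The debiasing term D = (K^2 - 1)/(6*K*M) is the intrinsic-unreliability correction of Weigel et al. (2007); the paper's generalization of the RPSS_D to jointly verify the 5-member hindcasts and the larger operational ensembles is not reproduced here.

import numpy as np

def rps(prob_fcst, obs_cat):
    # Ranked probability score: squared distance between the cumulative
    # forecast distribution and the cumulative (0/1) observation.
    n_cat = len(prob_fcst)
    obs = np.zeros(n_cat)
    obs[obs_cat] = 1.0
    return np.sum((np.cumsum(prob_fcst) - np.cumsum(obs)) ** 2)

def rpss_d(fcst_probs, obs_cats, ens_size):
    # Debiased RPSS (Weigel et al. 2007) for equiprobable categories.
    # fcst_probs: (N, K) array of forecast probabilities;
    # obs_cats: (N,) array of observed category indices (0-based).
    n_fcst, n_cat = fcst_probs.shape
    rps_fcst = np.mean([rps(p, o) for p, o in zip(fcst_probs, obs_cats)])
    clim = np.full(n_cat, 1.0 / n_cat)            # climatological reference
    rps_clim = np.mean([rps(clim, o) for o in obs_cats])
    d = (n_cat**2 - 1) / (6.0 * n_cat * ens_size) # intrinsic-unreliability term
    return 1.0 - rps_fcst / (rps_clim + d)

def bootstrap_ci(fcst_probs, obs_cats, ens_size, n_boot=1000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample forecast-observation pairs with replacement
    # and take the empirical (alpha/2, 1 - alpha/2) quantiles of the skill score.
    rng = np.random.default_rng(seed)
    n = len(obs_cats)
    scores = [rpss_d(fcst_probs[idx], obs_cats[idx], ens_size)
              for idx in (rng.integers(0, n, size=n) for _ in range(n_boot))]
    return np.percentile(scores, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical usage: 3 categories (below/near/above normal), 5-member hindcasts.
probs = np.array([[0.6, 0.2, 0.2], [0.2, 0.4, 0.4], [0.1, 0.3, 0.6]])
obs = np.array([0, 1, 2])
print(rpss_d(probs, obs, ens_size=5))
print(bootstrap_ci(probs, obs, ens_size=5, n_boot=200))

Note that with only five members the correction term D is sizable relative to the climatological RPS, which is why an uncorrected RPSS would systematically understate the skill of the hindcasts.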