
Published in

Elsevier, Ecological Modelling, 221(6), p. 960-964

DOI: 10.1016/j.ecolmodel.2009.12.003

A proposal of an indicator for quantifying model robustness based on the relationship between variability of errors and of explored conditions

Journal article published in 2010 by R. Confalonieri, S. Bregaglio, M. Acutis
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving forbidden
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

The evaluation of biophysical models is usually carried out by estimating the agreement between measured and simulated data and, more rarely, by using indices for other aspects, such as model complexity and overparameterization. In spite of the importance of model robustness, especially for large-area applications, no proposals for its quantification are available. In this paper, we would like to open a discussion on this issue, proposing a first approach for quantifying robustness based on the ratio between the variability of model error and the variability of the explored conditions. We used modelling efficiency (EF) to quantify error in model predictions and a normalized agrometeorological index (SAM), based on cumulated rainfall and reference evapotranspiration, to characterize the conditions of application. Population standard deviations of EF and SAM were used to quantify their variability. The indicator was tested for models estimating meteorological variables and crop state variables. The values provided by the robustness indicator (IR) were discussed according to the models' features and to the typology and number of processes simulated. IR increased with the number of processes simulated and, within the same typology of model, with the degree of overparameterization. No correlations were found between IR and two of the most used indices of model error (RRMSE, EF). This supports its inclusion in integrated systems for model evaluation.
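
As a rough illustration of how such an indicator could be computed, the sketch below follows the abstract: EF (modelling efficiency) quantifies model error for each set of conditions, a SAM-like index characterizes those conditions, and IR is taken as the ratio of their population standard deviations. The exact SAM normalization and the helper names are assumptions here (the abstract only states that SAM is based on cumulated rainfall and reference evapotranspiration); the example numbers are made up. See the paper for the authoritative definitions.

```python
import numpy as np

def modelling_efficiency(observed, simulated):
    """Nash-Sutcliffe modelling efficiency (EF): 1 for a perfect fit,
    0 when the model is no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def sam_index(cumulated_rainfall, cumulated_reference_et):
    """Hypothetical SAM-like index: cumulated rainfall over cumulated
    reference evapotranspiration (the paper defines the exact
    normalization used for SAM)."""
    return cumulated_rainfall / cumulated_reference_et

def robustness_indicator(ef_values, sam_values):
    """IR as the ratio between the population standard deviation of model
    error (EF) and that of the explored conditions (SAM); lower values
    mean the error varies little relative to how much conditions vary."""
    sigma_ef = np.std(ef_values, ddof=0)    # population standard deviation
    sigma_sam = np.std(sam_values, ddof=0)
    return sigma_ef / sigma_sam

# Example with made-up EF and SAM values for four sites/years:
ef_per_site = [0.82, 0.64, 0.71, 0.55]
sam_per_site = [sam_index(r, et) for r, et in [(350, 900), (720, 880), (480, 920), (980, 850)]]
print(robustness_indicator(ef_per_site, sam_per_site))
```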