Published in

Wiley, Oikos, 123(4), p. 385-388, 2013

DOI: 10.1111/j.1600-0706.2013.01073.x

Ecologists should not use statistical significance tests to interpret simulation model results

This paper is available in a repository.


Abstract

Simulation models are widely used to represent the dynamics of ecological systems. A common question with such models is how changes to a parameter value or functional form in the model alter the results. Some authors have chosen to answer that question using frequentist statistical hypothesis tests (e.g. ANOVA). This is inappropriate for two reasons. First, p-values are determined by statistical power (i.e. replication), which can be arbitrarily high in a simulation context, producing minuscule p-values regardless of the effect size. Second, the null hypothesis of no difference between treatments (e.g. parameter values) is known a priori to be false, invalidating the premise of the test. Use of p-values is troublesome (rather than simply irrelevant) because small p-values lend a false sense of importance to observed differences. We argue that modelers should abandon this practice and focus on evaluating the magnitude of differences between simulations.
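The abstract's first objection — that replication in a simulation can be made arbitrarily high, so p-values shrink toward zero regardless of effect size — is easy to demonstrate. The sketch below is not from the paper; it is a minimal illustration using a two-sample z-test (a normal-approximation test, appropriate at large n) on two simulated "treatments" whose true means differ by a trivial 0.01 against a spread of 1.0. The effect size never changes, but the p-value collapses as the number of replicates grows.

```python
import math
import random

def two_sample_p(a, b):
    """Difference in means and two-sided normal-approximation p-value.

    For large samples the two-sample t statistic is well approximated
    by a standard normal, so we use math.erf for the p-value.
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return ma - mb, p

random.seed(1)
for n in (100, 10_000, 1_000_000):
    a = [random.gauss(0.00, 1.0) for _ in range(n)]  # "treatment" 1
    b = [random.gauss(0.01, 1.0) for _ in range(n)]  # "treatment" 2
    diff, p = two_sample_p(a, b)
    # The observed difference hovers near 0.01 at every n; only p changes.
    print(f"n={n:>9}  effect={diff:+.4f}  p={p:.3g}")
```

At small n the 0.01 difference is statistically invisible; at a million replicates it is "highly significant" — exactly the false sense of importance the abstract warns against, since the biological magnitude of the difference is identical in every run.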