Published in

Springer, Behavior Research Methods, 46(4), pp. 1152-1166, 2014

DOI: 10.3758/s13428-013-0440-0

Weighting strategies in the meta-analysis of single-case studies

Journal article published in 2014 by Rumen Manolov, Georgina Guilera, Vicenta Sierra
This paper is made freely available by the publisher.


Archiving policy (data provided by SHERPA/RoMEO): preprint archiving allowed; postprint archiving restricted; archiving of the published version forbidden.

Abstract

Establishing the evidence base of interventions taking place in areas such as psychology and special education is one of the research aims of single-case designs, in conjunction with the aim of improving the well-being of participants in the studies. The scientific criteria for solid evidence focus on the internal and external validity of the studies, and for both types of validity, replicating studies and integrating the results of these replications (i.e., meta-analyzing) is crucial. In the present study, we deal with one aspect of meta-analysis, namely the weighting strategy used when computing an average effect size across studies. Several weighting strategies suggested for single-case designs are discussed and compared in the context of both simulated and real-life data. The results indicated that there are no major differences between the strategies, and thus we consider it important to choose weights with a sound statistical and methodological basis, with scientific parsimony as another relevant criterion. More empirical research and conceptual discussion are warranted regarding the optimal weighting strategy in single-case designs, alongside investigation of the optimal effect size measure in these types of designs.
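To make the idea of a weighting strategy concrete, the sketch below (not taken from the paper; all effect sizes, variances, and measurement counts are hypothetical example values) computes a weighted average effect size across studies under three commonly discussed strategies: equal weights, inverse-variance weights, and weights proportional to the number of measurements per study.

```python
# Illustrative sketch of weighting strategies for averaging effect sizes
# across single-case studies. Values below are hypothetical examples only.

effect_sizes = [0.42, 0.55, 0.30, 0.61]      # per-study effect size estimates
variances    = [0.010, 0.025, 0.015, 0.040]  # per-study sampling variances
n_points     = [20, 12, 18, 10]              # per-study number of measurements

def weighted_mean(es, weights):
    """Weighted average effect size: sum(w_i * es_i) / sum(w_i)."""
    return sum(w * e for w, e in zip(weights, es)) / sum(weights)

# Strategy 1: equal weights (simple unweighted mean)
equal = weighted_mean(effect_sizes, [1.0] * len(effect_sizes))

# Strategy 2: inverse-variance weights (more precise studies count more)
inv_var = weighted_mean(effect_sizes, [1.0 / v for v in variances])

# Strategy 3: weights proportional to the number of measurements per study
by_n = weighted_mean(effect_sizes, [float(n) for n in n_points])

print(f"Equal weights:      {equal:.3f}")
print(f"Inverse variance:   {inv_var:.3f}")
print(f"Measurement counts: {by_n:.3f}")
```

The choice among such strategies is exactly the question the paper examines; the sketch only shows that the same set of study-level effect sizes can yield somewhat different averages depending on the weights applied.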