Published in

Proceedings of the National Academy of Sciences, 120(23), 2023

DOI: 10.1073/pnas.2215572120

Competition and moral behavior: A meta-analysis of 45 crowd-sourced experimental designs

Journal article published in 2023 by Christoph Huber, Anna Dreber, Jürgen Huber, Magnus Johannesson, Michael Kirchler, Utz Weitzel, Miguel Abellán, Xeniya Adayeva, Fehime Ceren Ay, Kai Barron, Zachariah Berry, Werner Bönte, Katharina Brütt, Muhammed Bulutay, Pol Campos-Mercade and other authors.
This paper is made freely available by the publisher.

Preprint: archiving forbidden
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Does competition affect moral behavior? This fundamental question has been debated among leading scholars for centuries, and more recently, it has been tested in experimental studies yielding a body of rather inconclusive empirical evidence. A potential source of ambivalent empirical results on the same hypothesis is design heterogeneity—variation in true effect sizes across various reasonable experimental research protocols. To provide further evidence on whether competition affects moral behavior and to examine whether the generalizability of a single experimental study is jeopardized by design heterogeneity, we invited independent research teams to contribute experimental designs to a crowd-sourced project. In a large-scale online data collection, 18,123 experimental participants were randomly allocated to 45 randomly selected experimental designs out of 95 submitted designs. We find a small adverse effect of competition on moral behavior in a meta-analysis of the pooled data. The crowd-sourced design of our study allows for a clean identification and estimation of the variation in effect sizes above and beyond what could be expected due to sampling variance. We find substantial design heterogeneity—estimated to be about 1.6 times as large as the average standard error of effect size estimates of the 45 research designs—indicating that the informativeness and generalizability of results based on a single experimental design are limited. Drawing strong conclusions about the underlying hypotheses in the presence of substantive design heterogeneity requires moving toward much larger data collections on various experimental designs testing the same hypothesis.
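The abstract's key quantity, design heterogeneity estimated at about 1.6 times the average standard error of the design-level effect sizes, is the kind of output a standard random-effects meta-analysis produces. As a rough illustration of the mechanics only, not the authors' actual estimation procedure, the sketch below computes a DerSimonian-Laird estimate of the between-design variance τ² and compares τ to the average design-level standard error; all effect sizes and standard errors are simulated placeholders, not data from the study.

```python
# Minimal sketch of a random-effects meta-analysis with a
# DerSimonian-Laird heterogeneity estimate. Illustrative only:
# effect sizes and standard errors are simulated, not real data.
import numpy as np

rng = np.random.default_rng(0)
k = 45                                  # number of experimental designs
tau_true = 0.10                         # simulated design heterogeneity
se = rng.uniform(0.04, 0.08, size=k)    # simulated per-design standard errors
theta = rng.normal(-0.05, tau_true, k)  # simulated true design-level effects
y = rng.normal(theta, se)               # simulated observed effect sizes

v = se**2
w = 1.0 / v                             # inverse-variance (fixed-effect) weights
y_fe = np.sum(w * y) / np.sum(w)        # fixed-effect pooled estimate
Q = np.sum(w * (y - y_fe) ** 2)         # Cochran's Q statistic
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)      # DL estimate of between-design variance

w_re = 1.0 / (v + tau2)                 # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)  # random-effects pooled effect
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled effect: {y_re:.3f} (SE {se_re:.3f})")
print(f"tau (design heterogeneity): {np.sqrt(tau2):.3f}")
print(f"tau / mean design SE: {np.sqrt(tau2) / se.mean():.2f}")
```

The final ratio printed above corresponds to the comparison reported in the abstract: when τ is well above the typical design-level standard error, a single design's result is dominated by which design was chosen rather than by sampling noise alone.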