Published in

Oxford University Press (OUP), GigaScience, 10(1), 2021

DOI: 10.1093/gigascience/giaa155

Understanding the impact of preprocessing pipelines on neuroimaging cortical surface analyses

This paper is made freely available by the publisher.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

Background
The choice of preprocessing pipeline introduces variability into neuroimaging analyses that affects the reproducibility of scientific findings. Features derived from structural and functional MRI data are sensitive to the algorithmic and parametric differences of preprocessing tasks such as image normalization, registration, and segmentation. It is therefore critical to understand, and potentially mitigate, the cumulative biases of pipelines in order to distinguish biological effects from methodological variance.

Methods
Here we use an open structural MRI dataset (ABIDE), supplemented with the Human Connectome Project, to highlight the impact of pipeline selection on cortical thickness measures. Specifically, we investigate the effect of (i) software tool (e.g., ANTs, CIVET, FreeSurfer), (ii) cortical parcellation (Desikan-Killiany-Tourville, Destrieux, Glasser), and (iii) quality control procedure (manual, automatic). We divide our statistical analyses by (i) method type, i.e., task-free (unsupervised) versus task-driven (supervised), and (ii) inference objective, i.e., neurobiological group differences versus individual prediction.

Results
Results show that software, parcellation, and quality control significantly affect task-driven neurobiological inference. Additionally, software selection strongly affects both neurobiological (i.e., group) and individual task-free analyses, and quality control alters performance on individual-centric prediction tasks.

Conclusions
This comparative performance evaluation partially explains the source of inconsistencies in neuroimaging findings. Furthermore, it underscores the need for more rigorous scientific workflows and accessible informatics resources to replicate and compare preprocessing pipelines, in order to address the compounding problem of reproducibility in the age of large-scale, data-driven computational neuroscience.
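
To make the kind of comparison described above concrete, the following is a minimal Python sketch of one way a pipeline-sensitivity check on group differences could be set up: per-subject regional cortical thickness tables produced by each pipeline are tested region by region, and the resulting sets of "significant" regions are compared across pipelines. The file names, column names, group labels, and the Welch t-test with FDR correction are illustrative assumptions for this sketch, not the paper's exact statistical procedure.

# Minimal sketch: compare group-difference findings across preprocessing pipelines.
# Assumes each pipeline produced a CSV of per-subject regional cortical thickness
# (rows = subjects, columns = parcellation regions) plus a 'group' column.
# File names, column names, and the Welch t-test + FDR model are illustrative
# assumptions, not the study's exact analysis.
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

PIPELINES = {                       # hypothetical per-pipeline output files
    "ANTs":       "thickness_ants_dkt.csv",
    "CIVET":      "thickness_civet_dkt.csv",
    "FreeSurfer": "thickness_freesurfer_dkt.csv",
}

def significant_regions(csv_path, alpha=0.05):
    """Return the set of regions showing an FDR-corrected group difference."""
    df = pd.read_csv(csv_path)
    regions = [c for c in df.columns if c != "group"]
    grp_a = df[df["group"] == "ASD"]
    grp_b = df[df["group"] == "control"]
    pvals = [ttest_ind(grp_a[r], grp_b[r], equal_var=False).pvalue for r in regions]
    rejected, _, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return {r for r, hit in zip(regions, rejected) if hit}

findings = {name: significant_regions(path) for name, path in PIPELINES.items()}

# Disagreement in which regions reach significance is the pipeline-induced
# variability that this kind of comparative evaluation quantifies.
common = set.intersection(*findings.values())
for name, regions in findings.items():
    print(f"{name}: {len(regions)} significant regions, {len(regions & common)} shared by all")

In this sketch, the key design choice is holding the dataset, parcellation, and statistical test fixed while varying only the preprocessing software, so that any divergence in the detected regions can be attributed to the pipeline rather than to the analysis.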