Published in

American Association of Neurological Surgeons, Journal of Neurosurgery: Spine, 34(5), pp. 779-787, 2021

DOI: 10.3171/2020.8.spine20963


Utility of prediction model score: a proposed tool to standardize the performance and generalizability of clinical predictive models based on systematic review

Distributing this paper is prohibited by the publisher

Full text: Unavailable

Preprint: archiving forbidden
Postprint: archiving forbidden
Published version: policy unknown
Data provided by SHERPA/RoMEO

Abstract

OBJECTIVE The objective of this study was to evaluate the characteristics and performance of current prediction models in the fields of spine metastasis and degenerative spine disease and to create a scoring system that allows direct comparison of the prediction models.

METHODS A systematic search of PubMed and Embase was performed to identify relevant studies that included either the proposal of a prediction model or an external validation of a previously proposed prediction model with 1-year outcomes. Characteristics of the original study and the discriminative performance of external validations were then assigned points based on thresholds from the overall cohort.

RESULTS Nine prediction models were included in the spine metastasis category, while 6 prediction models were included in the degenerative spine category. After the proposed utility of prediction model score was assigned to the spine metastasis prediction models, only 1 reached the grade of excellent, while 2 were graded as good, 3 as fair, and 3 as poor. Of the 6 included degenerative spine models, 1 reached the excellent grade, while 3 were graded as good, 1 as fair, and 1 as poor.

CONCLUSIONS As interest in utilizing predictive analytics in spine surgery increases, there is a concomitant increase in the number of published prediction models that differ in methodology and performance. Before these models are applied to patient care, they must be evaluated. To begin addressing this issue, the authors proposed a grading system that compares these models on various metrics related to their original design as well as their internal and external validation. Ultimately, this may aid clinicians in determining the relative validity and usability of a given model.
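
The abstract describes the scoring mechanism only at a high level: characteristics of the original study and the discriminative performance of external validations are assigned points against thresholds, and the total maps to a grade of excellent, good, fair, or poor. As a rough illustration of how such a threshold-based rubric might look, the Python sketch below assigns placeholder points and grade boundaries; every cutoff, point value, and criterion in it is a hypothetical assumption for illustration, not the paper's actual scoring system.

```python
# Hypothetical sketch of a threshold-based "utility of prediction model" rubric.
# The AUC cutoffs, point values, and grade boundaries below are illustrative
# placeholders; the paper's actual criteria are not given in this abstract.

def auc_points(auc: float) -> int:
    """Award points for discriminative performance (external-validation AUC)."""
    if auc >= 0.80:
        return 3
    if auc >= 0.70:
        return 2
    if auc >= 0.60:
        return 1
    return 0

def design_points(externally_validated: bool, multicenter: bool) -> int:
    """Award points for characteristics of the original study design."""
    return int(externally_validated) + int(multicenter)

def grade(total: int) -> str:
    """Map a total score onto the four grades used in the abstract."""
    if total >= 5:
        return "excellent"
    if total >= 4:
        return "good"
    if total >= 2:
        return "fair"
    return "poor"

# Example: a multicenter model with an external-validation AUC of 0.76
# scores 2 + 2 = 4 points, i.e., "good" under these placeholder thresholds.
total = auc_points(0.76) + design_points(externally_validated=True, multicenter=True)
print(grade(total))
```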