Published in

De Gruyter, Clinical Chemistry and Laboratory Medicine, 62(5), pp. 900–910, 2023

DOI: 10.1515/cclm-2023-0847

Evaluation of five multisteroid LC‒MS/MS methods used for routine clinical analysis: comparable performance was obtained for nine analytes


Abstract

Objectives: A liquid chromatography–tandem mass spectrometry (LC‒MS/MS)-based interlaboratory comparison study was performed for nine steroid analytes with five participating laboratories. The sample set contained 40 pooled human serum samples generated from preanalyzed leftovers. To obtain a well-balanced distribution across the reference interval of each steroid, the leftovers first underwent a targeted mixing step.

Methods: All participants measured the sample set once using their own multianalyte protocols and calibrators. Four participants used in-house developed measurement platforms, three of them with IVD-CE certified calibrators; the fifth laboratory used a complete LC‒MS/MS kit from an IVD manufacturer. All laboratories reported results for 17-hydroxyprogesterone, androstenedione, cortisol, and testosterone; four laboratories also reported results for 11-deoxycortisol, corticosterone, cortisone, dehydroepiandrosterone sulfate (DHEAS), and progesterone.

Results: Good or acceptable overall comparability was found in Bland‒Altman and Passing‒Bablok analyses. Mean bias against the overall mean remained within ±10 %, except for DHEAS, androstenedione, and progesterone at one site and for cortisol and corticosterone at two sites (maximum −18.9 %, for androstenedione). The main analytical problems revealed by this study were a bias not previously identified in proficiency testing, operator errors, unsupported matrix types, and higher inaccuracy and imprecision at the lower ends of the measuring intervals.

Conclusions: This study shows that intermethod comparison is essential for monitoring the validity of an assay and should serve as an example of how external quality assessment could work in addition to organized proficiency testing schemes.
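The headline comparison metric of the abstract, the mean percent bias of each laboratory against the overall (all-laboratory) mean per sample, can be sketched as follows. This is a minimal illustration only: the laboratory names and cortisol concentrations are hypothetical, not data from the study.

```python
import numpy as np

def mean_percent_bias(results):
    """Mean percent bias of each lab against the all-lab mean, per sample.

    `results` maps a lab name to an array of concentrations for the same
    ordered set of pooled samples. Returns {lab: mean % bias}.
    """
    labs = sorted(results)
    data = np.array([results[lab] for lab in labs])  # shape: (n_labs, n_samples)
    overall = data.mean(axis=0)                      # consensus value per sample
    pct_diff = 100.0 * (data - overall) / overall    # % deviation from consensus
    return {lab: pct_diff[i].mean() for i, lab in enumerate(labs)}

# Hypothetical cortisol results (nmol/L) for three labs across four pooled samples
demo = {
    "lab_A": np.array([102.0, 250.0, 410.0, 560.0]),
    "lab_B": np.array([ 98.0, 240.0, 400.0, 545.0]),
    "lab_C": np.array([110.0, 262.0, 430.0, 590.0]),
}
bias = mean_percent_bias(demo)
for lab, b in bias.items():
    print(f"{lab}: mean bias {b:+.1f} %")
```

In the study, a lab would flag an analyte when this figure falls outside ±10 %, as happened for DHEAS, androstenedione, progesterone, cortisol, and corticosterone at individual sites.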