Published in

De Gruyter, Scandinavian Journal of Pain, (9), pp. 38-41

DOI: 10.1016/j.sjpain.2015.05.004

Reliability of pressure pain threshold testing in healthy pain free young adults

This paper is available in a repository.

Preprint: archiving forbidden
Postprint: archiving forbidden
Published version: archiving restricted
Data provided by SHERPA/RoMEO

Abstract

Background and aims: Investigation of the multidimensional correlates of pressure pain threshold (PPT) requires the study of large cohorts, and thus the use of multiple raters, to achieve sufficient statistical power. Although PPT testing has previously been shown to be reliable, the reliability of multiple raters and the presence of systematic bias between raters have not been reported. The aim of this study was to evaluate the intrarater and interrater reliability of PPT measurement by handheld algometer at the wrist, leg, cervical spine and lumbar spine. Additionally, the study aimed to calculate the sample sizes required for parallel and cross-over studies at various effect sizes, accounting for measurement error.

Methods: Five research assistants (RAs) each tested 20 pain-free subjects at the wrist, leg, cervical spine and lumbar spine. Intraclass correlation coefficient (ICC), standard error of measurement (SEM) and systematic bias were calculated.

Results: Both intrarater reliability (ICC = 0.81–0.99) and interrater reliability (ICC = 0.92–0.95) were excellent, and intrarater SEM ranged from 79 to 100 kPa. Systematic bias was detected at three sites, with no single rater consistently rating higher or lower than the others across all sites.

Conclusion: The excellent ICCs observed in this study support the use of multiple RAs following standardised protocols in large cohort studies, with the caveat that, given the systematic rater bias identified here, analyses should confirm that study estimates are not confounded by rater.

Implications: Thorough training of raters in PPT testing results in excellent interrater reliability. Clinical trials using PPT as an outcome measure should use a priori sample size calculations.
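
The reliability statistics named in the abstract (ICC, SEM) and the a priori sample size calculation can be illustrated with a short sketch. The Python code below is not the authors' analysis code: it assumes a two-way random-effects ICC(2,1), the common definition SEM = SD x sqrt(1 - ICC), and a normal-approximation sample-size formula for a two-group parallel design; the simulated data and function names are hypothetical.

# Illustrative sketch only (not the authors' analysis code). Assumes a two-way
# random-effects ICC(2,1), SEM = SD * sqrt(1 - ICC), and a normal-approximation
# sample-size formula; the data and function names are hypothetical.
import numpy as np


def icc2_and_sem(scores: np.ndarray):
    """ICC(2,1) and SEM for an (n_subjects, k_raters) array of PPT readings (kPa)."""
    n, k = scores.shape
    grand_mean = scores.mean()
    subj_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    # Mean squares from a two-way ANOVA without replication
    ss_subj = k * ((subj_means - grand_mean) ** 2).sum()
    ss_rater = n * ((rater_means - grand_mean) ** 2).sum()
    ss_error = ((scores - grand_mean) ** 2).sum() - ss_subj - ss_rater
    ms_subj = ss_subj / (n - 1)
    ms_rater = ss_rater / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    # ICC(2,1): absolute agreement, single measurement
    icc = (ms_subj - ms_error) / (
        ms_subj + (k - 1) * ms_error + k * (ms_rater - ms_error) / n
    )
    sd = scores.std(ddof=1)          # SD of all observations (includes rater error)
    sem = sd * np.sqrt(1 - icc)      # one common definition of the SEM
    return icc, sem


def n_per_group(delta_kpa: float, sd_kpa: float,
                z_alpha: float = 1.96, z_power: float = 0.8416) -> int:
    """Approximate per-group n for a two-group parallel design
    (two-sided alpha = 0.05, power = 0.80, normal approximation)."""
    return int(np.ceil(2 * ((z_alpha + z_power) * sd_kpa / delta_kpa) ** 2))


# Hypothetical example: 20 subjects x 5 raters, PPT in kPa
rng = np.random.default_rng(0)
true_ppt = rng.normal(400, 120, size=(20, 1))            # between-subject spread
readings = true_ppt + rng.normal(0, 80, size=(20, 5))    # rater/measurement error
icc, sem = icc2_and_sem(readings)
print(f"ICC(2,1) = {icc:.2f}, SEM = {sem:.0f} kPa")

# Per-group sample size to detect a 100 kPa group difference, using the observed
# SD (which already carries the measurement error) as the variability estimate
print(f"n per group = {n_per_group(delta_kpa=100, sd_kpa=readings.std(ddof=1))}")

Swapping the simulated readings for a real subjects-by-raters matrix reproduces the same calculation; the printed values here depend only on the hypothetical inputs above.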