Published in

Alzheimer's & Dementia: The Journal of the Alzheimer's Association (Wiley), 19(S5), 2023

DOI: 10.1002/alz.062373

Objective assessment of sleep parameters using multimodal AX3 data in older participants

This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable


Abstract

Background

Sleep disturbances are both risk factors for and symptoms of dementia. Current methods for assessing sleep disturbances rely largely on either polysomnography (PSG), which is costly and inconvenient, or on self‐ or caregiver reports, which are prone to measurement error. Low‐cost methods to monitor sleep disturbances longitudinally and at scale would be useful for assessing symptom development. Here, we develop deep learning models that use the multimodal signals (tri‐axial acceleration and temperature) recorded by the Axivity AX3 to accurately identify sleep and wake epochs and derive sleep parameters.

Method

Eighteen men and women (65‐80 y) participated in a sleep laboratory‐based study in which multiple devices for sleep monitoring were evaluated. PSG was recorded over a 10‐h period and scored according to established criteria in 30‐second epochs. Tri‐axial acceleration and temperature signals were captured with an Axivity AX3, at 100 Hz and 1 Hz respectively, throughout a 19‐h period comprising the 10 h of concurrent PSG recording and 9 h of wakefulness. We developed and evaluated a supervised deep learning algorithm to detect sleep and wake epochs and determine sleep parameters from the raw multimodal AX3 data. We validated our results against gold‐standard PSG measurements and compared our algorithm with the Biobank accelerometer analysis toolbox. Single‐modality (accelerometer or temperature) and multimodal (both signals) approaches were evaluated using 3‐fold cross‐validation.

Result

The proposed deep learning model outperformed baseline models, including the Biobank accelerometer analysis toolbox and conventional machine learning classifiers (Random Forest and Support Vector Machine), by up to 25%. Using multimodal data improved sleep/wake classification performance (by up to 18%) compared with either single modality. For the sleep parameters, our approach improved estimation accuracy by 11% on average compared with the Biobank accelerometer analysis toolbox.

Conclusion

In older adults without dementia, combining multimodal AX3 data with deep learning methods allows satisfactory quantification of sleep and wakefulness. This approach holds promise for monitoring sleep behaviour and deriving accurate sleep parameters objectively and longitudinally from a low‐cost wearable sensor. A limitation of the current study is that the participants were healthy older adults; future work will focus on people living with dementia.
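
The abstract does not disclose the network architecture, so the following is only a minimal sketch of what a multimodal sleep/wake classifier over 30‐second epochs could look like, written in PyTorch. The two input branches mirror the sampling rates given in Method (tri‐axial acceleration at 100 Hz, i.e. 3000 samples per epoch; temperature at 1 Hz, i.e. 30 samples per epoch); all layer sizes, the class name MultimodalSleepNet, and the data shapes are assumptions for illustration, not the authors' model.

import torch
import torch.nn as nn

class MultimodalSleepNet(nn.Module):
    """Illustrative two-branch classifier: one branch per AX3 modality."""
    def __init__(self):
        super().__init__()
        # Accelerometer branch: 3 axes at 100 Hz -> 3000 samples per 30-s epoch.
        self.acc_branch = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Temperature branch: 1 channel at 1 Hz -> 30 samples per 30-s epoch.
        self.temp_branch = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Fuse both modalities and emit sleep-vs-wake logits.
        self.head = nn.Linear(32 + 8, 2)

    def forward(self, acc, temp):
        a = self.acc_branch(acc).squeeze(-1)    # (batch, 32)
        t = self.temp_branch(temp).squeeze(-1)  # (batch, 8)
        return self.head(torch.cat([a, t], dim=1))

# Hypothetical batch of eight 30-s epochs.
acc = torch.randn(8, 3, 3000)  # 100 Hz * 30 s, tri-axial acceleration
temp = torch.randn(8, 1, 30)   # 1 Hz * 30 s, single temperature channel
logits = MultimodalSleepNet()(acc, temp)  # shape: (8, 2)

Dropping either branch gives a single‐modality variant analogous to the baselines the abstract compares against; the fusion of both branches corresponds to the multimodal setting to which the reported gain of up to 18% refers.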
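
Once per‐epoch sleep/wake labels are available, the sleep parameters referred to in Result can be summarised with standard actigraphy definitions. The helper below is a hypothetical sketch, not code from the paper; it assumes the label sequence covers the in‐bed interval and, as a simplification, counts all wake after sleep onset (including terminal wakefulness) as WASO.

import numpy as np

def sleep_parameters(labels, epoch_sec=30):
    """Summarise per-epoch labels (1 = sleep, 0 = wake) over the in-bed interval."""
    labels = np.asarray(labels)
    time_in_bed = len(labels) * epoch_sec / 60.0        # minutes in bed
    total_sleep_time = labels.sum() * epoch_sec / 60.0  # minutes asleep
    sleep_idx = np.flatnonzero(labels)
    if sleep_idx.size == 0:  # no sleep detected at all
        return {"TST": 0.0, "SOL": time_in_bed, "WASO": 0.0, "SE": 0.0}
    sol = sleep_idx[0] * epoch_sec / 60.0                          # sleep-onset latency
    waso = (labels[sleep_idx[0]:] == 0).sum() * epoch_sec / 60.0   # wake after sleep onset
    se = 100.0 * total_sleep_time / time_in_bed                    # sleep efficiency, %
    return {"TST": total_sleep_time, "SOL": sol, "WASO": waso, "SE": se}

# Example: 4 wake epochs, 10 sleep, 2 wake, 4 sleep (30 s each)
# -> TST 7.0 min, SOL 2.0 min, WASO 1.0 min, SE 70.0%
print(sleep_parameters([0]*4 + [1]*10 + [0]*2 + [1]*4))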