Abstract

Background: Tourette syndrome (TS) tics are typically quantified using "paper and pencil" rating scales that are susceptible to factors that adversely impact validity. Video-based methods to quantify tics more objectively have been developed but are challenged by their reliance on human raters and resource-intensive procedures. Computer vision approaches that automate the detection of atypical movements may be useful for tic quantification.

Objective: This proof-of-concept study applied a computer vision approach to train a supervised deep learning algorithm to detect eye tics, the most common tic type in patients with TS, in video.

Methods: Videos (N = 54) of 11 adolescent patients with TS were rigorously coded by trained human raters to identify 1.5-second clips depicting "eye tic events" (N = 1775) and "non-tic events" (N = 3680). Clips were encoded into three-dimensional facial landmarks. Supervised deep learning was applied to the processed data using random-split and disjoint-split regimens to simulate model validity under different conditions.

Results: The area under the receiver operating characteristic curve was 0.89 for the random-split regimen, indicating high accuracy in the algorithm's ability to classify eye tic vs. non-eye tic movements. The area under the curve was 0.74 for the disjoint-split regimen, suggesting that generalizability is more limited when the algorithm is trained on a small patient sample.

Conclusions: The algorithm successfully detected eye tics in unseen validation sets. Automated tic detection from video is a promising approach to tic quantification, with potential future utility in TS screening, diagnostics, and treatment outcome measurement.

© 2023 The Authors. Movement Disorders published by Wiley Periodicals LLC on behalf of the International Parkinson and Movement Disorder Society.
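The two validation regimens described in the Methods can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' code: the data structure and function names are assumed for illustration. A random split shuffles clips regardless of which patient they came from, so clips from one patient can appear in both training and validation; a disjoint (patient-wise) split holds out entire patients, which better simulates performance on unseen individuals.

```python
import random

def random_split(clips, test_frac=0.2, seed=0):
    """Random split: clips from the same patient may land in both sets."""
    rng = random.Random(seed)
    shuffled = clips[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]

def disjoint_split(clips, test_patients):
    """Disjoint split: held-out patients never appear in the training set."""
    train = [c for c in clips if c["patient"] not in test_patients]
    test = [c for c in clips if c["patient"] in test_patients]
    return train, test

# Hypothetical toy data: 4 patients, 5 clips each.
clips = [{"patient": p, "clip": i} for p in range(4) for i in range(5)]

train, test = disjoint_split(clips, test_patients={3})
# No patient contributes clips to both sets under the disjoint regimen.
assert {c["patient"] for c in train}.isdisjoint({c["patient"] for c in test})
```

The gap the abstract reports (0.89 vs. 0.74 area under the curve) is the expected direction for these regimens: the disjoint split removes patient-specific cues the model could otherwise exploit.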