
Published in

Public Library of Science, PLoS ONE, 19(3), p. e0300518, 2024

DOI: 10.1371/journal.pone.0300518

Interspeech 2022, 2022

DOI: 10.21437/interspeech.2022-10188


Automatic Detection of Expressed Emotion from Five-Minute Speech Samples: Challenges and Opportunities

This paper is made freely available by the publisher.

Full text: Download

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

Research into clinical applications of speech-based emotion recognition (SER) technologies has been increasing steadily over the past few years. One such potential application is the automatic recognition of expressed emotion (EE) components within family environments. Identifying EE is highly important, as its components have been linked with a range of adverse life events. Manual coding of EE requires time-consuming specialist training, amplifying the need for automated approaches. Herein we describe an automated machine learning approach for determining the degree of warmth, a key component of EE, from acoustic and textual natural language features. Our dataset of 52 recorded interviews is drawn from recordings, collected over 20 years ago, of a nationally representative birth cohort of British twin children, and was manually coded for EE by two researchers (inter-rater reliability 0.84–0.90). We demonstrate that the degree of warmth can be predicted with an F1-score of 64.7% despite working with audio recordings of highly variable quality. These highly promising results suggest that machine learning may be able to assist in the coding of EE in the near future.
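To make the reported metric concrete, the sketch below computes a class-weighted F1-score for a small, entirely hypothetical three-way "warmth" rating task. The labels, the example predictions, and the weighted averaging choice are illustrative assumptions; the paper's actual label scheme, classifier, and averaging method are not specified here.

```python
# Illustrative only: weighted F1 for a hypothetical low/moderate/high
# warmth classification. Not the authors' pipeline or data.
from collections import Counter

def f1_per_class(y_true, y_pred, label):
    """Harmonic mean of precision and recall for one class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def weighted_f1(y_true, y_pred):
    """Average per-class F1 weighted by each class's support in y_true."""
    counts = Counter(y_true)
    n = len(y_true)
    return sum(f1_per_class(y_true, y_pred, lbl) * c / n
               for lbl, c in counts.items())

# Hypothetical ratings for six interviews.
y_true = ["low", "high", "moderate", "high", "low", "moderate"]
y_pred = ["low", "high", "moderate", "low", "low", "high"]
print(round(weighted_f1(y_true, y_pred), 3))  # → 0.656
```

A weighted average is one common choice when classes are imbalanced, as warmth ratings in family-interview data plausibly are; macro-averaging (unweighted mean over classes) would be the natural alternative.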