Published in

Elsevier, Speech Communication, 50(5), pp. 416–433, 2008

DOI: 10.1016/j.specom.2008.01.001

Influence of contextual information in emotion annotation for spoken dialogue systems

Journal article published in 2008 by Zoraida Callejas and Ramón López-Cózar
This paper is available in a repository.


Abstract

In this paper, we study the impact of considering contextual information for the annotation of emotions. Specifically, we propose including the history of the user–system interaction and the neutral speaking style of each user. We have developed a new method to automatically incorporate both sources of information, making use of novel techniques for acoustic normalization and dialogue context annotation. We have carried out experiments with a corpus extracted from real human interactions with a spoken dialogue system. Results show that the performance of both non-expert human annotators and machine-learned classifiers is affected by contextual information. The proposed method allows the annotation of more non-neutral emotions and yields values closer to the maximum agreement rates for non-expert human annotation. Moreover, automatic classification accuracy improves by 29.57% compared with the classical approach based only on acoustic features.
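The abstract mentions normalizing acoustic features against each user's neutral speaking style. The paper's exact normalization is not reproduced here; the following is a minimal sketch, assuming a simple per-speaker z-score scheme in which each feature is centred and scaled by statistics estimated from that user's neutral utterances (the function names and the two-feature layout, pitch and energy, are hypothetical).

```python
import statistics

def neutral_baseline(neutral_features):
    """Per-dimension (mean, std) over a user's neutral utterances.

    `neutral_features` is a list of feature vectors, one per utterance.
    A zero standard deviation falls back to 1.0 to avoid division by zero.
    """
    dims = list(zip(*neutral_features))
    return [(statistics.mean(d), statistics.pstdev(d) or 1.0) for d in dims]

def normalize(features, baseline):
    """Z-score an utterance's features against the user's neutral baseline."""
    return [(x - mu) / sd for x, (mu, sd) in zip(features, baseline)]

# Example: [pitch (Hz), energy] for three neutral utterances of one user.
neutral = [[120.0, 60.0], [130.0, 62.0], [125.0, 61.0]]
base = neutral_baseline(neutral)

# A test utterance with raised pitch and energy relative to this user's
# neutral style yields large positive normalized values, regardless of
# the speaker's absolute pitch range.
print(normalize([150.0, 70.0], base))
```

The point of such a scheme is that the same raw pitch value can be neutral for one speaker and emotionally marked for another; normalizing against a per-user neutral baseline makes the features comparable across speakers before classification.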