Published in

2013 Humaine Association Conference on Affective Computing and Intelligent Interaction

DOI: 10.1109/acii.2013.74

Multimodal Engagement Classification for Affective Cinema


Abstract

This paper describes a multimodal approach to detecting viewers' engagement through psycho-physiological affective signals. We investigate the individual contributions of the different modalities and report experimental results obtained with several fusion strategies, in both per-clip and per-subject cross-validation settings. A sequence of clips from a short movie was shown to 15 participants, from whom we collected per-clip engagement self-assessments. Cues of the users' affective states were collected by means of (i) galvanic skin response (GSR), (ii) automatic facial tracking, and (iii) electroencephalogram (EEG) signals. The main findings of this study can be summarized as follows: (i) each individual modality significantly encodes the viewers' level of engagement in response to the movie clips, (ii) the GSR and EEG signals provide comparable contributions, and (iii) the best performance is obtained when the three modalities are used together.
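To make the evaluation setup concrete, the sketch below illustrates feature-level (early) fusion of the three modalities combined with per-subject cross-validation. The feature dimensionalities, the synthetic data, and the linear SVM classifier are assumptions made for this example only; they are not taken from the paper.

```python
# Minimal sketch: early fusion of GSR, facial-tracking, and EEG features
# with per-subject (leave-one-subject-out) cross-validation.
# All feature arrays and labels below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_subjects, n_clips = 15, 10          # 15 participants, per-clip labels
n_samples = n_subjects * n_clips

# Placeholder per-clip feature vectors for each modality (assumed sizes).
gsr_feats  = rng.normal(size=(n_samples, 4))    # e.g. GSR statistics
face_feats = rng.normal(size=(n_samples, 12))   # e.g. facial-tracking features
eeg_feats  = rng.normal(size=(n_samples, 32))   # e.g. EEG band powers

# Binary engagement labels (high/low) and subject ids used for grouping.
labels   = rng.integers(0, 2, size=n_samples)
subjects = np.repeat(np.arange(n_subjects), n_clips)

# Early fusion: concatenate the modality features into one vector per clip.
fused = np.hstack([gsr_feats, face_feats, eeg_feats])

# Per-subject cross-validation: each fold holds out one participant.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, fused, labels, groups=subjects,
                         cv=LeaveOneGroupOut())
print(f"Per-subject CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

A per-clip setting would instead group folds by clip, so that the classifier is tested on clips it has never seen, while a decision-level fusion variant would train one classifier per modality and combine their outputs.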