Published in

Springer, Journal on Multimodal User Interfaces, 8(2), pp. 151-160, 2014

DOI: 10.1007/s12193-013-0142-z

Predicting online lecture ratings based on gesturing and vocal behavior

This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving restricted
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Nonverbal behavior plays an important role in any human-human interaction, and teaching, an inherently social activity, is no exception. So far, the effect of nonverbal behavioral cues accompanying lecture delivery has been investigated only for traditional ex-cathedra lectures, where students and teachers are co-located. However, it is increasingly common to watch lectures online, and in this new setting the effect of nonverbal communication is still unclear. This article addresses the problem through experiments performed on the lectures of a popular web repository ("Videolectures"). The results show that automatically extracted nonverbal behavioral cues (prosody, voice quality, and gesturing activity) predict the ratings that "Videolectures" users assign to the presentations.