Published in

2015 IEEE International Conference on Computer Vision Workshop (ICCVW)

DOI: 10.1109/ICCVW.2015.96


Tracking the Active Speaker Based on a Joint Audio-Visual Observation Model

Conference paper published in 2015 by Israel Dejene Gebru, Sileye Ba, Georgios Evangelidis, Radu Horaud
This paper is available in a repository.


Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Any multi-party conversation system benefits from speaker diarization, that is, the assignment of speech signals among the participants. Here we cast the diarization problem into a tracking formulation whereby the active speaker is detected and tracked over time. A probabilistic tracker exploits the on-image (spatial) coincidence of visual and auditory observations and infers a single latent variable that represents the identity of the active speaker. Both visual and auditory observations are explained by a recently proposed weighted-data mixture model, while several options for the speaking-turn dynamics are handled by a multi-case transition model. The modules that translate raw audio and visual data into on-image observations are also described in detail. The performance of the proposed tracker is tested on challenging datasets available from recent contributions, which are used as baselines for comparison.
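The tracking formulation described in the abstract can be pictured as a discrete Bayes filter over the active-speaker identity: a prediction step driven by a speaking-turn transition model, followed by an update step weighted by the spatial coincidence of audio and visual observations in the image plane. The sketch below is a simplified, hypothetical illustration of that idea only; the function names, the Gaussian coincidence likelihood, and the uniform hand-over transition model are all assumptions for illustration and do not reproduce the paper's weighted-data mixture model or multi-case transition model.

import numpy as np

def coincidence_likelihood(face_positions, sound_position, sigma=30.0):
    # Likelihood that each tracked face emitted the localized sound,
    # scored by Euclidean distance in the image plane (Gaussian kernel).
    # The kernel form and sigma are illustrative assumptions.
    d2 = np.sum((face_positions - sound_position) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def transition_matrix(n_speakers, p_stay=0.9):
    # Toy speaking-turn dynamics: the active speaker keeps the floor
    # with probability p_stay, otherwise hands over uniformly.
    p_switch = (1.0 - p_stay) / (n_speakers - 1)
    T = np.full((n_speakers, n_speakers), p_switch)
    np.fill_diagonal(T, p_stay)
    return T

def track_active_speaker(face_tracks, sound_locations):
    # face_tracks: (T, N, 2) on-image positions of N faces over T frames.
    # sound_locations: (T, 2) sound-source positions mapped onto the image.
    # Returns the MAP active-speaker index at each frame.
    n_frames, n_speakers, _ = face_tracks.shape
    A = transition_matrix(n_speakers)
    belief = np.full(n_speakers, 1.0 / n_speakers)  # uniform prior
    estimates = []
    for t in range(n_frames):
        belief = A.T @ belief                        # predict: turn dynamics
        belief *= coincidence_likelihood(face_tracks[t], sound_locations[t])
        belief /= belief.sum()                       # update and normalize
        estimates.append(int(np.argmax(belief)))
    return estimates

In this toy version, the single latent variable is the speaker index, and the filter fuses the two modalities purely through their on-image coincidence, mirroring the structure (though not the specifics) of the model the abstract describes.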