Published in

2006 International Conference on Image Processing

DOI: 10.1109/icip.2006.312923

Extracting Static Hand Gestures in Dynamic Context

Proceedings article published in 2006 by Thomas Burger, Alexandre Benoit and Alice Caplier
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Cued speech is a specific visual coding that complements oral language lip-reading by adding static hand gestures (a static gesture can be presented on a single photograph, as it contains no motion). By nature, cued speech seems simple enough to be automatically recognizable. Unfortunately, despite its static definition, fluent cued speech has an important dynamic dimension due to co-articulation. Hence, reducing a continuous cued speech coding stream to the corresponding discrete chain of static gestures is a real issue for automatic cued speech processing. We present here how a biological motion analysis method has been combined with a fusion strategy based on belief theory in order to perform such a reduction.
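The abstract does not detail the fusion strategy, but "belief theory" generally refers to Dempster-Shafer evidence theory, in which independent sources of evidence are merged with Dempster's rule of combination. The sketch below is a generic, minimal implementation of that rule; the mass values and the gesture labels `g1`/`g2` (and the two "cues" used as sources) are purely hypothetical, not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule.

    Each mass function is a dict mapping a focal element
    (a frozenset of hypotheses) to its mass; masses sum to 1.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            # Compatible evidence: assign the product mass
            # to the intersection of the two focal elements.
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            # Disjoint focal elements contribute to the conflict mass.
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Normalize by (1 - conflict) so the result sums to 1 again.
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}

# Hypothetical example: two independent cues give evidence over
# two candidate gestures g1 and g2 ({"g1", "g2"} is ignorance).
cue_a = {frozenset({"g1"}): 0.6, frozenset({"g1", "g2"}): 0.4}
cue_b = {frozenset({"g2"}): 0.3, frozenset({"g1", "g2"}): 0.7}
fused = dempster_combine(cue_a, cue_b)
```

After combination, the fused masses again sum to 1, and the gesture supported by both cues accumulates the most belief.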