Published in

Springer, Lecture Notes in Computer Science, p. 311-322, 2011

DOI: 10.1007/978-3-642-25346-1_28

Sparse Temporal Representations for Facial Expression Recognition

Proceedings article published in 2011 by Sien W. Chew, R. Rana ORCID, Patrick Lucey, Simon Lucey, Sridha Sridharan
This paper is available in a repository.

Preprint: archiving forbidden
Postprint: archiving restricted
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

In automatic facial expression recognition, a growing number of techniques proposed in the literature exploit the temporal nature of facial expressions. Because facial expressions evolve over time, it is important for a classifier to be able to model their dynamics. We establish that sparse representation (SR) classifiers are a suitable candidate for this purpose, and propose a framework that efficiently incorporates expression dynamics into the SR formulation. We additionally show that, for the SR method to be applied effectively, a certain threshold on image dimensionality must be enforced (unlike in face recognition problems). Third, we determine that recognition rates may be significantly influenced by the size of the projection matrix Φ. To demonstrate these findings, a battery of experiments was conducted on the CK+ dataset for the recognition of the seven prototypic expressions − anger, contempt, disgust, fear, happiness, sadness and surprise − and comparisons were made between the proposed temporal-SR framework, the static-SR framework, and a state-of-the-art support vector machine.
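The abstract does not reproduce the paper's temporal-SR formulation, but the underlying sparse-representation classification idea can be sketched. The snippet below is a minimal, hedged illustration, not the authors' method: it substitutes greedy orthogonal matching pursuit for the l1-minimization usually solved in SR classifiers, and classifies a test vector by which class's dictionary atoms yield the smallest reconstruction residual. All function and variable names here are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: a sparse approximation of the
    l1 program used in SR classification. A has unit-norm columns (atoms),
    y is the test vector, k is the sparsity budget."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the selected atoms by least squares.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

def src_classify(A, labels, y, k=5):
    """Assign y to the class whose atoms best reconstruct it:
    zero out coefficients outside each class and compare residuals."""
    x = omp(A, y, k)
    best_class, best_res = None, np.inf
    for c in set(labels):
        mask = np.array([lab == c for lab in labels])
        xc = np.where(mask, x, 0.0)
        res = np.linalg.norm(y - A @ xc)
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```

In a static-SR setting each dictionary column would be one (projected) face image; the paper's temporal extension instead builds the representation from image sequences so that expression dynamics enter the dictionary.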