Springer Verlag, Lecture Notes in Computer Science, p. 646-654
DOI: 10.1007/978-3-540-45179-2_79
A new direction in improving modern dialogue systems is to make human-machine dialogue more similar to human-human dialogue. This can be done by adding further input modalities. One such modality for automatic dialogue systems is the facial expression of the human user. An angry face, for example, may indicate a common problem in human-machine dialogue: the system repeatedly misunderstanding the user. A helpless face may indicate a naive user who does not know how to use the system and should be guided through the dialogue step by step. This paper describes the recognition of facial expressions in frontal images using eigenspaces. For the classification of facial expressions, rather than using the whole face image, we classify regions that do not differ between subjects and are at the same time meaningful for facial expressions. Important regions change when the same face is projected onto eigenspaces trained with examples of different facial expressions. Averaging different faces showing different facial expressions yields a face mask that fades out unnecessary or misleading regions and emphasizes regions that change between facial expressions. Using this face mask for the training and classification of neutral and angry facial expressions, we achieved an improvement of up to 5 percentage points. The proposed method may also improve other classification problems that use eigenspace methods.
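The following minimal sketch illustrates the kind of eigenspace (eigenface-style) classification the abstract describes, assuming grayscale frontal faces flattened to vectors. The mask construction from class means and the reconstruction-error decision rule are assumptions for illustration, not the authors' exact procedure; function names such as build_face_mask are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA


def build_face_mask(images_by_expression):
    """Weight pixels by how strongly their class means differ across
    expressions (an assumed reading of the averaging step in the paper).

    images_by_expression: dict mapping expression label -> (n_samples, n_pixels) array.
    Returns a weight vector in [0, 1] of length n_pixels.
    """
    class_means = np.stack([imgs.mean(axis=0)
                            for imgs in images_by_expression.values()])
    overall_mean = class_means.mean(axis=0)
    variation = np.abs(class_means - overall_mean).mean(axis=0)
    return variation / (variation.max() + 1e-12)


def train_eigenspaces(images_by_expression, mask, n_components=20):
    """Fit one PCA eigenspace per expression on mask-weighted images."""
    models = {}
    for label, imgs in images_by_expression.items():
        k = min(n_components, imgs.shape[0] - 1)  # PCA needs k < n_samples
        models[label] = PCA(n_components=k).fit(imgs * mask)
    return models


def classify(image, models, mask):
    """Assign the expression whose eigenspace reconstructs the masked
    image with the smallest error (distance-from-feature-space rule)."""
    x = (image * mask).reshape(1, -1)
    errors = {}
    for label, pca in models.items():
        reconstruction = pca.inverse_transform(pca.transform(x))
        errors[label] = np.linalg.norm(x - reconstruction)
    return min(errors, key=errors.get)


# Example usage with random stand-in data (two expression classes):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = {"neutral": rng.random((40, 64 * 64)),
            "angry":   rng.random((40, 64 * 64))}
    mask = build_face_mask(data)
    models = train_eigenspaces(data, mask)
    print(classify(rng.random(64 * 64), models, mask))
```

In this sketch the mask plays the role described in the abstract: pixels that vary little between expressions receive low weight and contribute less to the eigenspace, while expression-relevant regions dominate both training and classification.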