Published in

MDPI, Information, 14(2), 83, 2023

DOI: 10.3390/info14020083


Mixing Global and Local Features for Long-Tailed Expression Recognition

Journal article published in 2023 by Jiaxiong Zhou, Jian Li, Yubo Yan, Lei Wu, Hao Xu
This paper is made freely available by the publisher.


Abstract

Large-scale facial expression datasets are primarily composed of real-world facial expressions. Expression occlusion and large-angle (non-frontal) faces are two important problems that reduce expression-recognition accuracy. Moreover, because facial expression data collected in natural scenes commonly follow a long-tailed distribution, trained models recognize the majority classes well but the minority classes with low accuracy. To improve the robustness and accuracy of expression-recognition networks in uncontrolled environments, this paper proposes an efficient network structure based on an attention mechanism that fuses global and local features (AM-FGL). We use a channel-spatial model and local-feature convolutional neural networks to perceive the global and local features of the face, respectively. Because real-world expression datasets follow a long-tailed distribution in which neutral and happy expressions dominate the head classes, a trained model exhibits low recognition accuracy for tail expressions such as fear and disgust. CutMix is a data-augmentation method originally proposed in other fields; building on the CutMix idea, we propose a simple and effective data-balancing method (BC-EDB). Its key idea is to paste the key pixels (around the eyes, mouth, and nose) of tail-class samples, which reduces overfitting. Our method focuses on recognizing tail expressions, occluded expressions, and large-angle faces, and it achieves state-of-the-art results on Occlusion-RAF-DB, 30° Pose-RAF-DB, and 45° Pose-RAF-DB, with accuracies of 86.96%, 89.74%, and 88.53%, respectively.
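The abstract's global/local fusion can be pictured, very roughly, as a channel-spatial attention gate over the global feature map followed by channel-wise concatenation with local-crop features. The sketch below is an assumption-laden illustration (a CBAM-like gate in NumPy, not the paper's actual AM-FGL module); the function names and the concatenation fusion are choices made here for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(feat):
    """Illustrative channel + spatial attention over a (C, H, W) feature
    map. First each channel is gated by its global average, then each
    spatial location is gated by its channel average. The paper's exact
    channel-spatial model may differ; this only shows the general idea."""
    ch_gate = sigmoid(feat.mean(axis=(1, 2)))    # (C,)  channel weights
    feat = feat * ch_gate[:, None, None]
    sp_gate = sigmoid(feat.mean(axis=0))         # (H, W) spatial weights
    return feat * sp_gate[None, :, :]

def fuse_global_local(global_feat, local_feats):
    """Fuse the attended global feature with local-region features by
    concatenating along the channel axis (one common fusion choice)."""
    g = channel_spatial_attention(global_feat)
    return np.concatenate([g] + list(local_feats), axis=0)

# Example: one 4-channel global map plus two 2-channel local-crop maps.
g = np.random.rand(4, 7, 7)
locals_ = [np.random.rand(2, 7, 7), np.random.rand(2, 7, 7)]
fused = fuse_global_local(g, locals_)   # shape (8, 7, 7)
```

A fused map like this would then feed the classification head; attention lets occluded or rotated regions contribute less to the prediction.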
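The BC-EDB idea described above, pasting key facial pixels in CutMix style, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the region boxes stand in for landmark-derived eye, mouth, and nose areas, and the mixing weight follows CutMix's area-ratio rule.

```python
import numpy as np

def paste_key_regions(base_img, tail_img, regions):
    """CutMix-style balancing sketch: copy rectangular patches (e.g.
    around the eyes, mouth, and nose) from a tail-class face onto a
    base face of the same size.

    regions: list of (y0, y1, x0, x1) boxes, assumed to come from
    facial landmarks. Returns the mixed image and lam, the fraction
    of pixels taken from the tail image (CutMix's label weight).
    """
    mixed = base_img.copy()
    h, w = base_img.shape[:2]
    pasted = 0
    for y0, y1, x0, x1 in regions:
        mixed[y0:y1, x0:x1] = tail_img[y0:y1, x0:x1]
        pasted += (y1 - y0) * (x1 - x0)
    lam = pasted / (h * w)   # area ratio contributed by the tail sample
    return mixed, lam

# Example with hypothetical landmark boxes on a 100x100 face:
base = np.zeros((100, 100, 3))       # majority-class sample
tail = np.ones((100, 100, 3))        # minority-class sample (e.g. fear)
regions = [(20, 35, 15, 85),         # eye band
           (65, 85, 30, 70),         # mouth area
           (35, 60, 40, 60)]         # nose area
mixed, lam = paste_key_regions(base, tail, regions)
```

Mixing labels in proportion to `lam` (as in CutMix) lets synthesized samples carry partial tail-class supervision, which is one way such pasting can counter the long-tailed class imbalance.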