Abstract

Facial emotion recognition (FER) is a challenging task owing to the variability of facial expressions, occlusions, illumination changes, pose variations, cultural and gender differences, and other factors that degrade the quality of facial images. In this paper, an anti-aliased deep convolutional network (AA-DCN) model is developed and proposed to explore how anti-aliasing can improve the fidelity of facial emotion recognition. The AA-DCN model detects eight distinct emotions from image data, whose features are extracted using the proposed model and several classical deep learning algorithms. The AA-DCN model was evaluated on three datasets: on the Extended Cohn-Kanade (CK+) database it achieved an accuracy of 99.26% in 5 min 25 s; on the Japanese Female Facial Expression (JAFFE) database it obtained 98% accuracy in 8 min 13 s; and on one of the most challenging FER datasets, the Real-world Affective Faces (RAF) database, it reached 82% with a low training time of 12 min 2 s. The experimental results demonstrate that the anti-aliased DCN significantly improves emotion recognition while mitigating the aliasing artifacts caused by the down-sampling layers.
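The paper's exact anti-aliasing layer is not reproduced in this record; the following is a minimal sketch of the general blur-before-subsample idea that anti-aliased CNNs build on (low-pass filtering applied before each down-sampling step, in the spirit of Zhang's BlurPool). It assumes PyTorch; the class name BlurPool2d, the 3x3 binomial kernel, and the surrounding example block are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BlurPool2d(nn.Module):
        """Anti-aliased down-sampling: depthwise low-pass blur, then subsampling.

        Replacing a plain strided pooling/convolution with blur-then-subsample
        attenuates high frequencies before the sampling rate drops, which is
        what reduces the aliasing artifacts the abstract attributes to the
        down-sampling layers. (Illustrative sketch, not the paper's code.)
        """

        def __init__(self, channels: int, stride: int = 2):
            super().__init__()
            self.stride = stride
            self.channels = channels
            # 3x3 binomial (approximately Gaussian) low-pass kernel, normalized.
            k = torch.tensor([1.0, 2.0, 1.0])
            kernel = torch.outer(k, k)
            kernel = (kernel / kernel.sum()).view(1, 1, 3, 3)
            # One copy of the filter per channel for depthwise filtering.
            self.register_buffer("kernel", kernel.repeat(channels, 1, 1, 1))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = F.pad(x, (1, 1, 1, 1), mode="reflect")
            return F.conv2d(x, self.kernel, stride=self.stride,
                            groups=self.channels)

    # Illustrative usage: keep max-pooling dense (stride 1) and let BlurPool2d
    # perform the anti-aliased stride-2 reduction.
    block = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2, stride=1),
        BlurPool2d(channels=32, stride=2),
    )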

Details

Title
Improved facial emotion recognition model based on a novel deep convolutional structure
Author
Elsheikh, Reham A. (1); Mohamed, M. A. (1); Abou-Taleb, Ahmed Mohamed (1); Ata, Mohamed Maher (2)

(1) Mansoura University, Department of Electronics and Communications Engineering, Faculty of Engineering, Mansoura, Egypt (GRID:grid.10251.37) (ISNI:0000 0001 0342 6662)
(2) Zewail City of Science and Technology, School of Computational Sciences and Artificial Intelligence (CSAI), 6th of October City, Egypt (GRID:grid.440881.1) (ISNI:0000 0004 0576 5483)
Article number
29050
Publication year
2024
Publication date
2024
Publisher
Nature Publishing Group
e-ISSN
2045-2322
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3132206747
Copyright
© The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.