Abstract
This research examines facial expressions as social cues on online platforms, focusing on online learning and remote work. Our high-accuracy emotion recognition framework is designed for post-session evaluation, informing teaching strategies and identifying students who need support, without requiring real-time monitoring. By employing a fusion of data-, image-, and feature-level analysis, complemented by multi-classifier systems, this study capitalizes on the strengths of Siamese networks to achieve a refined understanding of emotion recognition. The investigation spans various age groups and ethnicities, employing multiple datasets such as LIRIS-CSE, Cohn-Kanade, and JAFFE, alongside the authors' own datasets for children and teens. This examination underlines the role of information fusion in enriching communication and collaboration within digital interfaces. The research demonstrates the use of advanced techniques for interpreting facial cues by merging Siamese networks with pretrained models such as VGG19 and Inception ResNet V2. The paper compares Siamese networks with other architectures for remote work and play, and argues that Siamese networks are more flexible: their input branches can use convolutional and recurrent neural networks, whereas architectures such as VGG19 and Inception ResNet V2 use a single neural network. The SCNN-IRV2 model demonstrated test accuracies of 96–99% for single-emotion models and 84–96% for multi-emotion models, an improvement of 1.05–10.13% over the Inception ResNet V2 CNN architecture. Similarly, the SCNN-VGG19 model achieved test accuracies of 95–99% for single-emotion models and 75–94% for multi-emotion models, surpassing the VGG19 CNN architecture by 207–422%.
These findings highlight the role of advanced fusion techniques and thoughtful design in improving online education and remote work, fostering progress in emotional data analysis and human-computer interaction.
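The defining property of the Siamese architecture described above is that both input branches share the same weights, so two face images are mapped into a common embedding space and compared by distance. The following is a minimal NumPy sketch of that idea, not the authors' model: the single dense layer stands in for a pretrained backbone (VGG19 or Inception ResNet V2 in the paper), and all names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared-weight embedding branch: one dense layer with ReLU.
# In the paper's setup each branch would be a full pretrained backbone;
# this toy stand-in maps a 64-dim "flattened image" to an 8-dim embedding.
W = rng.normal(size=(64, 8))  # shared weights used by BOTH branches

def embed(x, W):
    """Map a flattened face image to an embedding via the shared branch."""
    return np.maximum(x @ W, 0.0)  # ReLU activation

def siamese_distance(x1, x2, W):
    """Euclidean distance between the two branch embeddings.

    Both branches apply the SAME weights W -- the defining Siamese
    property -- so the distance is symmetric in its inputs.
    """
    return np.linalg.norm(embed(x1, W) - embed(x2, W))

# Two stand-in "images" (flattened 8x8 grayscale faces).
a = rng.normal(size=64)
b = rng.normal(size=64)

print(siamese_distance(a, a, W))  # identical inputs -> distance 0.0
print(siamese_distance(a, b, W) == siamese_distance(b, a, W))  # symmetric
```

In a trained system, a small distance would indicate that two expressions carry the same emotion; replacing the toy layer with a frozen pretrained backbone is what yields the SCNN-VGG19 and SCNN-IRV2 variants compared in the abstract.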
Details
1 Symbiosis International (Deemed University), Symbiosis Centre for Applied Artificial Intelligence, Pune, India (GRID:grid.444681.b) (ISNI:0000 0004 0503 4808)
2 Symbiosis International (Deemed University), Symbiosis Centre for Applied Artificial Intelligence, Pune, India (GRID:grid.444681.b) (ISNI:0000 0004 0503 4808); Symbiosis Institute of Technology, Pune Campus, Symbiosis International (Deemed University), Pune, India (GRID:grid.444681.b) (ISNI:0000 0004 0503 4808)
3 Symbiosis Institute of Technology, Pune Campus, Symbiosis International (Deemed University), Pune, India (GRID:grid.444681.b) (ISNI:0000 0004 0503 4808)
4 AI Consultant, Pune, India (GRID:grid.444681.b)
5 Swinburne University of Technology, School of Engineering, Hawthorn, Australia (GRID:grid.1027.4) (ISNI:0000 0004 0409 2862)