Abstract

Gestures have long been used for nonverbal communication, and human–computer interaction (HCI) via gestures is becoming increasingly common in the modern era. To achieve higher recognition rates, traditional interfaces rely on various devices, such as gloves, physical controllers, and markers. This study provides a new markerless technique for capturing gestures without any such barriers or expensive hardware. In this paper, dynamic gestures are first converted into frames. Noise is removed, and the intensity is adjusted to prepare the frames for feature extraction. The hand is first detected in the images, and the skeleton is computed through mathematical computations. From the skeleton, the features are extracted; these features include the joint color cloud, neural gas, and directional active model. After that, the features are optimized, and a selective feature set is passed through a recurrent neural network (RNN) classifier to obtain classification results with higher accuracy. The proposed model is experimentally assessed and trained over three datasets: HaGRI, Egogesture, and Jester. The experimental results on all three datasets show improved classification performance: the proposed system achieved an accuracy of 92.57% on HaGRI, 91.86% on Egogesture, and 91.57% on the Jester dataset. Also, to check the model's reliability, the proposed method was tested on the WLASL dataset, attaining 90.43% accuracy. This paper also includes a comparison with other state-of-the-art methods to benchmark our model against standard recognition approaches. Our model attains a higher accuracy rate with a markerless approach, saving money and time when classifying gestures for better interaction.
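The abstract describes a pipeline in which fused per-frame features (joint color cloud, neural gas, directional active model) are passed to an RNN classifier. The following minimal Python sketch illustrates only the final stage of such a pipeline, assuming PyTorch and a GRU-based RNN; the feature dimensionality, hidden size, and number of gesture classes are illustrative assumptions and are not taken from the paper.

# Minimal sketch of an RNN gesture classifier over per-frame fused features.
# Assumptions: feature_dim, hidden_size, and num_classes are placeholders.
import torch
import torch.nn as nn

class GestureRNN(nn.Module):
    """Classifies a sequence of per-frame feature vectors into a gesture class."""
    def __init__(self, feature_dim=128, hidden_size=256, num_classes=18):
        super().__init__()
        # The GRU consumes one fused feature vector per video frame.
        self.rnn = nn.GRU(feature_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, num_frames, feature_dim)
        _, h_n = self.rnn(x)           # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])      # logits: (batch, num_classes)

if __name__ == "__main__":
    # Toy input: 4 clips of 30 frames, each frame a 128-dimensional fused feature vector.
    model = GestureRNN()
    dummy = torch.randn(4, 30, 128)
    print(model(dummy).shape)          # torch.Size([4, 18])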

Details

Title
Smart Home Automation-Based Hand Gesture Recognition Using Feature Fusion and Recurrent Neural Network
Author
Bayan Ibrahimm Alabdullah 1; Hira Ansar 2; Naif Al Mudawi 3; Abdulwahab Alazeb 3; Abdullah Alshahrani 4; Saud S. Alotaibi 5; Ahmad Jalal 2

1 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia; [email protected]
2 Department of Computer Science, Air University, E-9, Islamabad 44000, Pakistan; [email protected]
3 Department of Computer Science, College of Computer Science and Information System, Najran University, Najran 55461, Saudi Arabia; [email protected]
4 Department of Computer Science and Artificial Intelligence, College of Computer Science and Engineering, University of Jeddah, Jeddah 21589, Saudi Arabia; [email protected]
5 Information Systems Department, Umm Al-Qura University, Makkah 24382, Saudi Arabia; [email protected]
First page
7523
Publication year
2023
Publication date
2023
Publisher
MDPI AG
e-ISSN
1424-8220
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2862730254
Copyright
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.