Abstract

Hand gesture recognition has recently become an essential component of human-computer interaction (HCI). Detecting and interpreting hand gestures is an important problem, driven by the desire to make communication between humans and computers or other devices natural, without relying on wires, mice, keyboards, and similar peripherals. Such recognition enables computers to capture and understand hand motions. Hand gestures are an important form of nonverbal communication, with uses ranging from medical applications and communication among people who are hearing impaired to robot control. Given the importance of hand gesture recognition applications and current technological progress, this research focuses on the most critical stage of hand gesture recognition: detecting and identifying the hand gesture, that is, segmenting the image to isolate the hand before passing it to the feature extraction and classification stages. Six commonly used image segmentation methods were tested on a set of American Sign Language images under a variety of lighting conditions. Compared with the clustering and Otsu methods, the best segmentation accuracy was obtained using Canny edge detection and HSV color-space segmentation.
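
As an illustration only, and not the authors' implementation, the sketch below shows how HSV color-space thresholding and Canny edge detection (here via OpenCV) can be combined to segment a hand region before feature extraction. The skin-tone HSV bounds, Canny thresholds, and file names are assumed values that would need tuning to the lighting conditions examined in the study.

# Minimal sketch: hand segmentation via HSV thresholding and Canny edges (OpenCV).
# The HSV skin-tone bounds and Canny thresholds below are illustrative assumptions,
# not values reported in the paper, and typically require tuning per lighting setup.
import cv2
import numpy as np

def segment_hand(bgr_image: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return a binary skin mask (HSV thresholding) and an edge map (Canny)."""
    # 1) HSV color-space segmentation: keep pixels within an assumed skin-tone range.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower_skin = np.array([0, 30, 60], dtype=np.uint8)     # assumed lower bound
    upper_skin = np.array([25, 150, 255], dtype=np.uint8)  # assumed upper bound
    skin_mask = cv2.inRange(hsv, lower_skin, upper_skin)

    # Clean up the mask with simple morphology before edge detection.
    kernel = np.ones((5, 5), np.uint8)
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_OPEN, kernel)
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_CLOSE, kernel)

    # 2) Canny edge detection on the masked grayscale image to outline the hand.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.bitwise_and(gray, gray, mask=skin_mask)
    edges = cv2.Canny(gray, 50, 150)  # assumed hysteresis thresholds

    return skin_mask, edges

if __name__ == "__main__":
    image = cv2.imread("asl_sample.jpg")  # hypothetical input image
    mask, edges = segment_hand(image)
    cv2.imwrite("hand_mask.png", mask)
    cv2.imwrite("hand_edges.png", edges)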

Details

Title
Detecting Hand Gestures Using Machine Learning Techniques
Author
Fadel, Noor; Abdul Kareem, Emad I
Pages
957-965
Publication year
2022
Publication date
Dec 2022
Publisher
International Information and Engineering Technology Association (IIETA)
ISSN
1633-1311
e-ISSN
2116-7125
Source type
Scholarly Journal
Language of publication
French
ProQuest document ID
2803914787
Copyright
© 2022. This work is published under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.