Abstract

A variety of algorithms allow gesture recognition in video sequences. Such algorithms are of interest to hearing-impaired people, since they offer a great degree of self-sufficiency in communicating intent to non-sign-language speakers without the need for interpreters. The current state of the art in this domain is capable of either real-time recognition of sign language in low-resolution videos or non-real-time recognition in high-resolution videos. This paper proposes a novel approach to real-time recognition of fingerspelling alphabet letters of American Sign Language (ASL) in ultra-high-resolution (UHD) video sequences. The proposed approach is based on adaptive Laplacian of Gaussian (LoG) filtering with local extrema detection using the Features from Accelerated Segment Test (FAST) algorithm, with the detected features classified by a Convolutional Neural Network (CNN). The recognition rate of the algorithm was verified on real-life data.
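
The abstract outlines a feature-extraction pipeline of LoG filtering followed by FAST keypoint detection. The following is a minimal sketch of that pipeline using OpenCV in Python; it is not the authors' implementation. The adaptive parameter selection described in the paper is not reproduced, and the Gaussian sigma, kernel size, and FAST threshold below are illustrative assumptions, as is the file name. The CNN classification stage is omitted.

```python
import cv2

# Load one video frame as a grayscale image (file name is a placeholder).
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Laplacian of Gaussian: Gaussian smoothing followed by the Laplacian operator.
# Sigma and kernel size are illustrative, not the paper's adaptive values.
blurred = cv2.GaussianBlur(frame, (5, 5), sigmaX=1.4)
log_response = cv2.Laplacian(blurred, cv2.CV_64F)

# Rescale the LoG response to 8-bit and detect local extrema with FAST.
log_8u = cv2.convertScaleAbs(log_response)
fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
keypoints = fast.detect(log_8u, None)

# The detected keypoints (or patches around them) would then be fed to a CNN classifier.
print(f"Detected {len(keypoints)} candidate feature points")
```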

Details

Title
Recognition of Sign Language from High Resolution Images Using Adaptive Feature Extraction and Classification
Author
Csóka, Filip; Polec, Jaroslav; Csóka, Tibor; Kačur, Juraj
Pages
303-308
Publication year
2019
Publication date
2019
Publisher
Polish Academy of Sciences
ISSN
2081-8491
e-ISSN
2300-1933
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2650296706
Copyright
© 2019. This work is licensed under https://creativecommons.org/licenses/by-sa/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.