When developing a continuous sign language recognition (CSLR) system, a major challenge is processing the vast number of video frames, which demands extensive time and computational resources during both training and prediction. To address this, we propose an efficient and scalable methodology that integrates cluster-based key frame extraction with a VOGUE-based recognition model designed for continuous gestures. The key frame extraction strategy clusters visually similar frames to reduce redundancy while retaining only those with high semantic relevance. To further improve recognition accuracy, we introduce the Key Curvature Maximum Point (KCMP) technique, which identifies pivotal motion points and captures the essential hand-trajectory changes inherent to sign language. These refined frames are then used to train a VOGUE-based model that encodes spatial and temporal stroke dynamics, followed by probability distribution modeling for robust prediction. The proposed approach was evaluated on a custom-built Tamil Sign Language dataset, and its performance was compared against several established baselines, including Dynamic Time Warping (DTW), Hidden Markov Models (HMM), and multiple Conditional Random Field (CRF) variants, as well as the VOM model. The system achieved a recognition accuracy of 86.78% and a sign error rate of 5.3%, and a paired t-test confirmed that the improvements over the baseline models were statistically significant (p < 0.05). These results demonstrate that the proposed framework offers improved efficiency and competitive accuracy, providing a promising solution for real-time CSLR applications, particularly for low-resource regional sign languages. Hedged sketches of the key frame clustering, the KCMP criterion, and the significance test follow below.
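
To make the cluster-based key frame extraction step concrete, the following is a minimal sketch of one plausible realization: k-means over per-frame feature vectors, keeping the frame nearest each centroid as that cluster's representative. The feature descriptor and the cluster count are illustrative assumptions, not choices fixed by the description above.

```python
# Sketch of cluster-based key frame extraction (assumptions: k-means
# clustering, one representative frame per cluster; the frame descriptor
# is not specified here and must be supplied by the caller).
import numpy as np
from sklearn.cluster import KMeans

def extract_key_frames(frame_features: np.ndarray, n_clusters: int) -> list[int]:
    """Return sorted indices of one representative frame per cluster.

    frame_features: (n_frames, feature_dim) array, e.g. color histograms
    or CNN embeddings computed per frame.
    """
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = kmeans.fit_predict(frame_features)
    key_indices = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        # The member frame closest to the centroid stands in for the cluster,
        # so visually redundant frames collapse to a single key frame.
        dists = np.linalg.norm(
            frame_features[members] - kmeans.cluster_centers_[c], axis=1)
        key_indices.append(int(members[np.argmin(dists)]))
    return sorted(key_indices)
```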
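The KCMP criterion can likewise be sketched as curvature analysis of the 2-D hand trajectory: frames where the trajectory's curvature is a local maximum mark sharp direction changes, i.e. pivotal motion points. The smoothing-free finite differences and the threshold value below are assumptions for illustration only.

```python
# Minimal KCMP-style sketch: curvature maxima of a planar hand trajectory.
# The threshold is a hypothetical tuning parameter, not a value from the text.
import numpy as np

def kcmp_indices(trajectory: np.ndarray, threshold: float = 0.1) -> list[int]:
    """trajectory: (n_frames, 2) array of hand-centroid (x, y) positions."""
    x, y = trajectory[:, 0], trajectory[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # Curvature of a planar curve: |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
    denom = (dx**2 + dy**2) ** 1.5 + 1e-8  # guard against division by zero
    kappa = np.abs(dx * ddy - dy * ddx) / denom
    # Local maxima above the threshold are kept as key curvature points.
    return [i for i in range(1, len(kappa) - 1)
            if kappa[i] > threshold
            and kappa[i] >= kappa[i - 1] and kappa[i] >= kappa[i + 1]]
```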
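Finally, the reported significance check corresponds to a standard paired t-test over matched performance measurements of the proposed model and a baseline; the per-fold accuracies below are placeholders, not results from the evaluation.

```python
# Paired t-test over matched per-fold accuracies (placeholder numbers).
from scipy import stats

proposed = [0.87, 0.86, 0.88, 0.85, 0.87]  # hypothetical per-fold accuracies
baseline = [0.80, 0.79, 0.82, 0.78, 0.81]
t_stat, p_value = stats.ttest_rel(proposed, baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # significant if p < 0.05
```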
