Unveiling the Hidden Language of Hands: Automated Sign Language Recognition (ASLR) and Translation through the Lens of Deep Learning
DOI:
https://doi.org/10.55011/xkesj437

Keywords:
Deep Learning, Automated Sign Language Recognition

Abstract
Automated Sign Language Recognition (ASLR) is an exciting and significant field of study aiming to close the communication divide between deaf and hearing people. This paper presents a Deep Learning-based ASLR system that recognizes sign language gestures from video clips and translates them into written or spoken language. Using human pose estimation, key body joint points are extracted from each video frame to capture both spatial and temporal features, and a deep neural network processes these keypoint sequences for accurate gesture recognition and translation. The system outperforms traditional methods such as Hidden Markov Models (HMMs) and Support Vector Machines (SVMs) in accuracy and generalization. Designed for real-time use, it promotes inclusive communication and offers a practical means of bridging the gap between signers and non-signers. Deep Learning proves highly effective for ASLR and sign language translation, achieving strong benchmark results in detecting and interpreting a broad spectrum of sign gestures. The system also demonstrates promising real-time processing capabilities, making it suitable for real-world applications and interactive user interfaces.
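The pipeline described above (pose keypoints per frame, then a sequence model over the clip) can be sketched in minimal form. This is an illustrative assumption, not the authors' implementation: the frame count, joint count, class count, and the stand-in pose extractor and mean-pool classifier are all hypothetical placeholders for the real pose estimator and deep network.

```python
import numpy as np

# Illustrative sketch only: shapes, names, and the toy model are assumptions,
# standing in for the paper's pose estimator and deep sequence network.

rng = np.random.default_rng(0)

T, J = 16, 21        # frames per clip, keypoints per frame (hypothetical)
NUM_CLASSES = 5      # hypothetical sign vocabulary size

def extract_keypoints(frame):
    """Stand-in for a pose estimator: returns (J, 2) x/y joint coordinates."""
    return rng.random((J, 2))

def classify_sequence(keypoint_seq, W, b):
    """Toy stand-in for a deep sequence model: flatten per-frame spatial
    features, mean-pool over time, then apply a linear classifier."""
    feats = keypoint_seq.reshape(T, -1)   # (T, J*2) spatial features
    pooled = feats.mean(axis=0)           # temporal aggregation
    logits = pooled @ W + b
    return int(np.argmax(logits))

frames = [None] * T                                    # placeholder frames
seq = np.stack([extract_keypoints(f) for f in frames]) # (T, J, 2)
W = rng.standard_normal((J * 2, NUM_CLASSES))
b = np.zeros(NUM_CLASSES)
pred = classify_sequence(seq, W, b)
print(pred)
```

In a real system, `extract_keypoints` would wrap a pose-estimation model and `classify_sequence` a trained recurrent or transformer network, but the data flow (video frames → keypoint sequence → class prediction) is the same.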
