International Journal of Emerging Research in Science, Engineering, and Management
Vol. 2, Issue 3, pp. 169-177, March 2026.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Sign Language Recognition using Multi‑Layer Perceptron
R Priyadarshini
Abburi Varshitha
N Bhavana
Hema Sri
V Akhilesh
B Sai Rupesh
Department of CSE, Siddartha Institute of Science and Technology, Puttur, Andhra Pradesh, India
Abstract: Sign language is the primary means of communication for individuals who are deaf or hard of hearing, yet it remains largely inaccessible to the hearing population, creating significant communication barriers. To address this challenge, this paper presents a real-time sign language recognition (SLR) system designed for deployment on resource-constrained devices. The proposed approach captures hand gestures using a standard camera and extracts structured hand landmark features through the MediaPipe framework. These features are processed using a lightweight deep neural network optimized for efficient inference under TinyML constraints. The system converts recognized gestures into corresponding textual outputs and supports sentence construction for continuous interaction. Experimental evaluation on a large-scale dataset containing over 250 gesture classes demonstrates that the proposed method achieves high accuracy while maintaining low computational overhead. The results highlight the feasibility of deploying practical, real-time sign language recognition systems for accessible human–computer interaction.
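The pipeline the abstract describes, hand landmarks extracted per frame and classified by a lightweight network, can be sketched as below. MediaPipe Hands yields 21 (x, y, z) landmarks per hand, which flatten to a 63-value feature vector; the normalization scheme, hidden-layer width, and class count here are illustrative assumptions, not the paper's reported configuration:

```python
import numpy as np

NUM_LANDMARKS = 21   # MediaPipe Hands returns 21 (x, y, z) points per detected hand
NUM_CLASSES = 250    # illustrative: the evaluated dataset contains over 250 gesture classes

def normalize_landmarks(landmarks):
    """Make features translation- and scale-invariant:
    subtract the wrist point, then divide by the maximum extent."""
    pts = np.asarray(landmarks, dtype=np.float32).reshape(NUM_LANDMARKS, 3)
    pts = pts - pts[0]                 # landmark 0 is the wrist in MediaPipe's layout
    scale = float(np.abs(pts).max()) or 1.0
    return (pts / scale).ravel()       # 63-dimensional feature vector

class TinyMLP:
    """A minimal two-layer perceptron; sizes chosen to fit a TinyML-scale budget."""
    def __init__(self, in_dim=63, hidden=64, out_dim=NUM_CLASSES, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden)).astype(np.float32)
        self.b1 = np.zeros(hidden, dtype=np.float32)
        self.w2 = rng.normal(0.0, 0.1, (hidden, out_dim)).astype(np.float32)
        self.b2 = np.zeros(out_dim, dtype=np.float32)

    def predict(self, features):
        h = np.maximum(features @ self.w1 + self.b1, 0.0)  # ReLU hidden layer
        logits = h @ self.w2 + self.b2
        return int(np.argmax(logits))   # index of the predicted gesture class
```

In deployment, the feature vector would come from `mediapipe.solutions.hands` running on live camera frames, and the predicted class index would be mapped to a gesture label and appended to the output text for sentence construction.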
Keywords: Sign Language Recognition, TinyML, MediaPipe, Hand Gestures, Human–Computer Interaction
References:
- M. Madhiarasan and P. P. Roy, “A comprehensive review of sign language recognition: different types, modalities, and datasets,” arXiv preprint arXiv:2204.03328, Apr. 2022. [Online]. Available: https://arxiv.org/abs/2204.03328.
- S. R. Kodandaram, N. P. Kumar, and Sunil G. L., “Sign language recognition,” Turkish Journal of Computer and Mathematics Education, vol. 12, no. 14, pp. 994–1009, 2021, doi: 10.17762/turcomat.v12i14.10381.
- M. J. Cheok, Z. Omar, and M. H. Jaward, “A review of hand gesture and sign language recognition techniques,” International Journal of Machine Learning and Cybernetics, vol. 10, no. 1, pp. 131–153, Aug. 2017, doi: 10.1007/s13042-017-0705-5.
- R. Fatmi, S. Rashad and R. Integlia, “Comparing ANN, SVM, and HMM based Machine Learning Methods for American Sign Language Recognition using Wearable Motion Sensors,” 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 2019, pp. 0290-0297, doi: 10.1109/CCWC.2019.8666491.
- J. Zhang, W. Zhou, C. Xie, J. Pu and H. Li, “Chinese sign language recognition with adaptive HMM,” 2016 IEEE International Conference on Multimedia and Expo (ICME), Seattle, WA, USA, 2016, pp. 1-6, doi: 10.1109/ICME.2016.7552950.
- M. Al-Hammadi, G. Muhammad, W. Abdul, M. Alsulaiman, M. A. Bencherif and M. A. Mekhtiche, “Hand Gesture Recognition for Sign Language Using 3DCNN,” in IEEE Access, vol. 8, pp. 79491-79509, 2020, doi: 10.1109/ACCESS.2020.2990434.
- C. K. M. Lee, K. K. H. Ng, C.-H. Chen, H. C. W. Lau, S. Y. Chung, and T. Tsoi, “American sign language recognition and training method with recurrent neural network,” Expert Systems With Applications, vol. 167, p. 114403, Dec. 2020, doi: 10.1016/j.eswa.2020.114403.
- O. Koller, N. C. Camgoz, H. Ney and R. Bowden, “Weakly Supervised Learning with Multi-Stream CNN-LSTM-HMMs to Discover Sequential Parallelism in Sign Language Videos,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 9, pp. 2306-2320, 1 Sept. 2020, doi: 10.1109/TPAMI.2019.2911077.
- R. Rastgoo, K. Kiani, and S. Escalera, “Hand sign language recognition using multi-view hand skeleton,” Expert Systems With Applications, vol. 150, p. 113336, Feb. 2020, doi: 10.1016/j.eswa.2020.113336.
- K. Kudrinko, E. Flavin, X. Zhu and Q. Li, “Wearable Sensor-Based Sign Language Recognition: A Comprehensive Review,” in IEEE Reviews in Biomedical Engineering, vol. 14, pp. 82-97, 2021, doi: 10.1109/RBME.2020.3019769.
- G. Yuan, X. Liu, Q. Yan, S. Qiao, Z. Wang, and L. Yuan, “Hand Gesture Recognition Using Deep Feature Fusion Network Based on Wearable Sensors,” IEEE Sensors Journal, vol. 21, no. 1, pp. 539-547, Jan. 2021, doi: 10.1109/JSEN.2020.3014276.
- S. Aly and W. Aly, “DeepArSLR: A Novel Signer-Independent Deep Learning Framework for Isolated Arabic Sign Language Gestures Recognition,” in IEEE Access, vol. 8, pp. 83199-83212, 2020, doi: 10.1109/ACCESS.2020.2990699.
- P. Kumar, P. P. Roy, and D. P. Dogra, “Independent Bayesian classifier combination based sign language recognition using facial expression,” Information Sciences, vol. 428, pp. 30–48, Oct. 2017, doi: 10.1016/j.ins.2017.10.046.
- O. M. Sincan and H. Y. Keles, “AUTSL: a Large Scale Multi-Modal Turkish Sign Language dataset and baseline methods,” IEEE Access, vol. 8, pp. 181340–181355, Jan. 2020, doi: 10.1109/ACCESS.2020.3028072.
