Development of Sign Language Translator based on Gesture-to-Word Conversion

International Journal of Emerging Research in Science, Engineering, and Management
Vol. 1, Issue 6, pp. 63-69, December 2025.

https://doi.org/10.58482/ijersem.v1i6.8

Development of a Sign Language Translator Based on Gesture-to-Word Conversion Using IoT

V. Gopi

CH. Vedhasree

K. Vishnu Vardhan Reddy

K. Rohitha Sai

S. Nishitha

S. Eswer Reddy

Department of CSE, Siddartha Institute of Science and Technology, Puttur, India.

Abstract: Sign language is the primary mode of communication for individuals with hearing and speech impairments; however, the lack of sign language knowledge among the general population creates a significant communication barrier. Recent advancements in computer vision and deep learning have enabled the development of automated sign language recognition and translation systems. Despite this progress, many existing solutions depend on wearable devices, are computationally expensive, or lack real-time gesture-to-word and speech translation capabilities suitable for practical deployment. This paper presents the development of a sign language translator that maps gestures to words using a vision-based approach. The proposed system captures hand gestures through a camera, recognises sign language gestures using deep learning techniques, and converts them into meaningful text and speech output. The system is designed to be cost-effective, real-time, and user-friendly, eliminating the need for sensor-based gloves. By focusing on gesture-based translation, the proposed approach aims to enhance accessibility and enable effective communication between hearing-impaired individuals and non-signers in real-world environments.
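The gesture-to-word pipeline summarised above (camera capture, gesture recognition, word lookup) can be sketched in miniature. The snippet below is illustrative only: the paper's actual model, dataset, and gesture vocabulary are not specified here, so it substitutes a hypothetical nearest-template classifier over flattened hand-landmark vectors (e.g. 21 landmarks × 2 coordinates) in place of a trained deep network, and the `GESTURE_TEMPLATES` entries are invented stand-ins.

```python
# Minimal sketch of a gesture-to-word classifier, assuming landmark vectors
# (e.g. from a hand-tracking front end) have already been extracted per frame.
import numpy as np

# Hypothetical per-word "template" landmark vectors (42 = 21 landmarks x 2
# coordinates, flattened). A real system would use a trained model instead.
GESTURE_TEMPLATES = {
    "hello": np.linspace(0.0, 1.0, 42),
    "thanks": np.linspace(1.0, 0.0, 42),
}

def classify_gesture(landmarks: np.ndarray) -> str:
    """Map a flattened landmark vector to the nearest known gesture word."""
    return min(GESTURE_TEMPLATES,
               key=lambda word: np.linalg.norm(landmarks - GESTURE_TEMPLATES[word]))

# A frame whose landmarks lie close to the "hello" template:
frame_landmarks = np.linspace(0.0, 1.0, 42) + 0.01
print(classify_gesture(frame_landmarks))  # -> hello
```

In a deployed version of such a system, the recognised word would then be passed to a text-to-speech stage to produce the spoken output described in the abstract.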

Keywords: Sign Language Translation, Gesture Recognition, Computer Vision, Deep Learning, Assistive Technology, Human-Computer Interaction.

References: 

  1. M. M. Czajka, D. Kubacka, and A. Świetlicka, “Embedding representation of words in sign language,” Journal of Computational and Applied Mathematics, vol. 465, p. 116590, Feb. 2025, doi: 10.1016/j.cam.2025.116590.
  2. M. Mosleh, R. A. A. Mohammed, A. A. A. Mohammed, and A. H. Gumaei, “ArYSL: Arabic Yemeni sign language dataset,” Data in Brief, vol. 62, p. 111996, Aug. 2025, doi: 10.1016/j.dib.2025.111996.
  3. S. Javaid, S. Sajid, and Y. K. Baloch, “UAlpha40: A comprehensive dataset of Urdu alphabet for Pakistan sign language,” Data in Brief, vol. 59, p. 111342, Jan. 2025, doi: 10.1016/j.dib.2025.111342.
  4. Y. Alkharijah, S. Khalid, S. M. Usman, A. Jameel, and D. Hamid, “Fusing geometric and temporal deep features for High-Precision Arabic sign language recognition,” Computer Modeling in Engineering & Sciences, vol. 144, no. 1, pp. 1113–1141, Jan. 2025, doi: 10.32604/cmes.2025.068726.
  5. A. Tripathi et al., “Intelligent sign language recognition for Real-Time text conversion to aid speech and hearing impaired,” Procedia Computer Science, vol. 259, pp. 1472–1478, Jan. 2025, doi: 10.1016/j.procs.2025.04.102.
  6. H. Kar and V. P, “Nth layer Hierarchical Bidirectional LSTM sign language Interpretation for Hearing Impaired person,” Procedia Computer Science, vol. 258, pp. 3175–3183, Jan. 2025, doi: 10.1016/j.procs.2025.04.575.
  7. S. Ingoley and J. Bakal, “Interpretation of Indian Sign Language to Text and Speech to Communicate with Speech and Hearing-Impaired Community,” Procedia Computer Science, vol. 258, pp. 1980–1992, Jan. 2025, doi: 10.1016/j.procs.2025.04.449.
  8. K. Keli‘Ipa‘Akaua, S. Muneoka, K. K. Lyon, and K. L. Braun, “In our own voices and words: Creating English- and Hawaiian-language storybooks on dementia,” SSM – Mental Health, vol. 8, p. 100469, Jun. 2025, doi: 10.1016/j.ssmmh.2025.100469.
  9. Y. Abhishek and D. Sumanathilaka, “End-to-End Sign Language Recognition Pipeline: Towards Energy Efficient Modelling,” Procedia Computer Science, vol. 265, pp. 483–490, Jan. 2025, doi: 10.1016/j.procs.2025.07.208.
  10. M. S. Marcolino et al., “Sign Language Recognition System for Deaf Patients: Protocol for a Systematic Review,” JMIR Research Protocols, vol. 14, p. e55427, Jun. 2024, doi: 10.2196/55427.
  11. L. Ismail, N. Shahin, H. Tesfaye, and A. Hennebelle, “VisioSLR: a Vision Data-Driven framework for sign language video recognition and performance evaluation on Fine-Tuned YOLO models,” Procedia Computer Science, vol. 257, pp. 85–92, Jan. 2025, doi: 10.1016/j.procs.2025.03.014.