Smart Vision System for Assistive Object Recognition and Independent Navigation of the Visually Impaired

International Journal of Emerging Research in Science, Engineering, and Management
Vol. 2, Issue 3, pp. 01-07, March 2026.

https://doi.org/10.58482/ijersem.v2i3.1

G Ravi Kumar

Siripireddy Prathyusha

Shaik Thasleem

Shaik Farukh Ahamed

M G Uma Sankar

Chandragiri Venu Gopal Reddy

Department of CSE, Siddartha Institute of Science and Technology, Puttur, India.

Abstract: Smart Vision is an assistive technology designed to foster independence and safety for visually impaired individuals through real-time situational awareness. By integrating high-performance computer vision models, specifically Convolutional Neural Networks (CNNs) and YOLO, the system provides real-time recognition of objects and obstacles. The framework uses a compact wearable camera or mobile interface to capture a continuous visual stream, which is processed and converted into intuitive audio feedback for the user. Engineered as a lightweight and energy-efficient solution, the system addresses the critical need for affordable navigation aids. By reducing dependence on human caregivers, Smart Vision enhances mobility and confidence in diverse environments. This research contributes a robust, scalable architecture that balances computational precision with low-latency performance, offering a practical tool for navigating the modern world with greater autonomy.

Keywords: Assistive Technology, Object Recognition, YOLO, Visually Impaired, Real-time Navigation.
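The abstract describes a pipeline in which each camera frame passes through a YOLO detector and the detections are converted into spoken feedback. The paper itself does not publish code; the following is a minimal, self-contained sketch of only the detection-to-speech mapping stage, under assumed conventions (the `Detection` structure, the 0.5 confidence threshold, the three-zone left/ahead/right split, and the phrasing are all illustrative assumptions, not the authors' implementation):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """One object as it might be returned by a detector such as YOLO."""
    label: str
    confidence: float
    bbox: Tuple[int, int, int, int]  # x1, y1, x2, y2 in pixels

def detections_to_feedback(detections: List[Detection],
                           frame_width: int,
                           min_confidence: float = 0.5) -> List[str]:
    """Convert raw detections into short spoken phrases.

    The frame is split into three vertical zones; each sufficiently
    confident detection becomes a phrase such as "person on the left",
    which a text-to-speech engine would then read aloud.
    """
    phrases = []
    for det in detections:
        if det.confidence < min_confidence:
            continue  # suppress low-confidence detections to reduce audio noise
        x1, _, x2, _ = det.bbox
        center = (x1 + x2) / 2
        if center < frame_width / 3:
            zone = "on the left"
        elif center > 2 * frame_width / 3:
            zone = "on the right"
        else:
            zone = "ahead"
        phrases.append(f"{det.label} {zone}")
    return phrases
```

In a deployed system of the kind the abstract outlines, this function would sit between the per-frame YOLO inference step and a text-to-speech engine; keeping the phrases short is what allows the end-to-end loop to stay low-latency.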

References: 

  1. K. Tsantikidou, G. Delimpaltadakis, D. Diasakos, and N. Sklavos, “AAL-based smart cane system with security and privacy features for blind and visually impaired individuals,” Microprocessors and Microsystems, vol. 114–115, p. 105155, Apr. 2025, doi: 10.1016/j.micpro.2025.105155.
  2. N. N. Alotaibi, M. M. Alnfiai, M. M. Alnahari, S. M. M. Alnefaie, and F. A. Alotaibi, “Advanced fusion of IoT and AI technologies for smart environments: Enhancing environmental perception and mobility solutions for visually impaired individuals,” Image and Vision Computing, vol. 165, p. 105827, Nov. 2025, doi: 10.1016/j.imavis.2025.105827.
  3. Z. Ji et al., “Optimization model for wireless charging and power saving of smart canes for the visually impaired based on DRL,” Ain Shams Engineering Journal, vol. 16, no. 7, p. 103404, Apr. 2025, doi: 10.1016/j.asej.2025.103404.
  4. E. Malaekah et al., “Sound-based navigation system for visually impaired individuals,” Journal of Radiation Research and Applied Sciences, vol. 19, no. 1, p. 102160, Jan. 2026, doi: 10.1016/j.jrras.2026.102160.
  5. S. Bathool, J. R. M, and V. Biradar, “Tuned improved SqueezeNet with texture pattern extractor based object recognition and distance estimation for navigating visually impaired persons,” Computers & Electrical Engineering, vol. 130, p. 110813, Nov. 2025, doi: 10.1016/j.compeleceng.2025.110813.
  6. P. Pistofidis et al., “Design and evaluation of smart-exhibit systems that enrich cultural heritage experiences for the visually impaired,” Journal of Cultural Heritage, vol. 60, pp. 1–11, Feb. 2023, doi: 10.1016/j.culher.2023.01.004.
  7. C.-Y. Huang, C.-K. Wu, and P.-Y. Liu, “Assistive technology in smart cities: A case of street crossing for the visually-impaired,” Technology in Society, vol. 68, p. 101805, Oct. 2021, doi: 10.1016/j.techsoc.2021.101805.
  8. G. Heng, C. Luo, K. Sun, S. Huang, and L. Huang, “SGBM_YOLO: A high-precision obstacle detection algorithm for assistive navigation based on stereo vision and spatial attention mechanism,” Applied Soft Computing, vol. 186, p. 114144, Oct. 2025, doi: 10.1016/j.asoc.2025.114144.
  9. R. I. Chowdhury, J. Anjom, and Md. I. A. Hossain, “A novel edge intelligence-based solution for safer footpath navigation of visually impaired using computer vision,” Journal of King Saud University – Computer and Information Sciences, vol. 36, no. 8, p. 102191, Sep. 2024, doi: 10.1016/j.jksuci.2024.102191.
  10. A. Noor, H. Almukhalfi, A. Souza, and T. H. Noor, “Towards a real-time indoor object detection for visually impaired users using Raspberry Pi 4 and YOLOv11: A feasibility study,” Computer Modeling in Engineering & Sciences, vol. 144, no. 3, pp. 3085–3111, Jan. 2025, doi: 10.32604/cmes.2025.068393.
  11. F. Machado, M. Loureiro, R. C. Mello, C. A. R. Diaz, and A. Frizera, “A novel mixed reality assistive system to aid the visually and mobility impaired using a multimodal feedback system,” Displays, vol. 79, p. 102480, Jun. 2023, doi: 10.1016/j.displa.2023.102480.
  12. R. Estrada, V. Ponce, J. Maldonado, and I. Vasco, “A real-time indoor object detection and distance estimation system for visually impaired individuals,” Procedia Computer Science, vol. 265, pp. 41–48, Jan. 2025, doi: 10.1016/j.procs.2025.07.154.
  13. M. H. Abidi, A. N. Siddiquee, H. Alkhalefah, and V. Srivastava, “A comprehensive review of navigation systems for visually impaired individuals,” Heliyon, vol. 10, no. 11, p. e31825, May 2024, doi: 10.1016/j.heliyon.2024.e31825.
  14. A. B. Atitallah, Y. Said, M. A. B. Atitallah, M. Albekairi, K. Kaaniche, and S. Boubaker, “An effective obstacle detection system using deep learning advantages to aid blind and visually impaired navigation,” Ain Shams Engineering Journal, vol. 15, no. 2, p. 102387, Jul. 2023, doi: 10.1016/j.asej.2023.102387.