Drone Navigation Using DRL

International Journal of Emerging Research in Science, Engineering, and Management
Vol. 2, Issue 1, pp. 280-287, January 2026.

https://doi.org/10.58482/ijersem.v2i1.38

N Gowthami

R Likitha

K Bharath

P Kumari Padmaja

S Mahesh Reddy

Department of CSE, Siddartha Institute of Science and Technology, Puttur, India.

Abstract: This research explores the use of Deep Reinforcement Learning (DRL) for autonomous drone navigation in complex and unpredictable environments. Traditional navigation systems often rely on rigid, pre-programmed trajectories that struggle with real-time obstacles or environmental shifts. To overcome these limitations, the proposed framework uses a trial-and-error learning mechanism that allows the unmanned aerial vehicle (UAV) to discover optimal flight paths and obstacle-avoidance strategies through continuous interaction with its surroundings. By integrating high-frequency environmental sensing with adaptive learning algorithms, the system improves navigational precision and safety across diverse settings, including urban landscapes, rural terrain, and confined indoor spaces. A core component of the framework is proactive collision prediction and avoidance, which significantly improves operational reliability. The architecture is designed for scalability, providing a foundation for multi-drone coordination and collaborative mission execution in high-density scenarios. This DRL-driven approach represents a shift toward intelligent, self-adapting aerial robotics capable of maintaining high mission success rates in dynamic, “in-the-wild” conditions.

Keywords: Deep Reinforcement Learning, Autonomous Navigation, Drone Technology, Obstacle Avoidance, Dynamic Environments.
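The trial-and-error mechanism described in the abstract can be illustrated with a minimal sketch: an agent repeatedly interacts with its environment, receives penalties for collisions and rewards for reaching a goal, and gradually learns an obstacle-avoiding policy. The grid layout, reward values, and hyperparameters below are assumptions chosen for demonstration only; the paper's actual system would use deep networks and real sensor input rather than a tabular Q-function.

```python
import random

# Toy stand-in for DRL navigation: tabular Q-learning on a 5x5 grid
# "airspace" with static obstacles. All constants here are illustrative
# assumptions, not the authors' configuration.

GRID = 5
START, GOAL = (0, 0), (4, 4)
OBSTACLES = {(1, 1), (2, 3), (3, 1)}
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up

def step(state, action):
    """One environment transition: penalize collisions, reward the goal."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if not (0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID) or nxt in OBSTACLES:
        return state, -5.0, False              # collision/wall: stay put, penalty
    if nxt == GOAL:
        return nxt, 10.0, True                 # goal reached
    return nxt, -0.1, False                    # small step cost favors short paths

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Learn Q-values by trial and error with epsilon-greedy exploration."""
    random.seed(seed)
    q = {}                                     # Q-values keyed by (state, action)
    for _ in range(episodes):
        s = START
        for _ in range(100):                   # cap episode length
            if random.random() < eps:          # explore
                a = random.randrange(len(ACTIONS))
            else:                              # exploit current estimate
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            s2, r, done = step(s, ACTIONS[a])
            best_next = max(q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
            if done:
                break
    return q

def greedy_path(q):
    """Roll out the learned policy greedily from START."""
    s, path = START, [START]
    for _ in range(50):
        a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

After training, `greedy_path(train())` traces a collision-free route from START to GOAL. In the setting the paper targets, the tabular Q-table would be replaced by a deep network consuming high-frequency sensor observations, which is what makes the approach scale to continuous, dynamic environments.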
