Deep Learning-Based Visual Navigation Algorithms for Mobile Robots: A Comprehensive Study

Wei Yu, Xinzhi Tian

Abstract


This research addresses the challenges mobile robots face in efficiently navigating complex environments. A novel deep learning approach is proposed, introducing the Neo model. The method combines Split Attention with the ResNeSt-50 network to improve the recognition accuracy of key features in the observed images, and the loss calculation is refined to improve navigation accuracy across different scenarios. Evaluations on the AI2-THOR and Active Vision datasets show that the improved model achieves the highest average navigation accuracy (92.3%) in scene 4 among the compared methods. The navigation success rate reached 36.8%, accompanied by a 50% reduction in trajectory length. In addition, compared to HAUSR and LSTM-Nav, the proposed method reduced the collision rate to 0.01 and cut time consumption by more than 8 seconds. The methodology addresses the accuracy, speed, and generalization of navigation models, making significant advances toward intelligent autonomous robots.
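The Split Attention mechanism named in the abstract (the building block of ResNeSt) divides a feature map into several splits, pools a shared descriptor, and uses a softmax across the splits to reweight and fuse them. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation; the weight shapes `w1`, `w2` and the single-block scope are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def split_attention(x, w1, w2):
    """Minimal Split-Attention sketch (illustrative, not the paper's code).

    x  : (radix, C, H, W) feature map already divided into `radix` splits.
    w1 : (C, hidden) and w2 : (hidden, radix * C), the two dense layers
         that map the pooled descriptor to per-split attention logits.
    Returns the attention-weighted fusion of the splits, shape (C, H, W).
    """
    radix, c, h, w = x.shape
    gap = x.sum(axis=0).mean(axis=(1, 2))            # fuse splits + global average pool -> (C,)
    hidden = np.maximum(gap @ w1, 0.0)               # dense layer + ReLU
    logits = (hidden @ w2).reshape(radix, c)         # per-split, per-channel logits
    attn = softmax(logits, axis=0)                   # r-softmax across the splits
    return (attn[:, :, None, None] * x).sum(axis=0)  # weighted fusion -> (C, H, W)
```

With a single split (radix = 1), the softmax degenerates to all-ones and the block reduces to an identity over that split, which is a quick sanity check on the weighting logic.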
Keywords


deep learning, Neo-model, mobile robots, visual navigation, split attention, ResNet

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
