Research on Applications of Image Recognition in the Design of Autonomous Navigation Robots

Authors

  • Ruining Yu

DOI:

https://doi.org/10.62051/bev20x86

Keywords:

Image recognition; Deep learning; Autonomous navigation robots; Path planning.

Abstract

This study evaluates traditional and deep learning-based image recognition technologies in autonomous navigation robots, detailing their strengths and limitations. Traditional image recognition techniques, which rely on predefined algorithms and pattern recognition, have proven efficient and stable for navigation in controlled environments. Conversely, deep learning approaches, notably through convolutional neural networks (CNNs), excel in dynamic and unpredictable settings by adapting more effectively to complex environmental interactions. The core challenges in this field include the need for real-time data processing, achieving consistent accuracy across diverse environments, and managing the computational demands of sophisticated algorithms. This research highlights the significant improvements deep learning techniques bring to autonomous navigation, particularly in terms of adaptability and robustness. The study also addresses the necessity for advancements in both technology domains to meet the evolving demands of autonomous robotics, emphasizing the ongoing need to enhance computational efficiency and environmental adaptability. The findings suggest a growing potential for future research to explore hybrid models that combine the predictability of traditional methods with the flexibility of deep learning to optimize autonomous navigation tasks.
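
To make the contrast concrete, the following minimal sketch (not taken from the paper) illustrates one way a CNN-based perception module could feed a simple rule-based decision step, in the spirit of the hybrid models discussed above. It assumes a PyTorch-style setup; the ObstacleCNN network, the label set, and the confidence threshold are hypothetical placeholders for illustration only.

import torch
import torch.nn as nn

class ObstacleCNN(nn.Module):
    """Tiny convolutional classifier over 64x64 RGB camera frames (illustrative)."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

LABELS = ["free", "obstacle", "uncertain"]  # hypothetical label set

def next_action(frame: torch.Tensor, model: nn.Module, threshold: float = 0.7) -> str:
    """Hybrid-style decision: the learned model perceives, fixed rules decide."""
    with torch.no_grad():
        probs = torch.softmax(model(frame.unsqueeze(0)), dim=1).squeeze(0)
    conf, idx = probs.max(dim=0)
    if conf.item() < threshold:
        return "slow_down"  # low confidence: fall back to a cautious predefined rule
    return "continue" if LABELS[idx.item()] == "free" else "replan_path"

if __name__ == "__main__":
    model = ObstacleCNN().eval()
    dummy_frame = torch.randn(3, 64, 64)  # stands in for a camera image
    print(next_action(dummy_frame, model))

The deep model supplies the environmental adaptability, while the thresholded rule provides the predictable fallback behavior associated with traditional methods.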

Published

12-08-2024

How to Cite

Yu, R. (2024) “Research on Applications of Image Recognition in the Design of Autonomous Navigation Robots”, Transactions on Computer Science and Intelligent Systems Research, 5, pp. 1015–1021. doi:10.62051/bev20x86.