A Visual SLAM System Based on Point-Line Fusion
DOI: https://doi.org/10.62051/ijcsit.v8n3.15

Keywords: Visual SLAM, Point-line feature fusion, Line feature extraction, Pose estimation

Abstract
To address the insufficient feature points, pose-estimation drift, and poor tracking stability of pure point-based visual Simultaneous Localization and Mapping (SLAM) systems under low texture, varying illumination, and high-speed camera motion, this paper proposes a visual SLAM system that fuses point and line features. By combining the robustness of the SuperPoint feature extractor with the structural strength of end-to-end line segment parsing, the system compensates for lost feature information through a multi-scale feature pyramid and an adaptive feature filtering mechanism, thereby improving the accuracy of pose estimation. Comparative experiments on public benchmark datasets, including EuRoC, TartanAir, and UMA, demonstrate that the proposed algorithm stably extracts sufficient features and maintains continuous tracking in scenes with weak texture, fluctuating illumination, and rapid camera motion. Compared with other mainstream systems of similar performance, the absolute trajectory error and relative pose error are significantly reduced, and the overall localization accuracy and robustness of the system are effectively improved, better meeting the autonomous localization and mapping needs of mobile robots in indoor structured environments.
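The abstract reports results in terms of absolute trajectory error (ATE), the standard metric for this kind of evaluation. As an illustrative sketch only (not the paper's own evaluation code), ATE is typically computed by first aligning the estimated trajectory to ground truth with a least-squares similarity transform (the Umeyama method) and then taking the RMSE of the remaining position differences:

```python
import numpy as np

def umeyama_alignment(est, gt):
    """Least-squares similarity alignment (Umeyama, 1991) of estimated
    positions `est` (N x 3) to ground truth `gt` (N x 3).
    Returns rotation R, translation t, and scale s such that
    gt ~= s * R @ est_i + t for each row est_i."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g                      # centered point sets
    cov = G.T @ E / est.shape[0]                      # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:      # guard against reflection
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_e = (E ** 2).sum() / est.shape[0]             # variance of source points
    s = np.trace(np.diag(D) @ S) / var_e
    t = mu_g - s * R @ mu_e
    return R, t, s

def ate_rmse(est, gt):
    """Absolute trajectory error: RMSE of position residuals after
    similarity alignment of the estimated trajectory to ground truth."""
    R, t, s = umeyama_alignment(est, gt)
    aligned = (s * (R @ est.T)).T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))
```

In practice, tools such as the TUM RGB-D evaluation scripts or the `evo` package implement exactly this alignment-then-RMSE pipeline; relative pose error (RPE) is computed analogously over relative motions between frame pairs rather than absolute positions.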
License
Copyright (c) 2026 International Journal of Computer Science and Information Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.