
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

Collaboration between autonomous vehicles and drones can enhance the efficiency and connectivity of three-dimensional transportation systems. When satellite signals are unavailable, vehicles can achieve accurate localization by matching rich ground environmental data against digital maps, while simultaneously providing auxiliary localization information for drones. However, conventional digital maps suffer from high construction costs, frequent misalignment, and low localization accuracy. This paper therefore proposes a visual point cloud map (VPCM) construction and matching localization method for autonomous vehicles. We fuse multi-source information from vehicle-mounted sensors and the regional road network to build a geographically referenced, high-precision VPCM. In the absence of satellite signals, we segment the prior VPCM along the road network based on real-time localization results, which accelerates matching and reduces the probability of mismatches. Simultaneously, by continuously introducing matching constraints between the real-time point cloud and the prior VPCM through an improved iterative closest point (ICP) method, the proposed solution effectively suppresses odometry drift and outputs accurate fused localization results based on pose graph optimization theory. Experiments on the KITTI dataset demonstrate the effectiveness of the proposed method, which autonomously constructs a high-precision prior VPCM. The localization strategy achieves sub-meter accuracy and reduces the average per-frame error by 25.84% compared with similar methods. The method's reusability and localization robustness under changing lighting and environmental conditions are then verified on a campus dataset: compared with a similar camera-based method, the matching success rate increases by 21.15% and the average localization error decreases by 62.39%.
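The fusion idea described above — bounding odometry drift by periodically anchoring the trajectory with map-matching constraints in a pose graph — can be illustrated with a deliberately simplified sketch. The code below is not the paper's implementation: it uses a 1-D toy trajectory, a fixed odometry bias standing in for drift, and synthetic absolute fixes standing in for ICP matches against the prior VPCM, solved as a weighted linear least-squares problem.

```python
import numpy as np

def build_pose_graph(n, odom, matches, w_odom=1.0, w_match=10.0):
    """Solve a 1-D pose graph by weighted least squares.

    odom:    list of (i, j, delta) relative constraints x_j - x_i = delta
    matches: list of (i, z) absolute constraints x_i = z
    """
    rows, rhs, weights = [], [], []
    for i, j, delta in odom:                 # odometry edges
        r = np.zeros(n)
        r[i], r[j] = -1.0, 1.0
        rows.append(r); rhs.append(delta); weights.append(w_odom)
    for i, z in matches:                     # map-matching (absolute) edges
        r = np.zeros(n)
        r[i] = 1.0
        rows.append(r); rhs.append(z); weights.append(w_match)
    A = np.array(rows)
    b = np.array(rhs)
    W = np.diag(weights)
    # Weighted least squares: minimize || W (A x - b) ||^2
    x, *_ = np.linalg.lstsq(W @ A, W @ b, rcond=None)
    return x

rng = np.random.default_rng(0)
n = 50
truth = np.arange(n, dtype=float)            # vehicle moves 1 m per frame
bias = 0.05                                  # constant odometry bias -> drift
odom = [(i, i + 1, 1.0 + bias) for i in range(n - 1)]
# an absolute fix every 10 frames, with 0.1 m matching noise
matches = [(i, truth[i] + rng.normal(0, 0.1)) for i in range(0, n, 10)]

dead_reckoning = np.concatenate(([0.0], np.cumsum([d for _, _, d in odom])))
fused = build_pose_graph(n, odom, matches)

print(f"odometry-only final error: {abs(dead_reckoning[-1] - truth[-1]):.2f} m")
print(f"fused final error:         {abs(fused[-1] - truth[-1]):.2f} m")
```

Without the matching edges, the bias accumulates linearly (about 2.45 m after 49 frames here); with them, the error stays bounded by the drift accrued since the most recent fix, which mirrors how the paper's ICP constraints suppress odometry drift.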

Details

Title
Visual Point Cloud Map Construction and Matching Localization for Autonomous Vehicle
Author
Xu, Shuchen 1; Zhao, Kedong 2 (ORCID); Sun, Yongrong 2 (ORCID); Fu, Xiyu 1 (ORCID); Luo, Kang 1

1 College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
2 College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; Autonomous Control Technology of Aircraft, Engineering Research Centre of Ministry of Education, Nanjing 211106, China
First page
511
Publication year
2025
Publication date
2025
Publisher
MDPI AG
e-ISSN
2504-446X
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3233140492