Abstract

With the growing demand for collaborative Unmanned Aerial Vehicle (UAV) and Unmanned Ground Vehicle (UGV) operations, precisely landing a vehicle-mounted UAV on a moving platform in complex environments has become a significant challenge that limits the functionality of collaborative systems. This paper presents an autonomous landing perception scheme for a vehicle-mounted UAV, designed specifically to enhance landing capability in GNSS-denied environments. First, to address insufficient illumination in airborne visual perception, an airborne infrared and visible image fusion method is employed to enhance image detail and contrast. Second, a feature enhancement network and a region proposal network optimized for small-object detection are developed to improve detection of the moving platform during UAV landing. Finally, a relative pose and position estimation method based on the orthogonal iteration algorithm is investigated to reduce visual pose and position estimation errors and iteration time. Both simulation results and field tests demonstrate that the proposed algorithm performs robustly under low-light and foggy conditions, achieving accurate pose and position estimation even in scenarios with inadequate illumination.
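The pose solver mentioned above builds on the classical orthogonal iteration (OI) algorithm of Lu, Hager, and Mjolsness, which alternates between an optimal translation update and an absolute-orientation (SVD) step. As a rough illustration of that baseline only (not the paper's optimized variant), a minimal NumPy sketch is given below; the function name and interface are assumptions for this example:

```python
import numpy as np

def orthogonal_iteration(P, v, n_iters=100, tol=1e-12):
    """Baseline OI pose estimation (a sketch, not the paper's method).

    P : (n, 3) landing-marker points in the object frame.
    v : (n, 2) corresponding normalized image coordinates.
    Returns rotation R and translation t with x_cam ~ R p + t.
    """
    n = P.shape[0]
    I3 = np.eye(3)
    # Line-of-sight projection matrix V_i = v_i v_i^T / (v_i^T v_i)
    # for each homogeneous image ray v_i = (u, w, 1).
    vh = np.hstack([v, np.ones((n, 1))])
    V = np.array([np.outer(x, x) / x.dot(x) for x in vh])
    # Precompute the factor in the closed-form optimal translation:
    # t(R) = (I - mean(V))^{-1} * (1/n) * sum_i (V_i - I) R p_i
    Tfac = np.linalg.inv(I3 - V.mean(axis=0)) / n

    def t_of_R(R):
        return Tfac @ sum((V[i] - I3) @ (R @ P[i]) for i in range(n))

    R = I3                      # simple identity initialization
    t = t_of_R(R)
    Pc = P - P.mean(axis=0)     # centered object points (fixed)
    prev_err = np.inf
    for _ in range(n_iters):
        # Project current camera-frame points onto their image rays.
        q = np.array([V[i] @ (R @ P[i] + t) for i in range(n)])
        qc = q - q.mean(axis=0)
        # Absolute orientation (Kabsch): R = argmin ||R Pc - qc||_F.
        U, _, Wt = np.linalg.svd(qc.T @ Pc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Wt))])
        R = U @ D @ Wt
        t = t_of_R(R)
        # Object-space collinearity error used as the stopping criterion.
        err = sum(np.linalg.norm((I3 - V[i]) @ (R @ P[i] + t)) ** 2
                  for i in range(n))
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```

On noiseless correspondences this sketch recovers the exact pose to numerical precision; the paper's contribution concerns reducing the iteration count and error of such a scheme under realistic imaging conditions.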

Full text

© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”).