© 2019. This work is licensed under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

Docking technology plays a critical role in realising the long-term operation of autonomous underwater vehicles (AUVs). In this study, a binocular localisation method for AUV docking is presented. An adaptively weighted OTSU method is developed for feature extraction; it extracts the foreground precisely, without merging or missing lamps, regardless of the AUV's position relative to the docking station, and at a lower computational load than other segmentation methods. The mass centre of each lamp in the binary image serves as the matching feature for binocular vision, and with this fast feature matching the binocular localisation method runs at over 10 Hz. A relative pose estimation method is also proposed for cases in which the two cameras cannot capture all the lamps. The localisation accuracy of the distance in the heading direction, as measured by the proposed binocular vision algorithm, was tested at fixed points underwater, and a simulation experiment with a ship model was conducted in a laboratory pool to evaluate the feasibility of the algorithm. The results show an average localisation error of approximately 5 cm and an average relative localisation error of approximately 2% within a range of 3.6 m; accordingly, the ship model was successfully guided to the docking station under different lateral deviations.
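The pipeline the abstract describes — threshold the image to isolate the lamps, take each lamp's mass centre as the matching feature, then triangulate with the stereo pair — can be sketched as follows. This is a minimal illustration only: it uses a plain global Otsu threshold (not the paper's adaptively weighted variant), a single synthetic lamp per view, and hypothetical camera parameters `f_px` and `baseline_m`, none of which come from the paper.

```python
import numpy as np

def otsu_threshold(gray):
    """Standard Otsu: pick the threshold maximising between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    probs = hist / gray.size
    cum_w = np.cumsum(probs)
    cum_mu = np.cumsum(probs * np.arange(256))
    mu_total = cum_mu[-1]
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = cum_w[t - 1], 1.0 - cum_w[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mu[t - 1] / w0
        mu1 = (mu_total - cum_mu[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def lamp_centroid(binary):
    """Mass centre (u, v) of the foreground pixels in a binary image."""
    ys, xs = np.nonzero(binary)
    return xs.mean(), ys.mean()

# Synthetic left/right views: one bright lamp on a dark background,
# shifted horizontally between the views (the stereo disparity).
left = np.zeros((120, 160), dtype=np.uint8)
right = np.zeros((120, 160), dtype=np.uint8)
left[50:60, 80:90] = 220
right[50:60, 70:80] = 220

uL, _ = lamp_centroid(left > otsu_threshold(left))
uR, _ = lamp_centroid(right > otsu_threshold(right))

# Pinhole stereo depth: Z = f * B / d, with assumed focal length (px)
# and baseline (m); both values are hypothetical.
f_px, baseline_m = 400.0, 0.2
disparity = uL - uR
Z = f_px * baseline_m / disparity
print(disparity, Z)
```

Matching by lamp centroids rather than dense pixel correspondence is what keeps the per-frame cost low enough for a 10 Hz-plus localisation rate.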

Details

Title
A Fast Binocular Localisation Method for AUV Docking
Author
Zhong, Lijia; Li, Dejun; Lin, Mingwei; Lin, Ri; Yang, Canjun
Publication year
2019
Publication date
Jan 2019
Publisher
MDPI AG
e-ISSN
1424-8220
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2229667684