Abstract

In scenarios such as nearshore and inland waterways, ship echoes in marine radar images are easily confused with reefs and shorelines, making ship identification difficult. In such settings, the conventional ARPA approach based on fractal detection and filter tracking performs relatively poorly. To accurately identify radar targets in these scenarios, a novel algorithm based on deep convolutional networks, named YOSMR, is proposed. YOSMR uses the MobileNetV3 (Large) network to extract ship imaging features at multiple depths and capture the characteristics of various ships. To address the suppression of small-scale targets common in deep convolutional detectors, the PANet feature fusion module is reconstructed in a lightweight form using depthwise separable convolutions, strengthening the extraction of salient features for small-scale ships while reducing model parameters and computational complexity to mitigate overfitting. To enhance the scale invariance of the convolutional features, an SPP module with four max-pooling branches follows the feature extraction backbone, preserving prominent ship features within the feature representations. In the prediction head, the Cluster-NMS method and the α-DIoU loss are used to optimize non-maximum suppression (NMS) and the localization loss of the prediction boxes, improving the accuracy and convergence speed of the algorithm. Experiments showed that the recall, accuracy, and precision of YOSMR reached 0.9308, 0.9204, and 0.9215, respectively, exceeding the identification performance of various YOLO algorithms and other lightweight algorithms. In addition, the parameter count and computational cost are only 12.4 M and 8.63 G, respectively, an 80.18% and 86.9% reduction compared to the standard YOLO model, giving YOSMR a substantial advantage in convolutional computation. The algorithm therefore accurately identifies ships with different trail features across various scenes in marine radar images, including interference-heavy and extreme scenarios, showing good robustness and applicability.
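To make the architectural components named above concrete, the following is a minimal PyTorch sketch (not the authors' code) of a depthwise separable convolution block of the kind used in the lightweight PANet reconstruction, and of an SPP-style block built from parallel max-pooling branches. The kernel sizes (5, 9, 13 plus an identity branch), channel counts, and activation choices are illustrative assumptions; the paper's exact configuration may differ.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # 3x3 depthwise convolution followed by a 1x1 pointwise convolution,
    # factorizing a standard convolution to cut parameters and FLOPs.
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class SPP(nn.Module):
    # Spatial pyramid pooling: parallel max-pooling branches at several
    # kernel sizes are concatenated with the input and fused by a 1x1
    # convolution, improving the scale invariance of the pooled features.
    def __init__(self, in_ch, out_ch, pool_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(k, stride=1, padding=k // 2) for k in pool_sizes])
        self.fuse = nn.Conv2d(in_ch * (len(pool_sizes) + 1), out_ch, 1, bias=False)

    def forward(self, x):
        branches = [x] + [pool(x) for pool in self.pools]
        return self.fuse(torch.cat(branches, dim=1))

# Usage on a dummy radar feature map; spatial shape is preserved.
feat = torch.randn(1, 256, 40, 40)
feat = DepthwiseSeparableConv(256, 256)(feat)
out = SPP(256, 256)(feat)
print(out.shape)  # torch.Size([1, 256, 40, 40])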

Details

Title
YOSMR: A Ship Detection Method for Marine Radar Based on Customized Lightweight Convolutional Networks
Author
Kang, Zhe 1; Ma, Feng 1; Chen, Chen 2; Sun, Jie 3

1 State Key Laboratory of Maritime Technology and Safety, Wuhan University of Technology, Wuhan 430063, China; [email protected]; National Engineering Research Center for Water Transport Safety, Wuhan University of Technology, Wuhan 430063, China; Intelligent Transportation Systems Research Center, Wuhan University of Technology, Wuhan 430063, China
2 School of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205, China; [email protected]
3 Nanjing Smart Water Transportation Technology Co., Ltd., Nanjing 210028, China; [email protected]
First page
1316
Publication year
2024
Publication date
2024
Publisher
MDPI AG
e-ISSN
2077-1312
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3098093188
Copyright
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.