Full Text


Abstract

Underwater target detection (UTD) is one of the most attractive research topics in hyperspectral imagery (HSI) processing. Most existing methods predict the signatures of desired targets in an underwater context but ignore depth information, which is position-sensitive and contributes significantly to distinguishing background and target pixels. To take full advantage of depth information, this paper proposes a self-improving framework for joint depth estimation and underwater target detection, which exploits the depth information and the detection results to alternately boost the final detection performance. However, depth information is difficult to estimate under the interference of the water environment. To address this dilemma, the proposed framework, named the self-improving underwater target detection framework (SUTDF), employs spectral and spatial contextual information to pick out target-associated pixels as a guidance dataset for depth estimation. Considering the incompleteness of the guidance dataset, an expectation-maximization-like updating scheme is developed to iteratively extract statistical and structural information from the input HSI and further improve the diversity of the guidance dataset. During each updating epoch, the estimated depth information is used to yield a more diversified dataset for the target detection network, leading to a more accurate detection result. Meanwhile, the detection result in turn helps identify more target-associated pixels that supplement the guidance dataset, eventually strengthening the depth estimation network. With this self-improving framework, a more precise detection result can be obtained for the hyperspectral UTD task. Qualitative and quantitative experiments verify the effectiveness and efficiency of SUTDF in comparison with state-of-the-art underwater target detection methods.
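
As an illustration of the alternating scheme described in the abstract, the minimal sketch below shows how detection and depth estimation could feed each other in an EM-like loop. The function names (`self_improving_utd`, `detector`, `depth_estimator`), the thresholding rule, and the depth-as-extra-channel choice are assumptions made for illustration only; the paper's actual network architectures and training procedures are not reproduced here.

```python
import numpy as np

def self_improving_utd(hsi, detector, depth_estimator, n_epochs=5, tau=0.8):
    """Hedged sketch of the self-improving loop described in the abstract.

    hsi             : (H, W, B) hyperspectral cube.
    detector        : hypothetical callable returning an (H, W) score map in [0, 1].
    depth_estimator : hypothetical callable mapping (cube, guidance mask) -> (H, W) depths.
    """
    # Initial guidance set: pixels the detector already marks as target-associated.
    scores = detector(hsi)
    guidance = scores > tau

    for _ in range(n_epochs):
        # Depth estimation step (loosely the "M-step"): estimate per-pixel depth
        # using the current guidance pixels.
        depth = depth_estimator(hsi, guidance)

        # Detection step (loosely the "E-step"): re-detect with depth appended as
        # an extra channel, diversifying the data available to the detector.
        augmented = np.concatenate([hsi, depth[..., None]], axis=-1)
        scores = detector(augmented)

        # Detection feeds back: confident pixels enlarge the guidance set.
        guidance = guidance | (scores > tau)

    return scores

# Toy usage with dummy stand-ins (illustration only, not the paper's networks).
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 100))
dummy_detector = lambda x: x.mean(axis=-1)        # pretend score map
dummy_depth = lambda x, g: np.where(g, 0.5, 2.0)  # pretend depth map
result = self_improving_utd(cube, dummy_detector, dummy_depth)
```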

Details

Title
A Self-Improving Framework for Joint Depth Estimation and Underwater Target Detection from Hyperspectral Imagery
Author
Jiahao Qi 1; Pengcheng Wan 2; Zhiqiang Gong 3; Wei Xue 4; Aihuan Yao 1; Xingyue Liu 1; Ping Zhong 1

1 National Key Laboratory of Science and Technology on Automatic Target Recognition, National University of Defense Technology, Changsha 410073, China; [email protected] (J.Q.); [email protected] (W.X.); [email protected] (A.Y.); [email protected] (X.L.)
2 School of Computer Science and Technology, Anhui University of Technology, Maanshan 243032, China; [email protected]
3 National Innovation Institute of Defense Technology, Chinese Academy of Military Science, Beijing 110000, China; [email protected]
4 National Key Laboratory of Science and Technology on Automatic Target Recognition, National University of Defense Technology, Changsha 410073, China; School of Computer Science and Technology, Anhui University of Technology, Maanshan 243032, China; [email protected]
First page
1721
Publication year
2021
Publication date
2021
Publisher
MDPI AG
e-ISSN
2072-4292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2530134152
Copyright
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.