
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

Automated water body (WB) extraction is a hot research topic in remote sensing image processing. To address over-extraction and incomplete extraction in complex water scenes, we propose EDWNet, an encoder–decoder semantic segmentation network for high-precision WB extraction. We integrate a Cross-layer Feature Fusion (CFF) module to improve segmentation of WB edges, a Global Attention Mechanism (GAM) module to reduce information diffusion, and a Deep Attention Module (DAM) to enhance the model's global perception and refine WB features. An auxiliary head is also incorporated to optimize the learning process. In addition, we analyze the feature importance of bands 2–7 in Landsat 8 OLI images and construct a band combination (RGB 763) suited to the algorithm's WB extraction. Compared with various other semantic segmentation networks on the test dataset, EDWNet achieves the highest accuracy. We apply EDWNet to extract WBs in the Weihe River basin from 2013 to 2021 and quantitatively analyze the area changes of the WBs during this period and their causes. The results show that EDWNet is well suited to WB extraction in complex scenes and demonstrates great potential for long time-series and large-scale WB extraction.
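To illustrate the RGB 763 band combination mentioned above, the sketch below stacks Landsat 8 OLI bands 7 (SWIR2), 6 (SWIR1), and 3 (green) into a display composite. It assumes the per-band reflectance arrays are already loaded as NumPy arrays; the percentile stretch is a common visualization choice, not necessarily the authors' exact preprocessing.

```python
import numpy as np

def stretch(band, low=2, high=98):
    # Percentile stretch to [0, 1] for display; a common convention,
    # not necessarily the normalization used in the paper.
    lo, hi = np.percentile(band, [low, high])
    return np.clip((band - lo) / (hi - lo + 1e-9), 0.0, 1.0)

def rgb763(b7, b6, b3):
    # Stack OLI bands 7 (SWIR2), 6 (SWIR1), and 3 (green) into an
    # H x W x 3 composite. Water absorbs strongly in the SWIR bands,
    # so WBs appear dark, which makes them easier to delineate.
    return np.dstack([stretch(b7), stretch(b6), stretch(b3)])

# Synthetic 4x4 bands stand in for real Landsat 8 scenes here.
rng = np.random.default_rng(0)
b7, b6, b3 = (rng.random((4, 4)) for _ in range(3))
composite = rgb763(b7, b6, b3)
print(composite.shape)  # (4, 4, 3)
```

In practice the bands would come from the corresponding Landsat 8 GeoTIFF files rather than synthetic arrays.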

Details

Title
EDWNet: A Novel Encoder–Decoder Architecture Network for Water Body Extraction from Optical Images
Author
Zhang, Tianyi 1; Ji, Wenbo 1; Li, Weibin 1; Qin, Chenhao 1; Wang, Tianhao 2; Ren, Yi 1; Yuan, Fang 3; Han, Zhixiong 4; Jiao, Licheng 1

1 Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi’an 710071, China; [email protected] (T.Z.); [email protected] (W.J.); [email protected] (C.Q.); [email protected] (Y.R.); [email protected] (L.J.)
2 College of Ocean and Earth Sciences, Xiamen University, Xiamen 361101, China; [email protected]
3 Shaanxi Water Development Ecological Technology R&D Co., Ltd., Xi’an 710068, China; [email protected]
4 Key Laboratory of Coal Resources Exploration and Comprehensive Utilization, Ministry of Natural Resources, Xi’an 710021, China; [email protected]
First page
4275
Publication year
2024
Publication date
2024
Publisher
MDPI AG
e-ISSN
2072-4292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3133385681