
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

With advancements in remote sensing technologies, high-resolution imagery has become increasingly accessible, supporting applications in urban planning, environmental monitoring, and precision agriculture. However, semantic segmentation of such imagery remains challenging due to complex spatial structures, fine-grained details, and land cover variations. Existing methods often struggle with ineffective feature representation, suboptimal fusion of global and local information, and high computational costs, limiting segmentation accuracy and efficiency. To address these challenges, we propose the dual-level network (DLNet), an enhanced framework incorporating self-attention and cross-attention mechanisms for improved multi-scale feature extraction and fusion. The self-attention module captures long-range dependencies to enhance contextual understanding, while the cross-attention module facilitates bidirectional interaction between global and local features, improving spatial coherence and segmentation quality. Additionally, DLNet optimizes computational efficiency by balancing feature refinement and memory consumption, making it suitable for large-scale remote sensing applications. Extensive experiments on benchmark datasets, including DeepGlobe and Inria Aerial, demonstrate that DLNet achieves state-of-the-art segmentation accuracy while maintaining computational efficiency. On the DeepGlobe dataset, DLNet achieves a 76.9% mean intersection over union (mIoU), outperforming existing models such as GLNet (71.6%) and EHSNet (76.3%), while requiring lower memory (1443 MB) and maintaining a competitive inference speed of 518.3 ms per image. On the Inria Aerial dataset, DLNet attains an mIoU of 73.6%, surpassing GLNet (71.2%) while reducing computational cost and achieving an inference speed of 119.4 ms per image. These results highlight DLNet’s effectiveness in achieving precise and efficient segmentation in high-resolution remote sensing imagery.
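The abstract's core idea — self-attention within each branch to capture long-range dependencies, plus bidirectional cross-attention so the global and local branches query each other — can be illustrated with a minimal sketch. This is not DLNet's actual implementation (the paper's details are not reproduced here); the token counts, dimensions, and the residual fusion scheme below are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: softmax(QK^T / sqrt(d)) V
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
d = 8                                 # illustrative feature dimension
glob = rng.normal(size=(16, d))       # global-branch tokens (e.g., downsampled image)
loc = rng.normal(size=(64, d))        # local-branch tokens (e.g., high-res crop)

# self-attention within each branch models long-range dependencies
glob_sa = attention(glob, glob, glob)
loc_sa = attention(loc, loc, loc)

# bidirectional cross-attention: each branch queries the other,
# fused here with a simple residual connection (an assumption)
glob_refined = glob_sa + attention(glob_sa, loc_sa, loc_sa)
loc_refined = loc_sa + attention(loc_sa, glob_sa, glob_sa)

print(glob_refined.shape, loc_refined.shape)  # (16, 8) (64, 8)
```

Note that cross-attention preserves each branch's token count (queries set the output length), so global and local features can be refined by each other without resampling.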

Details

Title
DLNet: A Dual-Level Network with Self- and Cross-Attention for High-Resolution Remote Sensing Segmentation
Author
Meng, Weijun 1; Shan, Lianlei 2; Ma, Sugang 1; Liu, Dan 3; Hu, Bin 4

1 School of Computer Science and Technology, Xi’an University of Posts and Telecommunications, Xi’an 710121, China; [email protected] (W.M.); [email protected] (S.M.)
2 School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 101408, China; [email protected]
3 Department of Management, Kean University, Union, NJ 07083, USA; [email protected]
4 Department of Computer Science and Technology, Kean University, Union, NJ 07083, USA
First page
1119
Publication year
2025
Publication date
2025
Publisher
MDPI AG
e-ISSN
2072-4292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3188877680