
Abstract

Semantic labeling of very high-resolution remote sensing images (VHRRSI) has emerged as a crucial research area in remote sensing image interpretation. However, challenges arise from significant variations in target orientation and scale, particularly for small targets, which are more prone to occlusion and misidentification. High interclass similarity and low intraclass similarity further complicate the distinction of objects with similar color and geographic location. To address these issues, we introduce a self-cascading multiscale network (ScasMNet) based on a fully convolutional network, designed to enhance segmentation precision for each category in remote sensing images (RSIs). In ScasMNet, cropped Digital Surface Model (DSM) data and the corresponding RGB data are fed into the network through two distinct paths. In the encoder stage, one branch uses convolution to extract height information from the DSM images layer by layer, enabling better differentiation of trees and low vegetation that share similar color and geographic location. A parallel branch extracts spatial, color, and texture information from the RGB data. By cascading the features of different layers, the heterogeneous data are fused to generate complementary discriminative characteristics. Lastly, to refine segmented edges, fully connected conditional random fields (DenseCRFs) are employed to postprocess the presegmented images. Experimental results show that ScasMNet achieves an overall accuracy (OA) of 92.74% on two challenging benchmarks and performs particularly well on small-scale objects, placing it among the state-of-the-art methods for semantic segmentation of RSIs.
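
To make the two-path encoder idea concrete, the following is a minimal PyTorch sketch of a dual-branch encoder that processes RGB and single-band DSM inputs separately and fuses their features at each stage. The module names, channel widths, and element-wise-sum fusion are illustrative assumptions, not the authors' exact ScasMNet architecture.

    # Minimal sketch of a two-branch (RGB + DSM) encoder with per-stage feature fusion.
    # Hypothetical layer sizes; the fusion rule (element-wise sum) is an assumption.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        # conv -> batch norm -> ReLU -> downsample by 2
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    class TwoBranchEncoder(nn.Module):
        def __init__(self, channels=(64, 128, 256)):
            super().__init__()
            self.rgb_blocks = nn.ModuleList()
            self.dsm_blocks = nn.ModuleList()
            in_rgb, in_dsm = 3, 1  # 3-band RGB image, 1-band DSM height map
            for ch in channels:
                self.rgb_blocks.append(conv_block(in_rgb, ch))
                self.dsm_blocks.append(conv_block(in_dsm, ch))
                in_rgb, in_dsm = ch, ch

        def forward(self, rgb, dsm):
            fused = []  # per-scale fused features, to be consumed by a decoder
            x_rgb, x_dsm = rgb, dsm
            for rgb_block, dsm_block in zip(self.rgb_blocks, self.dsm_blocks):
                x_rgb = rgb_block(x_rgb)
                x_dsm = dsm_block(x_dsm)
                # fuse the heterogeneous features at this scale
                fused.append(x_rgb + x_dsm)
            return fused

    # usage: feed matching RGB and DSM crops of the same spatial size
    encoder = TwoBranchEncoder()
    features = encoder(torch.randn(1, 3, 256, 256), torch.randn(1, 1, 256, 256))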
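The DenseCRF postprocessing step can likewise be sketched with the pydensecrf package (assumed available); the kernel parameters below are common defaults for illustration and are not the values used in the paper.

    # Sketch of fully connected CRF refinement of a presegmented probability map.
    import numpy as np
    import pydensecrf.densecrf as dcrf
    from pydensecrf.utils import unary_from_softmax

    def refine_with_densecrf(softmax_probs, rgb_image, n_iters=5):
        """softmax_probs: (C, H, W) float class probabilities; rgb_image: (H, W, 3) uint8."""
        n_classes, height, width = softmax_probs.shape
        d = dcrf.DenseCRF2D(width, height, n_classes)
        d.setUnaryEnergy(unary_from_softmax(softmax_probs))
        # smoothness kernel over pixel positions
        d.addPairwiseGaussian(sxy=3, compat=3)
        # appearance kernel over positions + colors; sharpens object boundaries
        d.addPairwiseBilateral(sxy=80, srgb=13,
                               rgbim=np.ascontiguousarray(rgb_image), compat=10)
        q = d.inference(n_iters)
        return np.argmax(np.array(q).reshape(n_classes, height, width), axis=0)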

Details

Title
Semantic Labeling of High-Resolution Images Combining a Self-Cascaded Multimodal Fully Convolution Neural Network with Fully Conditional Random Field
Author
Hu, Qiongqiong 1 ; Wang, Feiting 2 ; Fang, Jiangtao 2 ; Li, Ying 1 

 1 School of Computer Science, Northwestern Polytechnical University, Xi’an 710129, China; [email protected] 
 2 Department of Computer Technology and Application, Qinghai University, Xining 810016, China; [email protected] (F.W.); [email protected] (J.F.) 
First page
3300
Publication year
2024
Publication date
2024
Publisher
MDPI AG
e-ISSN
2072-4292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3104036158
Copyright
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.