Full text



Abstract

Multi-source domain adaptation (MSDA) for remote sensing (RS) scene classification has recently gained significant attention in the visual recognition community. It leverages multiple well-labeled source domains to train a model that generalizes well to a target domain with little or no labeled data. However, distribution shifts among the source domains make it harder to align the target domain with all source domains simultaneously. Moreover, relying solely on global alignment risks losing fine-grained, class-level information, which is especially important in RS scene classification. To alleviate these issues, we present a Multi-Source Subdomain Distribution Alignment Network (MSSDANet), which introduces novel network structures and loss functions for subdomain-oriented MSDA. By adopting a two-level feature extraction strategy, the model achieves both global alignment between the target domain and the multiple source domains and alignment at the subdomain level. First, a pre-trained convolutional neural network (CNN) serves as a common feature extractor to fully exploit the invariant features shared across the target and source domains. Second, a dual-domain feature extractor follows the common feature extractor, mapping each target-source domain pair into a specific dual-domain feature space and performing subdomain alignment there. Finally, a dual-domain feature classifier makes predictions by averaging the outputs of multiple classifiers. Alongside this network, two novel loss functions are proposed to boost classification performance: Discriminant Semantic Transfer (DST) loss forces the model to effectively extract semantic information shared between target and source domain samples, while Class Correlation (CC) loss reduces feature confusion among different classes within the target domain. Notably, MSSDANet performs domain adaptation in an unsupervised manner: no label information from the target domain is required during training. Extensive experiments on four common RS image datasets demonstrate that the proposed method achieves state-of-the-art performance for cross-domain RS scene classification. Specifically, in the dual-source and three-source settings, MSSDANet outperforms the second-best algorithm in overall accuracy (OA) by 2.2% and 1.6%, respectively.
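The inference pipeline described in the abstract — a shared feature extractor, one dual-domain branch per source domain, and a final prediction obtained by averaging the branch classifiers — can be sketched as follows. This is a minimal illustrative sketch using random linear layers in place of the actual CNN and trained branches; all layer sizes, names, and activations are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class MSSDASketch:
    """Hypothetical stand-in for MSSDANet's structure: one common feature map,
    one (extractor, classifier) branch per source domain, predictions averaged."""

    def __init__(self, in_dim, feat_dim, n_sources, n_classes):
        # Stand-in for the pre-trained CNN common feature extractor.
        self.W_common = rng.normal(0, 0.1, (in_dim, feat_dim))
        # One dual-domain feature extractor + classifier per source domain.
        self.branches = [
            (rng.normal(0, 0.1, (feat_dim, feat_dim)),   # dual-domain extractor
             rng.normal(0, 0.1, (feat_dim, n_classes)))  # dual-domain classifier
            for _ in range(n_sources)
        ]

    def predict(self, x):
        h = np.tanh(x @ self.W_common)  # shared invariant features
        # Each branch maps into its own dual-domain feature space, then classifies.
        probs = [softmax(np.tanh(h @ We) @ Wc) for We, Wc in self.branches]
        # Final prediction: average over the source-specific classifiers.
        return np.mean(probs, axis=0)

model = MSSDASketch(in_dim=32, feat_dim=16, n_sources=3, n_classes=10)
x = rng.normal(size=(4, 32))  # a batch of 4 target-domain inputs (flattened features)
p = model.predict(x)
print(p.shape)  # (4, 10); each row is a probability distribution over classes
```

Averaging softmax outputs (rather than logits) keeps each row a valid probability distribution regardless of how the individual branches disagree.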

Details

Title
Enhancing Cross-Domain Remote Sensing Scene Classification by Multi-Source Subdomain Distribution Alignment Network
Author
Wang, Yong 1; Shu, Zhehao 2; Feng, Yinzhi 3; Liu, Rui 1; Cao, Qiusheng 1; Li, Danping 4; Wang, Lei 5

1 School of Electronic Engineering, Xidian University, Xi’an 710071, China; [email protected] (Y.W.); [email protected] (Y.F.); [email protected] (R.L.); [email protected] (Q.C.); [email protected] (L.W.); The 27th Research Institute of CETC, Zhengzhou 450047, China
2 Guangzhou Institute of Technology, Xidian University, Guangzhou 510555, China; [email protected]
3 School of Electronic Engineering, Xidian University, Xi’an 710071, China
4 School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
5 School of Electronic Engineering, Xidian University, Xi’an 710071, China; Guangzhou Institute of Technology, Xidian University, Guangzhou 510555, China
First page
1302
Publication year
2025
Publication date
2025
Publisher
MDPI AG
e-ISSN
2072-4292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3188878826
Copyright
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.