Abstract
This paper presents a novel unsupervised domain adaptation (UDA) framework that integrates information-theoretic principles to mitigate distributional discrepancies between the source and target domains. The proposed method incorporates two key components: (1) relative entropy regularization, which uses the Kullback–Leibler (KL) divergence to align the predicted label distribution on the target domain with a reference distribution derived from the source domain, thereby reducing prediction uncertainty; and (2) measure propagation, a technique that transfers probability mass from the source domain to construct pseudo-measures (estimated probabilistic representations) for the unlabeled target domain. This dual mechanism enhances both global feature alignment and semantic consistency across domains. Extensive experiments on the OfficeHome and DomainNet benchmarks demonstrate that the proposed approach consistently outperforms state-of-the-art methods, particularly under large domain shifts. These results confirm the robustness, scalability, and theoretical grounding of the framework, offering a new perspective on the fusion of information theory and domain adaptation.
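To make the first component concrete, the sketch below shows one way a relative entropy (KL divergence) regularizer of this kind could look in a PyTorch-style training loop. It is a minimal illustration under stated assumptions, not the paper's implementation: the names `kl_regularizer`, `target_logits`, and `reference_dist`, as well as the direction of the divergence and the weighting of the term, are hypothetical choices made for this example.

```python
import torch
import torch.nn.functional as F

def kl_regularizer(target_logits: torch.Tensor, reference_dist: torch.Tensor) -> torch.Tensor:
    """Illustrative relative-entropy regularizer: KL(reference || prediction).

    target_logits: (B, C) unnormalized classifier scores on unlabeled target-domain samples.
    reference_dist: (C,) probability vector estimated from source-domain label statistics.
    """
    log_probs = F.log_softmax(target_logits, dim=1)          # predicted log-distribution per sample
    ref = reference_dist.unsqueeze(0).expand_as(log_probs)   # broadcast the reference over the batch
    # F.kl_div expects log-probabilities as input and probabilities as target.
    return F.kl_div(log_probs, ref, reduction="batchmean")

# Illustrative usage: in practice this term would be added, with a tuning weight,
# to the supervised classification loss computed on labeled source-domain data.
logits = torch.randn(8, 5)                # fake batch of target-domain logits (B=8, C=5)
reference = torch.full((5,), 1.0 / 5)     # e.g. a source-estimated or uniform class prior
reg_loss = kl_regularizer(logits, reference)
```

The choice of KL(reference || prediction) rather than the reverse direction is one plausible reading of "aligning the predicted label distribution with a reference distribution"; either direction can be implemented with the same building blocks.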
Details
Song, Yongjia 2; Liu, Xiaoyi 1; Jiang, Huangqi 3; Liu, Shubing 4; Wu, Weixi 5; Xiang, Zhiyuan 6
1 Department of Computer Science, Arizona State University, Tempe, AZ 85281, USA; [email protected] (Z.P.); [email protected] (X.L.)
2 Department of Language Science, University of California, Irvine, CA 92697, USA; [email protected]
3 Department of Computer Science, Georgia Institute of Technology, Atlanta, GA 30332, USA
4 Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; [email protected]
5 Department of Computer Science, New York University, Brooklyn, NY 10003, USA; [email protected]
6 Department of Computer Science, University of California, San Diego, CA 92093, USA; [email protected]