Abstract

Federated learning (FL) enables collaborative model training across multiple institutions without sharing raw patient data, making it particularly suitable for smart healthcare applications. However, recent studies have revealed that merely sharing gradients provides a false sense of security: private information can still be inferred through gradient inversion attacks (GIAs). While differential privacy (DP) offers provable privacy guarantees, traditional DP methods apply uniform protection, over-protecting low-sensitivity data and under-protecting high-sensitivity data, which degrades model performance and increases privacy risk. This paper proposes a new privacy notion, sensitivity-aware differential privacy, to better balance model performance and privacy protection. Our key idea is that the sensitivity of each data sample can be objectively measured with real-world attacks. To realize this notion, we develop a defense mechanism that adjusts the level of privacy protection according to each sample's privacy leakage risk under gradient inversion attacks. Furthermore, the method extends naturally to multi-attack scenarios. Extensive experiments on real-world medical imaging datasets demonstrate that, under equivalent privacy risk, our method achieves an average performance improvement of 13.5% over state-of-the-art methods.
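As a rough illustration of the mechanism the abstract describes, the sketch below calibrates per-sample Gaussian noise to an attack-derived leakage score instead of using one uniform noise level. It is a minimal sketch, not the paper's algorithm: the function names (`leakage_risk`, `sensitivity_aware_noise`), the gradient-norm proxy for attack risk, and the linear risk-to-noise mapping are all illustrative assumptions; in the paper, sensitivity is measured by running real gradient inversion attacks.

```python
# Minimal sketch of sensitivity-aware gradient perturbation.
# NOTE: all names and the risk-to-noise mapping are hypothetical
# stand-ins, not the paper's actual mechanism.
import numpy as np


def leakage_risk(grad):
    """Stand-in attack-based risk score in [0, 1].

    A real system would run a gradient inversion attack on `grad` and
    score reconstruction quality (e.g., PSNR/SSIM against the private
    image); here the gradient norm serves as a cheap proxy.
    """
    return float(np.tanh(np.linalg.norm(grad)))


def sensitivity_aware_noise(grad, clip=1.0, sigma_min=0.5, sigma_max=4.0, rng=None):
    """Clip a per-sample gradient, then add Gaussian noise whose scale
    grows with the measured leakage risk, so high-risk samples receive
    stronger protection than low-risk ones."""
    rng = rng or np.random.default_rng()
    # Standard DP-style L2 clipping bounds each sample's contribution.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / max(norm, 1e-12))
    # Interpolate the noise multiplier between sigma_min and sigma_max
    # according to the sample's attack-measured risk.
    sigma = sigma_min + (sigma_max - sigma_min) * leakage_risk(grad)
    return clipped + rng.normal(0.0, sigma * clip, size=grad.shape)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    low_risk = rng.normal(0.0, 0.05, size=128)   # small gradient -> low risk
    high_risk = rng.normal(0.0, 2.0, size=128)   # large gradient -> high risk
    print("low-risk noised grad: ", sensitivity_aware_noise(low_risk, rng=rng)[:3])
    print("high-risk noised grad:", sensitivity_aware_noise(high_risk, rng=rng)[:3])
```

The departure from standard uniform DP noising is the per-sample `sigma`: samples whose gradients an attacker could plausibly invert receive more noise, while low-risk samples keep more signal, which is the performance-privacy trade-off the abstract targets.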

Details

Title
Sensitivity-Aware Differential Privacy for Federated Medical Imaging
Author
Zheng, Lele 1; Cao, Yang 2; Yoshikawa, Masatoshi 3; Shen, Yulong 4; Rashed, Essam A. 5; Taura, Kenjiro 6; Hanaoka, Shouhei 7; Zhang, Tao 4

1 School of Computer Science and Technology, Xidian University, Xi’an 710126, China; [email protected] (L.Z.); [email protected] (Y.S.); Department of Computer Science, Institute of Science Tokyo, Tokyo 152-8550, Japan; [email protected]
2 Department of Computer Science, Institute of Science Tokyo, Tokyo 152-8550, Japan; [email protected]
3 Faculty of Data Science, Osaka Seikei University, Osaka 533-0007, Japan; [email protected]
4 School of Computer Science and Technology, Xidian University, Xi’an 710126, China; [email protected] (L.Z.); [email protected] (Y.S.)
5 Graduate School of Information Science, University of Hyogo, Hyogo 670-0092, Japan; [email protected]
6 Graduate School of Information Science and Technology, University of Tokyo, Tokyo 113-0033, Japan; [email protected]
7 Graduate School of Medicine, University of Tokyo, Tokyo 113-0033, Japan; [email protected]
First page
2847
Publication year
2025
Publication date
2025
Publisher
MDPI AG
e-ISSN
1424-8220
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3203248059
Copyright
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.