Abstract
Deep neural networks (DNNs) significantly improve the performance and efficiency of the Internet of Things (IoT). However, DNNs are vulnerable to backdoor attacks, in which an adversary injects malicious data during model training. Such a backdoor is activated whenever an input is stamped with a pre-specified trigger, causing the model to produce an attacker-chosen prediction. It is therefore necessary to detect whether a backdoor has been injected into a DNN model before deployment. Moreover, since the training data come from various data holders, it is also essential to preserve the privacy of both the input data and the model. In this paper, we propose a framework MP-BADNet
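The abstract does not describe how the trigger is constructed; as a minimal illustrative sketch only (not the MP-BADNet protocol), a classic BadNets-style data-poisoning trigger can be stamped onto an input image like this, where the patch size and position are arbitrary assumptions:

```python
import numpy as np

def stamp_trigger(image: np.ndarray, trigger_size: int = 3, value: float = 1.0) -> np.ndarray:
    """Stamp a small bright square into the bottom-right corner of an image.

    At training time, the adversary pairs such poisoned inputs with an
    attacker-chosen target label; at inference time, any input carrying
    the same patch activates the backdoor.
    """
    poisoned = image.copy()
    poisoned[-trigger_size:, -trigger_size:] = value
    return poisoned

# Example: poison a 28x28 grayscale input (MNIST-sized, for illustration).
clean = np.zeros((28, 28), dtype=np.float32)
backdoored = stamp_trigger(clean)
```

The clean image is left untouched; only the copy carries the trigger, mirroring how a poisoned training set mixes unmodified and stamped samples.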
Details
Zhang, Lei 1; Peng, Ya 1; Ning, Jianting 3
1 Shanghai Ocean University, College of Information Technology, Shanghai, China (GRID:grid.412514.7) (ISNI:0000 0000 9833 2433)
2 Shanghai Maritime University, College of Information Engineering, Shanghai, China (GRID:grid.412518.b) (ISNI:0000 0001 0008 0619)
3 College of Computer and Cyber Security, Fujian Normal University, Fujian Provincial Key Laboratory of Network Security and Cryptology, Fuzhou, China (GRID:grid.411503.2) (ISNI:0000 0000 9271 2478); Institute of Information Engineering, Chinese Academy of Sciences, State Key Laboratory of Information Security, Beijing, China (GRID:grid.458480.5) (ISNI:0000 0004 0559 5648)