Full Text

Abstract

Simple Summary

While deep learning has become a powerful tool in the analysis of cancer imaging, deep learning models have potential vulnerabilities that pose security threats in the setting of clinical implementation. One weakness of deep learning models is that they can be deceived by adversarial images: manipulated images whose pixels have been intentionally perturbed to alter the output of the deep learning model. Recent research has shown that adversarial detection models can differentiate adversarial images from normal images and thereby protect deep learning models from attack. We compared the effectiveness of different adversarial detection schemes using three cancer imaging datasets (computed tomography, mammography, and magnetic resonance imaging). We found that the detection schemes demonstrate strong performance overall but exhibit limited efficacy in detecting a subset of adversarial images. We believe our findings provide a useful basis for the application of adversarial defenses to deep learning models for medical images in oncology.
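As a concrete illustration of how pixels can be intentionally perturbed to change a model's output, the following minimal sketch (assuming a PyTorch setup; the classifier, input tensor, and perturbation size are illustrative placeholders, not details from the article) crafts an adversarial image with a single gradient-sign step:

# Minimal sketch of adversarial image generation, assuming PyTorch.
# The model, input, and epsilon below are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(num_classes=2)   # hypothetical malignancy classifier
model.eval()

image = torch.rand(1, 3, 224, 224)       # placeholder input scaled to [0, 1]
label = torch.tensor([1])                # true class (e.g., malignant)

# Perturb each pixel slightly in the direction that increases the loss,
# which can flip the prediction while the image looks unchanged to a human.
image.requires_grad_(True)
loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.004                          # perturbation size
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()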

Abstract

Deep learning (DL) models have demonstrated state-of-the-art performance in the classification of diagnostic imaging in oncology. However, DL models for medical images can be compromised by adversarial images, in which pixel values of input images are manipulated to deceive the DL model. To address this vulnerability, our study investigates the detectability of adversarial images in oncology using multiple detection schemes. Experiments were conducted on thoracic computed tomography (CT) scans, mammography, and brain magnetic resonance imaging (MRI). For each dataset, we trained a convolutional neural network to classify the presence or absence of malignancy. We trained five DL- and machine learning (ML)-based detection models and tested their performance in detecting adversarial images. Adversarial images generated using projected gradient descent (PGD) with a perturbation size of 0.004 were detected by the ResNet detection model with an accuracy of 100% for CT, 100% for mammography, and 90.0% for MRI. Overall, adversarial images were detected with high accuracy in settings where the adversarial perturbation was above set thresholds. Adversarial detection should be considered alongside adversarial training as a defense technique to protect DL models for cancer imaging classification from the threat of adversarial images.
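As a rough, hedged sketch of the attack and defense described above (assuming PyTorch; the ResNet variant, data, and training step are placeholder assumptions rather than the authors' exact configuration), the code below generates adversarial images with projected gradient descent (PGD) at a perturbation size of 0.004 and trains a separate ResNet-based detector to label images as clean or adversarial:

# Hedged sketch: PGD attack plus a ResNet-based adversarial detector.
import torch
import torch.nn.functional as F
import torchvision.models as models

def pgd_attack(model, images, labels, eps=0.004, alpha=0.001, steps=10):
    # Iteratively perturb images while projecting back into an L-infinity
    # ball of radius eps around the original images.
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()         # gradient ascent step
        adv = images + (adv - images).clamp(-eps, eps)   # project into eps-ball
        adv = adv.clamp(0, 1)                            # keep valid pixel range
    return adv.detach()

# Victim classifier (malignancy present vs. absent) and a separate detector
# that performs binary classification: clean (0) vs. adversarial (1).
classifier = models.resnet18(num_classes=2).eval()
detector = models.resnet18(num_classes=2)

clean = torch.rand(8, 3, 224, 224)        # placeholder batch
labels = torch.randint(0, 2, (8,))
adversarial = pgd_attack(classifier, clean, labels)

# One illustrative training step for the detector.
x = torch.cat([clean, adversarial])
y = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss = F.cross_entropy(detector(x), y)
loss.backward()
optimizer.step()

Because the detector is trained on examples crafted at a specific perturbation size, its accuracy naturally depends on how large the perturbation is, consistent with the observation that detection performs best when the perturbation exceeds set thresholds.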

Details

Title
Comparing Detection Schemes for Adversarial Images against Deep Learning Models for Cancer Imaging
Author
Joel, Marina Z 1; Avesta, Arman 2; Yang, Daniel X 2; Zhou, Jian-Ge 3; Omuro, Antonio 4; Herbst, Roy S 5; Krumholz, Harlan M 6; Aneja, Sanjay 7

1 Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA; Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
2 Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
3 Department of Chemistry, Physics and Atmospheric Science, Jackson State University, Jackson, MS 39217, USA
4 Department of Neurology, Yale School of Medicine, New Haven, CT 06510, USA
5 Department of Medicine, Yale School of Medicine, New Haven, CT 06510, USA
6 Department of Medicine, Yale School of Medicine, New Haven, CT 06510, USA; Center for Outcomes Research and Evaluation (CORE), Yale School of Medicine, New Haven, CT 06510, USA
7 Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA; Center for Outcomes Research and Evaluation (CORE), Yale School of Medicine, New Haven, CT 06510, USA
First page
1548
Publication year
2023
Publication date
2023
Publisher
MDPI AG
e-ISSN
2072-6694
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2785175943
Copyright
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.