Abstract

During the recent pandemic, accurate and rapid testing of patients has remained a critical task in diagnosing and controlling the spread of COVID-19 in the healthcare industry. Because of the sudden surge in cases, most countries faced test shortages and low testing rates. Chest X-rays have been shown in the literature to be a potential means of testing for COVID-19, but manually checking X-ray reports is time-consuming and error-prone. Considering these limitations and the advancements in data science, we propose a Vision Transformer-based deep learning pipeline for COVID-19 detection from chest X-ray images. Due to the lack of large data sets, we collected data from three open-source data sets of chest X-ray images and aggregated them into a 30 K-image data set, which is, to our knowledge, the largest publicly available collection of chest X-ray images in this domain. Our proposed transformer model effectively differentiates COVID-19 from normal chest X-rays with an accuracy of 98% and an AUC score of 99% in the binary classification task. It distinguishes COVID-19, normal, and pneumonia patients' X-rays with an accuracy of 92% and an AUC score of 98% in the multi-class classification task. For evaluation on our data set, we fine-tuned some widely used models from the literature, namely EfficientNetB0, InceptionV3, ResNet50, MobileNetV3, Xception, and DenseNet-121, as baselines. Our proposed transformer model outperformed them on all metrics. In addition, a Grad-CAM-based visualization makes our approach interpretable by radiologists and can be used to monitor disease progression in the affected lungs, assisting healthcare providers.
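The abstract describes fine-tuning a Vision Transformer for binary and multi-class chest X-ray classification against CNN baselines. As an illustration only, the following is a minimal sketch of such a pipeline, assuming PyTorch with the timm library, a ViT-B/16 backbone, standard ImageNet preprocessing, and a hypothetical data/train directory with covid/normal/pneumonia subfolders; the authors' actual architecture, input size, and training settings may differ.

# Minimal sketch (not the authors' released code): fine-tuning a pretrained
# Vision Transformer for 3-class chest X-ray classification.
# Assumes PyTorch + timm and an ImageFolder-style layout: data/train/{covid,normal,pneumonia}/
import torch
import timm
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet-style preprocessing; grayscale X-rays are replicated to 3 channels.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

train_ds = datasets.ImageFolder("data/train", transform=preprocess)  # hypothetical path
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

# Pretrained ViT-B/16 with a new 3-way classification head.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=3).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5, weight_decay=1e-4)

model.train()
for epoch in range(5):  # illustrative epoch count
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: loss {loss.item():.4f}")

For the reported binary task (COVID-19 vs. normal), the same sketch applies with num_classes=2 and a two-folder directory; a Grad-CAM-style heatmap over the transformer's attention blocks, as mentioned in the abstract, would be produced with a separate visualization step not shown here.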

Details

Title
COVID-Transformer: Interpretable COVID-19 Detection Using Vision Transformer for Healthcare
Author
Shome, Debaditya 1; Kar, T 1; Mohanty, Sachi Nandan 2; Tiwari, Prayag 3; Khan, Muhammad 4; AlTameem, Abdullah 5; Zhang, Yazhou 6; Saudagar, Abdul Khader Jilani 5

1 School of Electronics Engineering, KIIT Deemed to be University, Odisha 751024, India; [email protected] (D.S.); [email protected] (T.K.)
2 Department of Computer Science & Engineering, Vardhaman College of Engineering (Autonomous), Hyderabad 501218, India; [email protected]
3 Department of Computer Science, Aalto University, 02150 Espoo, Finland; [email protected]
4 Visual Analytics for Knowledge Laboratory (VIS2KNOW Lab), School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul 03063, Korea
5 Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia; [email protected]
6 Software Engineering College, Zhengzhou University of Light Industry, Zhengzhou 450001, China; [email protected]
First page
11086
Publication year
2021
Publication date
2021
Publisher
MDPI AG
ISSN
1661-7827
e-ISSN
1660-4601
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2596027428
Copyright
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.