1. Introduction
COVID-19 continues to have a devastating impact on the health and well-being of the global population, causing critical respiratory illness in infected people. In this regard, the World Health Organization (WHO) declared the outbreak a “public health emergency of international concern” on 30 January 2020 [1]. A critical phase in the fight against COVID-19 is the effective and timely screening of infected patients, so that they can receive prompt care and treatment and be quarantined to mitigate the spread of infection.
The leading COVID-19 detection and patient-screening methods include antibody detection against SARS-CoV-2 [2], reverse transcriptase-polymerase chain reaction (RT-PCR) analysis [3], and artificial intelligence-based detection approaches [4]. These approaches identify SARS-CoV-2 RNA in respiratory samples collected through a variety of routes, such as oropharyngeal or nasopharyngeal swabs. Although RT-PCR testing is the industry standard and a highly precise test, it is a time-consuming, complex, and labor-intensive process that is limited in its application. Moreover, the reported sensitivity of RT-PCR analysis is highly variable and has not been established in a consistent and reliable manner [5].
Real-time RT-PCR greatly improves the detection of SARS-CoV-2 because of its simple qualitative analysis and accuracy, and it is mostly used when infections such as COVID-19 need to be detected at an early stage; it is therefore considered the main method for detecting SARS-CoV-2 and diagnosing COVID-19. Beyond these facts, an important issue associated with the real-time RT-PCR test is the risk of false-negative and false-positive results [5,6]. For example, it was observed that many ‘suspected’ cases with typical clinical characteristics of COVID-19 and consistent specific CT findings were not diagnosed [6,7,8,9,10,11]. Therefore, a negative result does not exclude the possibility of a COVID-19 infection, and it should not be used as the only criterion for treatment and patient-management decisions. Consequently, it has been reported that combining real-time RT-PCR with clinical features could help manage the SARS-CoV-2 and COVID-19 outbreak. Further factors underlying these observations are discussed in [12,13,14,15].
Recently, neural networks have achieved remarkable success in the field of medical imaging, owing to their self-learning capabilities and high aptitude for automatic feature extraction [16]. In particular, deep neural networks can distinguish infectious and viral pneumonia on chest radiographs [17,18,19,20,21]. Therefore, in this article, we introduce a hybrid deep neural network (HDNNs) for the diagnosis of COVID-19 using CT and X-ray images. The network classifies images into healthy and COVID-19 patients and determines the infection probability of COVID-19. These outcomes might significantly contribute to the primary screening of COVID-19 patients.
There are numerous benefits to leveraging computed tomography images for COVID-19 screening during the global COVID-19 epidemic. These benefits are even more relevant in remote and highly affected areas, and are discussed as follows. (1) Fast triaging: computed tomography imaging facilitates fast triaging of patients suspected of COVID-19 and can be performed in parallel with epidemiological testing, which is time-consuming, to provide assistance to large volumes of patients in highly affected areas. Moreover, computed tomography imaging can be quite efficient for triaging in geographical regions where patients are instructed to stay home until the onset of advanced symptoms, since anomalies are frequently visible at the time of presentation, when patients suspected of COVID-19 reach clinical sites. (2) Accessibility and ease of use: computed tomography imaging is available in many clinical sites and imaging centers, as it is a standard imaging tool in most healthcare systems, and it is readily accessible in developed countries as a cost-effective modality. (3) Flexibility: the availability of portable CT scan systems means that imaging can be performed within a quarantine room, which in turn decreases the risk of COVID-19 spread.
The fast spread and late diagnosis of COVID-19 stunned the world and affected the lives of billions of people, from both a safety and an economic perspective. Existing testing kits are limited in number and can test only a few patients. Additionally, the use of counterfeit testing kits in the medical industry is also quite common, which results not only in wasted money but also in incorrect test results. Hence, designing an automated diagnosis system is essential for providing an efficient and reliable solution. The proposed hybrid technique provides automated detection of COVID-19 patients that can save lives as well as the valuable time that medical professionals invest in examining chest X-rays to form an opinion.
The major contributions of this study that make it unique over traditional machine-learning or deep-learning techniques are given below.
State-of-the-art hybrid COVID-19 detection using a multi-model and multi-data approach [22,23,24]. Including multiple models and multiple data sources has its own cost, as more data and more complex models are needed to perform the classification task. However, they add to the efficacy of the model, which can exploit richer information for classification; in particular, data from different modalities complement each other. This is a general phenomenon that is also evident in many earlier multi-model/multi-data studies [25,26].
A multimodal dataset (CT and X-ray images), which provides more accurate and reliable results than a single CT image dataset or a single X-ray dataset.
The hybrid deep neural network model is a combination of two deep-learning models (LSTM + CNN) and is capable of accurately classifying COVID-19 patients. The proposed CNN- and LSTM-based layer arrangement shows a noteworthy performance compared to previous deep neural network architectures, by automatically learning the patterns in the COVID-19 data, which is fruitful for the classification of COVID-19 patients from healthy controls.
The automatic feature-extraction mechanism learns the features better than in previous COVID-19 studies.
To the best of our knowledge, this is the first COVID-19 detection technique that works simultaneously on a multi-model and multi-data approach and gives higher accuracy than existing COVID-19 detection techniques.
The performance comparison of the proposed HDNNs with existing COVID-19 detection techniques is shown in Table 1.
The rest of the paper is organized as follows. Section 2 describes the methodology, which is subdivided into experimental data acquisition, preprocessing, and the hybrid deep neural network (HDNNs) architecture. Section 3 presents the experimental results of the proposed technique, elaborating the quantitative analysis, the qualitative analysis, and the comparison of the proposed HDNNs model with existing techniques. Finally, the conclusion summarizes the different performance parameters and the accuracy comparison, revealing the potential of the hybrid deep neural network (HDNNs).
2. Methodology and Deliverables
2.1. Experimental Data Acquisition
In this study, a chest X-ray and computed tomography (CT) image dataset that we refer to as “Hybrid-COVID”, with image dimensions of 1080 × 1080 pixels, was used to train and test our hybrid deep neural network (HDNNs) architecture. It was built by extracting COVID-19 data from five different sources: GitHub [41], the COVID-19 radiography database [42], Kaggle [43], the COVID-19 image data collection [44], and the ActualMed COVID-19 Chest X-ray Dataset [45], which are open-source and publicly available data repositories.
Before further usage, we combined all five datasets into a single dataset consisting of 5000 patients (57% male, 32% female), containing 3500 infected subjects and 1500 healthy controls, in the age group of 38–55 years. The selection of these five databases to create “Hybrid-COVID” was guided by the fact that all five are open source and fully available to clinicians, the research community, and the general public, and that they fulfill the diagnostic criteria for COVID-19 defined by the World Health Organization (WHO).
2.2. Preprocessing
Noise always exists in digital images, and it is challenging to remove it without prior knowledge of suitable filtering techniques. The acquired COVID-19 data were polluted with different types of noise, arising from many sources, including monitoring devices, patient movement, and device error. It is necessary to clean these data of noise because noise affects the classification accuracy of the model. To remove this degradation from the acquired images, we analyzed the results of several researchers. An iterative mean filter for image denoising was used in [46], which was based on the LMS (least mean squares) algorithm and decreased the noise in digital image processing. Rai et al. [47] endorsed the use of a hybrid adaptive algorithm based on the wavelet transform and independent component analysis for denoising MRI images and for efficient suppression of interference in images. A general model of noise contamination can be described by Equation (1).
P(n) = Q(n) + T(r)   (1)
where P(n) and Q(n) are samples of the COVID data with and without noise, respectively; r represents the source interference, and T is an unknown transfer function.
The Kalman filter [9] is an efficient recursive data-processing algorithm that has been used extensively in many applications, such as industrial control systems, radar tracking, aero-engine analysis, and intelligent robots. The Kalman filter works well in reducing noise while preserving the underlying structure of an image, compared to the filters mentioned above. We used it in our study because the Kalman filtering method recursively uses past data and gives more accurate results than a filtering method based only on incoming measurements. In comparison to [48,49], which used deep-learning approaches to denoise CT images and are well developed for medical image denoising, we chose the Kalman filter in this article because of the multi-imaging data, its recursive data processing, its use of prior predicted values, and its suitability for polluted-region detection, for which the Kalman filter gives acceptable performance. Our Kalman-filter-based formulation joins an adaptive predictor filter (APF) and the discrete wavelet transform (DWT) to identify pure noise. Moreover, our earlier work [50,51,52,53] also supports this choice.
The noise-removal model proposed in the present study included the following steps: (1) image decomposition, (2) noisy-region detection, (3) prediction of the polluted parts, and (4) image restoration. The DWT was used to decompose the images and identify the regions. The DWT decomposition extracts the low-frequency parts and the nonstationary time series, which are then divided into several approximately stationary time series. The actual image is predicted from the decomposed images by applying the conventional Kalman filter, and the adaptive filter is applied to improve the prediction and to estimate future values based on the previous ones.
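The sketch below illustrates step (1), the DWT-based image decomposition, and the matching reconstruction used in step (4). It is a minimal example assuming the PyWavelets library; the wavelet family ('db2') and the decomposition level are illustrative choices, as they are not specified in this study.

```python
# Minimal sketch of steps (1) and (4): DWT decomposition of an image into a
# low-frequency approximation plus detail sub-bands, and the inverse transform.
# The wavelet ('db2') and level are assumptions, not values from the paper.
import numpy as np
import pywt

def decompose_image(image: np.ndarray, wavelet: str = "db2", level: int = 2):
    """Split an image into its approximation (low-frequency) and detail sub-bands."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    approximation, detail_levels = coeffs[0], coeffs[1:]
    return approximation, detail_levels

def reconstruct_image(approximation, detail_levels, wavelet: str = "db2"):
    """Rebuild the image after the noisy sub-bands have been filtered."""
    return pywt.waverec2([approximation, *detail_levels], wavelet=wavelet)
```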
We used the following Kalman discrete-time model to remove the noise, with the state equation

x_k = A x_{k-1} + w_{k-1}   (2)

and the analysis equation

m_k = H x_k + v_k   (3)

where x_k is the state variable, m_k is the analysis variable, A and H are matrices with n rows and m columns, w_k is the modeling error noise, and v_k is the analysis error noise, respectively. We focus only on w_k, which is Gaussian noise, and v_k, which is non-Gaussian noise.
Considering Equations (2) and (3), the Kalman filtering model supposes that w_k and v_k are both Gaussian, with the following covariances:

p(w) ~ N(0, Q)   (4)

p(v) ~ N(0, R)   (5)

Let x̂_k^- be the prior estimation, which is the approximation of x_k from m_0, m_1, ⋯, m_{k-1}, and let x̂_k be the posterior estimation, which is the approximation of x_k from m_0, m_1, ⋯, m_k. Their error covariances are

Z_k^- = E[(x_k - x̂_k^-)(x_k - x̂_k^-)^T]   (6)

Z_k = E[(x_k - x̂_k)(x_k - x̂_k)^T]   (7)

where E signifies expectation and x_k is from (2).
The Kalman filter supposes that the posterior estimation is expressed as the prior estimate corrected by the measurement data:

x̂_k = x̂_k^- + K_k (m_k - H x̂_k^-)   (8)

where the n × m matrix K_k represents the Kalman gain, and m_k is from (3).
The Kalman gain K_k is resolved by minimizing E[(x_k - x̂_k)^T (x_k - x̂_k)]. Note that

E[(x_k - x̂_k)^T (x_k - x̂_k)] = Tr(Z_k)   (9)

where Tr signifies the trace operator and the n × n covariance matrix Z_k is presented as follows:

Z_k = (I - K_k H) Z_k^- (I - K_k H)^T + K_k R K_k^T   (10)

Putting (10) into (9) and setting the derivative with respect to K_k to zero, we get

K_k = Z_k^- H^T (H Z_k^- H^T + R)^{-1}   (11)

By (10) and (11), Z_k can be simplified as

Z_k = (I - K_k H) Z_k^-   (12)

Combining (8), (11), and (12), we thus filter the image with the conventional Kalman filter.
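The following is a minimal scalar sketch of the recursive update in Equations (2)–(12), assuming a random-walk state model (A = H = 1) applied along each image row; the noise variances q and r are illustrative values, not parameters reported in this study.

```python
# Scalar Kalman update sketch for Eqs. (2)-(12), assuming A = H = 1 (random walk).
# q and r are illustrative process/measurement noise variances.
import numpy as np

def kalman_denoise_1d(measurements: np.ndarray, q: float = 1e-4, r: float = 1e-2) -> np.ndarray:
    x_hat = float(measurements[0])    # posterior state estimate
    z = 1.0                           # posterior error covariance Z_k
    filtered = np.empty(len(measurements), dtype=float)
    filtered[0] = x_hat
    for k in range(1, len(measurements)):
        # Prediction (Eq. (2)): prior estimate and prior covariance
        x_prior = x_hat
        z_prior = z + q
        # Correction (Eqs. (8), (11), (12))
        k_gain = z_prior / (z_prior + r)                       # Kalman gain K_k
        x_hat = x_prior + k_gain * (measurements[k] - x_prior)
        z = (1.0 - k_gain) * z_prior
        filtered[k] = x_hat
    return filtered

# Applied row by row to a noisy image, e.g.:
# denoised = np.vstack([kalman_denoise_1d(row) for row in noisy_image])
```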
Finally, the noise that strongly affected the chest X-ray-based COVID data was removed from the raw images, and the data were ready for further processing.
2.3. Proposed Hybrid Deep Neural Network Architecture (HDNNs)
In this study, a hybrid deep neural network architecture based on a convolutional neural network (CNN) and long short-term memory (LSTM) is proposed for COVID-19 detection. Both are deep-learning models, a sub-field of artificial intelligence, that are well suited to classifying, processing, and making predictions, and that have shown extraordinary performance in automatic feature extraction from image datasets [23,26]. We picked the CNN because of its automated feature learning and used the LSTM to deal with the vanishing-gradient problem that occurs when training neural networks. The proposed HDNN architecture is based on three distinctive arrangements of diverse layers (convolutional layers, pooling layers, and dropout layers) and one LSTM layer, and was assessed on the COVID-19 datasets used in this article. The convolutional layer is the major building block of the CNN and is used to filter out the discriminating features from the original images. The pooling layer is utilized to reduce the dimensionality of the data using a sliding-window approach, based on the size of the window. The dropout layer is used to prevent the model from over-fitting.
The proposed layer arrangements showed a noteworthy performance, as compared to the previous deep neural network architectures, by automatically learning the patterns in COVID-19 data that is fruitful for the classification of COVID patients from healthy controls.
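The following is a minimal sketch of this layer arrangement in tf.keras, using the 256 × 64 input windows described later in this section. Filter counts, kernel sizes, and dropout rates are illustrative assumptions; only the layer types and their ordering follow the description above.

```python
# Minimal HDNN sketch: three conv/pool/dropout blocks, one LSTM layer, and a
# fully connected softmax output for the three classes (normal, pneumonia, COVID-19).
# Hyperparameters are illustrative, not the values used in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_hdnn(input_shape=(256, 64), num_classes=3):
    model = models.Sequential(name="HDNN_sketch")
    model.add(layers.Input(shape=input_shape))
    for filters in (32, 64, 128):                      # three conv/pool/dropout blocks
        model.add(layers.Conv1D(filters, kernel_size=3, padding="same", activation="relu"))
        model.add(layers.MaxPooling1D(pool_size=2))
        model.add(layers.Dropout(0.3))
    model.add(layers.LSTM(64))                          # single LSTM layer
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```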
The mathematical representation of the proposed model and its layer combination is stated below.
Convolutional layer:

S(p, q) = (w ∗ z)(p, q) = Σ_a Σ_b w(a, b) z(p − a, q − b)   (13)

where w is the image, z is the kernel, and p and q are the indices of the rows and columns of the resultant matrix. Equation (14) describes mathematically how the feature detector shifts according to the input.
Convolution function:

s(t) = Σ_a w(a) z(t − a)   (14)

where the time index t and the summation index a are integers. A common engineering convention is to denote the convolution operation with an asterisk:

s(t) = (w ∗ z)(t)   (15)
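As a small illustration of Equation (13), the sketch below implements the discrete 2-D convolution directly with NumPy; in practice a library routine such as scipy.signal.convolve2d would be used, and the toy image and kernel are purely illustrative.

```python
# Direct implementation of the discrete 2-D convolution in Equation (13),
# restricted to the 'valid' output region, for illustration only.
import numpy as np

def conv2d(w: np.ndarray, z: np.ndarray) -> np.ndarray:
    """S(p, q) = sum_a sum_b w(a, b) * z(p - a, q - b)."""
    kh, kw = z.shape
    out_h, out_w = w.shape[0] - kh + 1, w.shape[1] - kw + 1
    flipped = z[::-1, ::-1]                       # convolution flips the kernel
    out = np.empty((out_h, out_w))
    for p in range(out_h):
        for q in range(out_w):
            out[p, q] = np.sum(w[p:p + kh, q:q + kw] * flipped)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)   # toy "image" w
kernel = np.array([[0.0, 1.0], [1.0, 0.0]])        # toy "kernel" z
print(conv2d(image, kernel))                        # 3 x 3 feature map
```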
This research was implemented in Python by applying the hybrid deep neural network (HDNNs) to the X-ray and computed tomography (CT) images. CT is a non-invasive imaging approach that can capture specific conditions in the lungs associated with COVID-19. We therefore analyzed it with the most appropriate deep neural network approach, which is an effective tool for the primary analysis of COVID-19. Artificial intelligence using deep neural networks has already attained strong performance in the field of radiology [10]. Past research effectively applied survey-based and reverse transcriptase-polymerase chain reaction methods to identify pneumonia in pediatric chest radiographs and to distinguish viral and bacterial pneumonia in 2D pediatric chest radiographs [11].
In this article, HDNNs is applied to computed tomography (CT) images [12], and it achieved a higher classification accuracy than the other existing techniques in the literature. The framework of HDNNs is shown in Figure 1. This framework was trained using a transfer-learning approach, in which knowledge from previous training is automatically extended and reused in further diagnosis. The infection probability of COVID-19 was computed using two major Python libraries, Keras and TensorFlow. Ultimately, chest X-rays, CT, and HDNNs provide a consistent and fast methodology for the identification of COVID-19 patients. The block-level representation of our proposed technique using the hybrid deep neural network (HDNNs) and chest X-rays is shown in Figure 2.
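As a hedged illustration of this transfer-learning step, the sketch below reloads weights from a previous training run and freezes the convolutional blocks before fine-tuning; it reuses the build_hdnn sketch given earlier, and the checkpoint filename is hypothetical.

```python
# Hedged transfer-learning sketch: weights from a previous run are reloaded and
# the convolutional feature extractors are frozen before fine-tuning.
# "hdnn_pretrained.weights.h5" is a hypothetical path, not one used by the authors.
import tensorflow as tf

model = build_hdnn()                                  # architecture sketch from above
model.load_weights("hdnn_pretrained.weights.h5")      # reuse knowledge from previous training

for layer in model.layers:
    if not isinstance(layer, (tf.keras.layers.LSTM, tf.keras.layers.Dense)):
        layer.trainable = False                       # keep learned convolutional features fixed

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```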
The proposed hybrid deep neural network segments the COVID data into 1-s windows of 256 samples, treating the data in a time-series format. As the sampling rate was 256 samples per second, every COVID fragment contained 256 data points (the window length). The window size was selected empirically, and a window size of 1 s was observed to give significant results. The input data dimension of the COVID datasets is set to 256 × 64 for every instance of each class. Furthermore, the input COVID data are split into training and testing sets with a ratio of 80 and 20 percent, respectively. Initially, the training dataset was passed to the hybrid deep neural network model for the classification of COVID-19 and healthy subjects, and the testing dataset was then used to evaluate the classifier performance, using several performance metrics such as accuracy, precision, recall, and F1-score.
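A minimal sketch of this 80/20 split and training loop is shown below; it reuses the build_hdnn sketch from above, and the placeholder arrays, the scikit-learn split, and the hyperparameters (epochs, batch size) are illustrative assumptions rather than settings reported by the study.

```python
# Sketch of the 80/20 split and training described above. X holds 256 x 64 windows
# and y holds one-hot labels for normal / pneumonia / COVID-19; both are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 256, 64)                      # placeholder windowed inputs
y = np.eye(3)[np.random.randint(0, 3, size=100)]      # placeholder one-hot labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = build_hdnn()                                  # architecture sketch from Section 2.3
model.fit(X_train, y_train, epochs=10, batch_size=32,
          validation_data=(X_test, y_test))
test_loss, test_acc = model.evaluate(X_test, y_test)
```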
Evaluation Criteria
Four different metrics were used to evaluate the proposed method: accuracy, precision, recall, and F1-score.
The mathematical representation of the performance metrics is shown below.
Accuracy = (tp + tn)/(tp + fp + tn + fn)   (16)

Precision = tp/(tp + fp)   (17)

Recall = tp/(tp + fn)   (18)

F1 = 2 × (Precision × Recall)/(Precision + Recall)   (19)

where “tp” refers to true positives, “tn” to true negatives, “fp” to false positives, and “fn” to false negatives.
2.4. Potential Risks to the Development Progress and the Related Risk Strategy
The main potential risk that we faced during development was finding a balance between sensitivity and specificity, which was a considerable challenge because infectious diseases such as COVID-19 spread quickly.
The implementation flow of data collection and deliverables is represented in Figure 3.
3. Experimental Results
To estimate the effectiveness of the proposed HDNNs, we performed both quantitative and qualitative analyses to develop a good understanding of its identification and decision-making behavior.
3.1. Quantitative Analysis
To examine the proposed HDNNs performance in a quantitative manner, we calculated the test accuracy, as well as the positive predictive value (PPV) and the sensitivity for each type of contamination, on the above-mentioned COVID-19 X-ray dataset. The test sensitivity and positive predictive value (PPV) for normal, non-COVID (pneumonia), and COVID-19 patients, along with the applied architectures, are shown in Table 2 and Table 3, respectively. The results show that HDNNs attained a good test accuracy (99%) for detecting COVID-19 patients, consequently emphasizing the effectiveness of leveraging a human-machine cooperative design scheme for building highly customized deep neural network architectures. The performance of the proposed HDNNs model for COVID-19 detection was also evaluated with the help of a confusion matrix, which is often used to evaluate the accuracy of machine-learning classifiers. It consists of a set of rows and columns in which each row of the confusion matrix shows the number of instances in the predicted class, while the columns represent the number of instances in the actual class, or vice versa.
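The confusion matrix and the per-class sensitivity and PPV can be obtained as in the hedged sketch below, which assumes scikit-learn and reuses the model and held-out test set from the sketches in Section 2.3.

```python
# Sketch of the quantitative evaluation: confusion matrix plus per-class
# sensitivity (recall) and positive predictive value (precision).
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

class_names = ["normal", "pneumonia", "covid-19"]
y_prob = model.predict(X_test)                       # model and X_test from the Section 2.3 sketches
y_pred = np.argmax(y_prob, axis=1)
y_true = np.argmax(y_test, axis=1)

cm = confusion_matrix(y_true, y_pred)                # rows: actual class, columns: predicted class
ppv = precision_score(y_true, y_pred, average=None)  # positive predictive value per class
sensitivity = recall_score(y_true, y_pred, average=None)

for name, p, s in zip(class_names, ppv, sensitivity):
    print(f"{name}: PPV = {p:.3f}, sensitivity = {s:.3f}")
print(cm)
```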
3.2. Qualitative Analysis
This section presents the detailed data distribution used for the proposed HDNNs framework, to develop a better understanding of how HDNNs make decisions. It verifies whether the model makes recognition decisions based on significant information (data) or on inaccurate information, i.e., biased decisions based on inappropriate data; such situations are very problematic and difficult to track. A dataset of over 5000 COVID-19 patients was used in this study, and the data distribution was analyzed to train and test the X-ray and CT images. The distribution of the X-ray images for COVID-19 detection is shown in the first half of Table 4, and the distribution of the CT images is shown in the second half of Table 4. The training and testing images for all three categories (normal, pneumonia, and COVID-19) are shown separately. It can be seen from the table that almost 80% of the data was used for training and almost 20% for testing. The training performance of the hybrid deep neural network on the COVID-19 dataset was also evaluated by importing the Python library Keras, and the training loss and accuracy on the COVID-19 dataset were measured to track the training performance. The resultant output of the proposed method is presented in the form of a confusion matrix in Figure 4.
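A minimal sketch of this training-performance tracking is given below: the History object returned by Keras' fit() stores the per-epoch loss and accuracy, which can then be plotted. The epoch count and the plotting choices are illustrative, and the sketch reuses the model and data split from the Section 2.3 sketches.

```python
# Tracking training loss and accuracy via the Keras History object.
import matplotlib.pyplot as plt

history = model.fit(X_train, y_train, epochs=20,
                    validation_data=(X_test, y_test), verbose=0)

plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["accuracy"], label="training accuracy")
plt.xlabel("Epoch")
plt.legend()
plt.show()
```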
3.3. Comparison of HDNNs with the Existing COVID-19 Detection Techniques
To compare our state-of-the-art HDNNs approach with the existing COVID-19 detection techniques and to demonstrate the originality of our work, we selected the deep neural network (DNN) approach as the benchmark. First, we evaluated both techniques on the raw CT and X-ray images to calculate the loss, which depicts the inaccuracy of the model when it wrongly classifies the presence of a disease that does not exist in reality. The graphical behavior of the DNN and HDNNs is shown in Figure 5, against the number of COVID-19 CT and chest X-ray samples. The classification accuracy of both the DNN and HDNNs models is analyzed in Figure 6. The hybrid deep neural network (HDNNs) model with long short-term memory (LSTM) outperformed the DNN, reaching a 99% classification accuracy.
4. Conclusions
This article revealed the potential of a hybrid deep neural network (HDNNs) for the automatic diagnosis of COVID-19 from computed tomography and chest X-ray data. The benefit of the proposed HDNNs over traditional deep-learning and machine-learning frameworks is its use of multiple models and multiple data sources. After analyzing the COVID-19 X-ray datasets using the hybrid deep neural network and computed tomography (CT), it was concluded that the hybrid deep neural network could accurately identify COVID-19 and discriminate it from pneumonia, showing excellent sensitivity for the identification of COVID-19. In comparison to previous techniques used for COVID-19 detection, our proposed HDNNs model achieved a 99% classification accuracy. In the future, we believe that it will prove to be an essential tool for COVID-19 identification in endemic areas.
M.I., U.D., S.Y., M.A.I. performed the literature review, project management, resource, structuring and review of results, funding acquisition, and writing of the paper draft. S.B. performed the editing and restructuring of paper. F.A., A.G., and A.S.A. performed the project management. S.Y., S.R., and T.A. performed data analysis, manuscript review, and editing. U.D., S.H., F.A. performed the editing of the paper and resource management. All authors have read and agreed to the published version of the manuscript.
This research work is funded by the Ministry of Education and the Deanship of Scientific Research, Najran University, Kingdom of Saudi Arabia, under code number NU/ESCI/18/006.
This article does not contain any implementation involving clinical data of humans or animals.
This article does not contain any studies with human participants or animals performed by any of the authors.
This study does not report any data which required external approval.
The authors acknowledge the support from the Ministry of Education and the Deanship of Scientific Research, Najran University, Kingdom of Saudi Arabia, under code number NU/ESCI/18/006.
The authors declare that they have no conflict of interest.
Authors received ethical approval from the ethical committee of the deanship of scientific research, Najran University, Saudi Arabia.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 1. Hybrid Deep Neural network (HDNNs) architecture for COVID-19 detection consists of a dropout layer (DL), a convolutional layer (CL), a pooling layer (PL) with LSTM blocks, and a fully connected (FC) layer.
Figure 2. The block level representation of our proposed technique by using hybrid deep neural network (HDNNs) and chest X-ray.
Figure 4. The 5-fold confusion matrix results of the multi-class classification task. (a) Overlapped Confusion Matrix, (b) 1-Fold Confusion Matrix (CM), (c) 2-Fold Confusion Matrix (CM), (d) 3-Fold Confusion Matrix (CM), (e) 4-Fold Confusion Matrix (CM), and (f) 5-Fold Confusion Matrix (CM).
Figure 5. Loss on the COVID-19 dataset against the number of COVID-19 CT and chest X-ray samples for the deep neural network (DNN) and the hybrid deep neural network (HDNNs).
Figure 6. Accuracy on the COVID-19 dataset against the number of COVID-19 CT and chest X-ray samples for the deep neural network (DNN) and the hybrid deep neural network (HDNNs).
Table 1. Performance comparison of existing COVID-19 detection techniques with HDNNs, in which the shaded area represents the chest X-ray-based techniques that are used as a benchmark for this study.

| Authors | Published | Technique Summary | Performance |
| --- | --- | --- | --- |
| Xiao, L., et al. [ ] | 31 July 2020 | Artificial intelligence-assisted tool using computed tomography (CT) imaging to predict disease severity | Accuracy: 81.9% |
| Li et al. [ ] | 19 March 2020 | Artificial intelligence approach with chest X-ray | Per-scan sensitivity and specificity: 87% and 92% |
| Dansana, D., et al. [ ] | 28 August 2020 | CNN-based methods using CT and X-ray images | Validation accuracy: 91% |
| Chen, J., et al. [ ] | 1 March 2020 | Deep learning and CT image-based method for COVID-19 detection | Accuracy: 95.24% |
| Zhang et al. [ ] | 28 June 2020 | Deep learning with chest X-ray | Accuracy: 83.61% and sensitivity: 71.70% |
| Zhang, K., et al. [ ] | 3 September 2020 | AI system to diagnose COVID-19 pneumonia using CT scans | Accuracy: 80% |
| Narin et al. [ ] | 12 July 2020 | Deep CNN using X-ray images | Accuracy: 98% |
| Acar, E., et al. [ ] | 14 June 2020 | Deep learning-based models for detecting COVID-19 from computed tomography (CT) images | Accuracy: 98.8% |
| Ozturk et al. [ ] | 18 June 2020 | Deep neural network with X-ray images | Accuracy: 98.08% and 87.02% for binary and multi-class, respectively |
| Soares, L., et al. [ ] | 2 July 2020 | Automatic detection of COVID-19 cases on X-ray images using convolutional neural networks | Accuracy: 81% |
| Goel, C., et al. [ ] | 17 August 2020 | Deep network architecture for COVID-19 detection using computed tomography images | Accuracy: 96.78% |
| Afshar, P., et al. [ ] | 28 September 2020 | COVID-19 computed tomography (CT) scans using machine learning and deep learning | Accuracy: 91% |
| Song, Y., et al. [ ] | 25 February 2020 | Deep learning-based CT diagnosis system | Accuracy: 0.99 and sensitivity: 0.96 |
| Shah, V., et al. [ ] | 11 July 2020 | Diagnosis of COVID-19 using CT scan images and deep learning techniques | Accuracy: 94.52% |
| Our study | 10 January 2021 | Hybrid deep neural networks (HDNNs), CT images, and chest X-rays for the detection of COVID-19 | Classification accuracy: 99% |
Table 2. Sensitivity for normal, pneumonia, and COVID-19 patients.

| Neural Network Architecture | No Findings | Pneumonia Patient | COVID-19 Patient |
| --- | --- | --- | --- |
| Recurrent Neural Networks (RNNs) | 78% | 80.5% | 81.4% |
| Deep Belief Networks (DBNs) | 82.3% | 84% | 83.0% |
| Deep Neural Networks (DNNs) | 81.5% | 86.7% | 87% |
| Hybrid Deep Neural Network (HDNNs) | 88.1% | 99.5% | 99% |
Table 3. Positive predictive value (PPV) for each infection type.

| Neural Network Architecture | No Findings | Pneumonia Patient | COVID-19 Patient |
| --- | --- | --- | --- |
| Recurrent Neural Networks (RNNs) | 68.1% | 70.5% | 51.4% |
| Deep Belief Networks (DBNs) | 72.3% | 74% | 75.0% |
| Deep Neural Networks (DNNs) | 81% | 84.7% | 86% |
| Hybrid Deep Neural Network (HDNNs) | 89% | 96.5% | 98.7% |
Table 4. Distribution of X-ray and CT images for different contamination types.

| Subject Type | Training | Testing |
| --- | --- | --- |
| Number of images (X-ray) | | |
| Normal | 300 | 200 |
| Pneumonia | 800 | 200 |
| COVID-19 | 1000 | 200 |
| Number of images (CT) | | |
| Normal | 400 | 200 |
| Pneumonia | 500 | 200 |
| COVID-19 | 800 | 200 |
References
1. World Health Organization. Available online: https://www.who.int/emergencies/en/ (accessed on 10 July 2020).
2. Xiang, F.; Wang, X.; He, X.; Peng, Z.; Yang, B.; Zhang, J.; Zhou, Q.; Ye, H.; Ma, Y.; Li, H. et al. Antibody detection and dynamic characteristics in patients with COVID-19. Clin. Infect. Dis.; 2020; 71, pp. 1930-1934. [DOI: https://dx.doi.org/10.1093/cid/ciaa461]
3. Ai, T.; Yang, Z.; Hou, H.; Zhan, C.; Chen, C.; Lv, W.; Tao, Q.; Sun, Z.; Xia, L. Correlation of Chest CT and RT-PCR Testing for Coronavirus Disease 2019 (COVID-19) in China: A Report of 1014 Cases. Radiology; 2020; 296, pp. E32-E40. [DOI: https://dx.doi.org/10.1148/radiol.2020200642]
4. Li, L.; Qin, L.; Xu, Z.; Yin, Y.; Wang, X.; Kong, B.; Bai, J.; Lu, Y.; Fang, Z.; Song, Q. et al. Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology; 2020; 200905. [DOI: https://dx.doi.org/10.1148/radiol.2020200905]
5. Ye, H.; Gao, F.; Yin, Y.; Guo, D.; Zhao, P.; Lu, Y.; Wang, X.; Bai, J.; Cao, K.; Song, O. et al. Precise diagnosis of intracranial hemorrhage and subtypes using a three-dimensional joint convolutional and recurrent neural network. Eur. Radiol.; 2019; 29, pp. 6191-6201. [DOI: https://dx.doi.org/10.1007/s00330-019-06163-2] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31041565]
6. Tahamtan, A.; Ardebili, A. Real-time RT-PCR in COVID-19 detection: Issues affecting the results. Expert Rev. Mol. Diagn.; 2020; 20, pp. 453-454. [DOI: https://dx.doi.org/10.1080/14737159.2020.1757437] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32297805]
7. Gold, M.C.A.; Work, S.C.S. Role of Rt-Pcr in COVID 19 diagnosis. J. Seybold Rep. ISSN NO; 2020; 25, pp. 451-456.
8. Bahreini, F.; Najafi, R.; Amini, R.; Khazaei, S.; Bashirian, S. Reducing False Negative PCR Test for COVID-19. Int. J. MCH AIDS (IJMA); 2020; 9, pp. 408-410. [DOI: https://dx.doi.org/10.21106/ijma.421]
9. Subirana, B.; Hueto, F.; Rajasekaran, P.; Laguarta, J.; Puig, S.; Malvehy, J.; Mitja, O.; Trilla, A.; Moreno, C.I.; Valle, J.F.M. et al. Hi sigma, do I have the coronavirus? Call for a new artificial intelligence approach to support health care professionals dealing with the covid-19 pandemic. arXiv; 2020; arXiv: 2004.06510
10. Achdout, H.; Aimon, A.; Bar-David, E.; Barr, H.; Ben-Shmuel, A.; Bennett, J.; Bobby, M.L.; Brun, J.; Sarma, B.; Calmiano, M. et al. COVID Moonshot Consortium. COVID moonshot: Open science discovery of SARS-CoV-2 main protease inhibitors by combining crowdsourcing, high-throughput experiments, computational simulations, and machine learning. bioRxiv; 2020; [DOI: https://dx.doi.org/10.1101/2020.10.29.339317]
11. Long, J.B.; Ehrenfeld, J.M. The Role of Augmented Intelligence (AI) in Detecting and Preventing the Spread of Novel Coronavirus. J. Med. Syst.; 2020; 44, pp. 1-2. [DOI: https://dx.doi.org/10.1007/s10916-020-1536-6]
12. Aminololama-Shakeri, S.; López, J.E. The Doctor-Patient Relationship with Artificial Intelligence. Am. J. Roentgenol.; 2019; 212, pp. 308-310. [DOI: https://dx.doi.org/10.2214/AJR.18.20509]
13. Cao, Y.; Jiang, H. Study on Jingdong Company’s Emergency Supply chain in the Context of Unconventional Emergency of Novel Coronavirus Pneumonia. Proceedings of the International Conference on New Energy Technology and Industrial Development (NETID 2020), EDP Sciences; Dali, China, 18–20 December 2021; Volume 235, 03026.
14. Kong, B.; Wang, X.; Bai, J.; Lu, Y.; Gao, F.; Cao, K.; Xia, J.; Song, Q.; Yin, Y. Learning tree-structured representation for 3D coronary artery segmentation. Comput. Med. Imaging Graph.; 2020; 80, 101688. [DOI: https://dx.doi.org/10.1016/j.compmedimag.2019.101688]
15. Thanh, D.N.H.; Engínoğlu, S. An iterative mean filter for image denoising. IEEE Access; 2019; 7, pp. 167847-167859.
16. Rai, H.M.; Chatterjee, K. Hybrid adaptive algorithm based on wavelet transform and independent component analysis for denoising of MRI images. Measurement; 2019; 144, pp. 72-82. [DOI: https://dx.doi.org/10.1016/j.measurement.2019.05.028]
17. Fan, F.; Shan, H.; Kalra, M.K.; Singh, R.; Qian, G.; Getzin, M.; Teng, Y.; Hahn, J.; Wang, G. Quadratic Autoencoder (Q-AE) for Low-Dose CT Denoising. IEEE Trans. Med. Imaging; 2019; 39, pp. 2035-2050. [DOI: https://dx.doi.org/10.1109/TMI.2019.2963248] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31902758]
18. Bayoudh, K.; Hamdaoui, F.; Mtibaa, A. Hybrid-COVID: A novel hybrid 2D/3D CNN based on cross-domain adaptation approach for COVID-19 screening from chest X-ray images. Phys. Eng. Sci. Med.; 2020; 43, pp. 1415-1431. [DOI: https://dx.doi.org/10.1007/s13246-020-00957-1]
19. Kassani, S.H.; Kassasni, P.H.; Wesolowski, M.J.; Schneider, K.A.; Deters, R. Automatic detection of coronavirus disease (covid-19) in x-ray and ct images: A machine learning-based approach. arXiv; 2020; arXiv: 2004.10641
20. Nair, R.; Vishwakarma, S.; Soni, M.; Patel, T.; Joshi, S. Detection of COVID-19 cases through X-ray images using hybrid deep neural network. World J. Eng.; 2021; [DOI: https://dx.doi.org/10.1108/WJE-10-2020-0529]
21. Zhang, D.; Wang, Y.; Zhou, L.; Yuan, H.; Shen, D. Alzheimer’s Disease Neuroimaging Initiative. Multimodal classification of Alzheimer’s disease and mild cognitive impairment. Neuroimage; 2011; 55, pp. 856-867. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2011.01.008]
22. Khan, M.A.; Ashraf, I.; Alhaisoni, M.; Damaševičius, R.; Scherer, R.; Rehman, A.; Bukhari, S.A.C. Multimodal brain tumor classification using deep learning and robust feature selection: A machine learning application for radiologists. Diagnostics; 2020; 10, 565. [DOI: https://dx.doi.org/10.3390/diagnostics10080565]
23. Abbasi, A.B.; Dumanian, J.; Okum, S.; Nwaudo, D.; Lee, D.; Prakash, P.; Bendix, P. Association of a New Trauma Center With Racial, Ethnic, and Socioeconomic Disparities in Access to Trauma Care. JAMA Surg.; 2020; 156, pp. 97-99. [DOI: https://dx.doi.org/10.1001/jamasurg.2020.4998]
24. Pan, J.; Yang, X.; Cai, H.; Mu, B. Image noise smoothing using a modified Kalman filter. Neurocomputing; 2016; 173, pp. 1625-1629. [DOI: https://dx.doi.org/10.1016/j.neucom.2015.09.034]
25. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J. Artificial intelligence in radiology. Nat. Rev. Cancer; 2018; 18, pp. 500-510. [DOI: https://dx.doi.org/10.1038/s41568-018-0016-5] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29777175]
26. Hilmes, M.A.; Dunnavant, F.D.; Singh, S.P.; Ellis, W.D.; Payne, D.C.; Zhu, Y.; Griffin, M.R.; Edwards, K.M.; Williams, J.V. Chest radiographic features of human metapneumovirus infection in pediatric patients. Pediatr. Radiol.; 2017; 47, pp. 1745-1750. [DOI: https://dx.doi.org/10.1007/s00247-017-3943-5] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28831577]
27. Heumann, J.M. Computed Tomography. U.S. Patent; No. 6,765,981, 20 July 2004.
28. Cohen, J.P.; Morrison, P.; Dao, L. COVID-19 image data collection. arXiv; 2020; arXiv: 2003.11597
29. Radiological Society of North America. COVID-19 Radiography Database. 2019; Available online: https://www.kaggle.com/tawsifurrahman/covid19-radiography-database (accessed on 12 January 2021).
30. Open Database of COVID-19 Cases with Chest X-Ray or CT Images. 2020; Available online: https://github.com/ieee8023/covid-chestxray-dataset (accessed on 10 December 2020).
31. Kaggle: Corona Hack -Chest X-Ray-Dataset. Available online: https://www.kaggle.com/praveengovi/coronahack-chest-xraydataset (accessed on 10 December 2020).
32. Chung, A. Actualmed COVID-19 Chest X-Ray Data Initiative. 2020; Available online: https://github.com/agchung/Actualmed-COVID-chestxray-dataset (accessed on 10 December 2020).
33. Narin, A.; Kaya, C.; Pamuk, Z. Automatic detection of coronavirus disease (covid-19) using x-ray images and deep convolutional neural networks. arXiv; 2020; arXiv: 2003.10849
34. Acar, E.; Şahin, E.; Yilmaz, İ. Improving effectiveness of different deep learning-based models for detecting COVID-19 from computed tomography (CT) images. medRxiv; 2020; [DOI: https://dx.doi.org/10.1101/2020.06.12.20129643]
35. Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Acharya, U.R. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med.; 2020; 121, 103792. [DOI: https://dx.doi.org/10.1016/j.compbiomed.2020.103792]
36. Soares, L.P.; Soares, C.P. Automatic detection of covid-19 cases on x-ray images using convolutional neural networks. arXiv; 2020; arXiv: 2007.05494
37. Goel, C.; Kumar, A.; Dubey, S.K.; Srivastava, V. Efficient Deep Network Architecture for COVID-19 Detection Using Computed Tomography Images. medRxiv; 2020; [DOI: https://dx.doi.org/10.1101/2020.08.14.20170290]
38. Afshar, P.; Heidarian, S.; Enshaei, N.; Naderkhani, F.; Rafiee, M.J.; Oikonomou, A.; Fard, F.B.; Plataniotis, K.N.; Mohammadi, A. COVID-CT-MD: COVID-19 Computed Tomography (CT) Scan Dataset Applicable in Machine Learning and Deep Learning. arXiv; 2020; arXiv: 2009.14623
39. Song, Y.; Zheng, S.; Li, L.; Zhang, X.; Zhang, X.; Huang, Z.; Chen, J.; Zhao, H.; Wang, R.; Chong, Y. et al. Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images. medRxiv; 2020; [DOI: https://dx.doi.org/10.1101/2020.02.23.20026930]
40. Shah, V.; Keniya, R.; Shridharani, A.; Punjabi, M.; Shah, J.; Mehendale, N. Diagnosis of COVID-19 using CT scan images and deep learning techniques. medRxiv; 2021; [DOI: https://dx.doi.org/10.1101/2020.07.11.20151332]
41. Xiao, L.S.; Li, P.; Sun, F.; Zhang, Y.; Xu, C.; Zhu, H.; Cai, F.-Q.; He, Y.-L.; Zhang, W.-F.; Ma, S.-C. et al. Development and Validation of a Deep Learning-Based Model Using Computed Tomography Imaging for Predicting Disease Severity of Coronavirus Disease 2019. Front. Bioeng. Biotechnol.; 2020; 8, 898. [DOI: https://dx.doi.org/10.3389/fbioe.2020.00898]
42. Dansana, D.; Kumar, R.; Bhattacharjee, A.; Hemanth, D.J.; Gupta, D.; Khanna, A.; Castillo, O. Early diagnosis of COVID-19-affected patients based on X-ray and computed tomography images using deep learning algorithm. Soft Comput.; 2020; pp. 1-9. [DOI: https://dx.doi.org/10.1007/s00500-020-05275-y] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32904395]
43. Chen, J.; Wu, L.; Zhang, J.; Zhang, L.; Gong, D.; Zhao, Y.; Chen, Q.; Huang, S.; Yang, M.; Hu, S. et al. Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography: A prospective study. MedRxiv; 2020; 10, pp. 1-11.
44. Zhang, J.; Xie, Y.; Li, Y.; Shen, C.; Xia, Y. Covid-19 screening on chest x-ray images using deep learning based anomaly detection. arXiv; 2020; arXiv: 2003.12338
45. Zhang, K.; Liu, X.; Shen, J.; Li, Z.; Sang, Y.; Wu, X.; Zha, Y.; Liang, W.; Wang, C.; Wang, K. et al. Clinically applicable AI system for accurate diagnosis, quantitative measurements, and prognosis of covid-19 pneumonia using computed tomography. Cell; 2020; 181, pp. 1423-1433. [DOI: https://dx.doi.org/10.1016/j.cell.2020.04.045]
46. Shan, H.; Padole, A.; Homayounieh, F.; Kruger, U.; Khera, R.D.; Nitiwarangkul, C.; Kalra, M.K.; Wang, G. Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose CT image reconstruction. Nat. Mach. Intell.; 2019; 1, pp. 269-276. [DOI: https://dx.doi.org/10.1038/s42256-019-0057-9] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33244514]
47. Shan, H.; Zhang, Y.; Yang, Q.; Kruger, U.; Kalra, M.K.; Sun, L.; Cong, W.; Wang, G. 3-D Convolutional Encoder-Decoder Network for Low-Dose CT via Transfer Learning From a 2-D Trained Network. IEEE Trans. Med. Imaging; 2018; 37, pp. 1522-1534. [DOI: https://dx.doi.org/10.1109/TMI.2018.2832217] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29870379]
48. Ali, G.; Ali, A.; Ali, F.; Draz, U.; Majeed, F.; Yasin, S.; Ali, T.; Haider, N. Artificial Neural Network Based Ensemble Approach for Multicultural Facial Expressions Analysis. IEEE Access.; 2020; 8, pp. 134950-134963. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3009908]
49. Draz, U.; Ali, T.; Yasin, S. Towards Pattern Detection of Proprotein Convertase Subtilisin/kexin type 9 (PCSK9) Gene in Bioinformatics Big Data. NFC IEFR J. Eng. Sci. Res.; 2018; 6, pp. 160-165.
50. Ali, T.; Yasin, S.; Draz, U.; Ayaz, M.; Tariq, T.; Javaid, S. Motif Detection in Cellular Tumor p53 Antigen Protein Sequences by using Bioinformatics Big Data Analytical Techniques. Int. J. Adv. Comput. Sci. Appl.; 2018; 9, pp. 330-338. [DOI: https://dx.doi.org/10.14569/IJACSA.2018.090543]
51. Yasin, S.; Ali, T.; Draz, U.; Jung, L.T.; Arshad, M.A. Formal Analysis of Coherent Non-Redundant Partition-based Motif Detection Algorithm for Data Visual Analytics. J. Appl. Environ. Biol. Sci.; 2018; 8, pp. 23-30.
52. Draz, U.; Ali, T.; Yasin, S.; Waqas, U.; Zahra, S.B.; Shoukat, M.A.; Gul, S. A Pattern Detection Technique of L-MYC for Lungs Cancer Oncogene in Bioinformatics Big Data. Proceedings of the 2020 17th International Bhurban Conference on Applied Sciences and Technology (IBCAST); Islamabad, Pakistan, 14–18 January 2020; IEEE: New York, NY, USA, 2020; pp. 218-223.
53. Ali, T.; Masood, K.; Irfan, M.; Draz, U.; Nagra, A.; Asif, M.; Alshehri, B.; Glowacz, A.; Tadeusiewicz, R.; Mahnashi, M. et al. Multistage Segmentation of Prostate Cancer Tissues Using Sample Entropy Texture Analysis. Entropy; 2020; 22, 1370. [DOI: https://dx.doi.org/10.3390/e22121370]
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Abstract
COVID-19 has escalated extensively worldwide since the beginning of 2020 and has resulted in the illness of millions of people. COVID-19 patients bear an elevated risk once their symptoms deteriorate; hence, early recognition of diseased patients can facilitate early intervention and avoid disease progression. This article develops hybrid deep neural networks (HDNNs), using computed tomography (CT) and X-ray imaging, to predict the risk of the onset of disease in patients suffering from COVID-19. To be precise, the subjects were classified into three categories: normal, pneumonia, and COVID-19. Initially, the CT and chest X-ray images, denoted as ‘hybrid images’ (with a resolution of 1080 × 1080), were collected from different sources, including GitHub, the COVID-19 radiography database, Kaggle, the COVID-19 image data collection, and the ActualMed COVID-19 Chest X-ray Dataset, which are open-source and publicly available data repositories. Of the hybrid images, 80% were used to train the hybrid deep neural network model and the remaining 20% were used for testing. The capability and prediction accuracy of the HDNNs were calculated using the confusion matrix. The hybrid deep neural network showed a 99% classification accuracy on the test set.
1 Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia;
2 Department of Computer Science, Lahore Campus, COMSATS University Islamabad, Lahore 54000, Pakistan;
3 Department of Computer Science, University of OKara, Okara 56130, Pakistan;
4 Department of Computer Science, University of Sahiwal, Sahiwal 57000, Pakistan;
5 Computer Science Department, Sahiwal Campus, COMSATS University Islamabad, Sahiwal 57000, Pakistan
6 Department of Computer Science, National Fertilizer Corporation Institute of Engineering and Technology, Multan 60000, Pakistan;
7 Department of Automatic Control and Robotics, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, 30-059 Kraków, Poland;
8 Faculty of Maritime Studies, King Abdulaziz University, Jeddah 21577, Saudi Arabia;