1. Introduction
A new pneumonia-type coronavirus disease (COVID-19) was detected in Wuhan, China, in 2019, ref. [1]. The virus that causes this disease is known as SARS-CoV-2. It spreads more easily than many other viruses, and as a result, many people have been infected and require medical care in hospitals. Recent works can be found in [2,3,4,5,6].
A recent survey [7] reviews the different methods for COVID-19 detection. Reverse transcription-polymerase chain reaction (RT-PCR) tests are used to detect the virus in the human body. Computed tomography (CT) scans and X-ray images are other ways of identifying COVID-19. X-ray images show SARS-CoV-2 infection areas in the human lungs, while CT scans provide a 3D visualisation of the lungs that helps to assess the severity of the infection.
Convolutional neural networks (CNNs) are known as powerful models for image analysis, and they can be used to diagnose COVID-19 on X-rays and CT scans. The main advantage of CNN models is that they can detect the virus on images faster than doctors and radiologists. These models have been utilised for skin lesion detection on digital images, refs. [8,9], and for retinography diagnosis, refs. [10,11,12]. Furthermore, CNN models allow the diagnosis of COVID-19 on X-rays and CT scans. Well-known CNN architectures include AlexNet [13], GoogleNet [14], VGG [15], MobileNetV2 [16], ResNet [17], and DenseNet [18]. These models are typically pretrained on the ImageNet dataset [19] and then fine-tuned on medical images.
CNN models are data-hungry, and they require many images for the training process. However, accessing a large number of CT images might not be possible during a pandemic or may require a long time. Therefore, inadequate data might hamper the usage of an artificial intelligence-based model for COVID-19 detection. On the other hand, data-efficient CNN models are built on small sets of available images and allow fast modelling of the disease to diagnose COVID-19. Therefore, data-efficient models might make a significant contribution to the rapid diagnosis of COVID-19 during a pandemic.
In this paper, a data-efficient GAN-based CNN method (Figure 1) is proposed for COVID-19 detection from CT scans. The proposed approach first generates synthetic and augmented images and then trains CNN models on these datasets. Synthetic images allow more information to be extracted from the CT scans and then modelled by the CNNs. The enhanced models and the CNN models based only on augmented data are compared on two publicly available datasets. The results show that the enhanced models outperform the classic CNN models.
The advantages of the proposed method are as follows:
The proposed model builds on augmented and synthetic CT images of the chests of COVID-19 patients, whereas the classic models only employ augmented images, refs. [20,21,22,23].
The proposed CNN learns more possible COVID-19 signs from synthetic CT images than the classic CNN models.
The main novelties of this work are:
The proposed novel method utilises a GAN model to generate unseen COVID-19 and normal CT images from a small database. This approach allows the CNN model to learn a wider range of image deformations for better modelling of the CT images. In contrast, classic CNN models rely on data augmentation techniques for improved performance. However, image augmentation only generates COVID-19 and normal images with different views and orientations; the deformation of the lungs on the CT images remains the same in the generated data.
A method is proposed for fusing synthetic and augmented CT scans for generating enhanced CNN models for COVID-19 detection.
Data-efficient enhanced ResNet-18, ResNet-50, VGG, MobileNetV2, AlexNet, and DenseNet-121 models are proposed for the diagnosis of COVID-19.
The content of this paper is as follows. First, information about the related databases is given. Second, the generation of augmented images is described. Then, the GAN model for synthetic CT scan image creation is explained. The architecture of the GAN model is also explained. Furthermore, the enhanced CNN models based on GAN models are described. Finally, the performance of the proposed methods is evaluated and discussed.
2. Related Work
2.1. Convolutional Neural Networks
He et al. [24] and Hu et al. [25] used CNN models to detect COVID-19 on CT images. He et al. [24] proposed a new transfer learning approach to train a CNN on available CT images. Furthermore, Hu et al. [25] proposed the generation of a CNN model using a small number of CT images.
Mei et al. [26] combined a convolutional neural network and a support vector machine to classify COVID-19 related CT images. The new model architecture was described for more accurate COVID-19 identification on CT images.
Harmon et al. [27] proposed a DenseNet-121 network for differentiating COVID-19 from other viral pneumonias. The classification accuracy was evaluated using several datasets.
Bhandary et al. [28] replaced the last layer of several CNN architectures with a support vector machine. The authors evaluated the performance of this new architecture for COVID-19 diagnosis. The proposed network also detected cancer using CT and X-ray images.
Butt et al. [29] used 3D CT scans to classify COVID-19 and viral pneumonia. The authors processed CT image patches and used them as inputs to a ResNet-18 model to detect the COVID-19 virus.
2.2. Generative Adversarial Networks
Waheed et al. [1] and Loey et al. [30] utilised a convolutional neural network and generative adversarial network (GAN) for the diagnosis of COVID-19. The authors generated synthetic medical images using a GAN model to create a CNN model.
Generative adversarial networks have been used for medical imaging. Well-known GAN models include the vanilla GAN, the deep convolutional GAN (DCGAN), pix2pix, and CycleGAN, ref. [31]. The authors of [1,32,33,34] mainly used the vanilla GAN and DCGAN models for generating synthetic images.
3. Method
Figure 1 shows the proposed data-efficient convolutional neural network (CNN) model for COVID-19 detection. The proposed method uses a deep convolutional generative adversarial network (DCGAN) and data augmentation to increase the small number of available CT scans. First, data augmentation creates a larger set of images. These images are then used as input to the DCGAN model to produce synthetic images. Finally, a CNN model is trained on both the synthetically generated and the augmented images. In the testing stage, the trained CNN model classifies CT scans as COVID-19 or non-COVID-19.
The proposed data-efficient CNN models are generated and tested on COVID19-CT [24] and MosMed [35] datasets. Table 1 describes the number of images and related categories.
3.1. COVID19-CT Database
The authors of [24] prepared the COVID19-CT dataset for research purposes. The dataset comprises 349 COVID-19 and 397 normal CT images. Figure 2 presents samples of COVID-19 and normal images.
3.2. Mosmed Database
The authors of [35] collected 1110 COVID-19 and non-COVID-19 CT scans from hospitals in Moscow, Russia. The authors also grouped COVID-19 related CT scans as a normal, mild, moderate, or severe condition. Sample images are presented in Figure 2.
3.3. Augmented Datasets
The DCGAN model produces synthetic versions of the images available in the dataset. However, GAN models require a large number of images for accurate modelling. Therefore, the available images are first augmented to increase their quantity. The DCGAN then uses this enlarged image set to produce synthetic images.
Table 2 shows the number of augmented (Aug.) CT scans for the COVID19-CT and Mosmed datasets. The datasets are split into training and testing sets, and the CT scans of the training set are rotated to increase their number in both datasets.
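The rotation-based augmentation described above can be sketched as follows. The paper does not report the specific rotation angles, so the right-angle rotations used here are an illustrative assumption, and the function name is ours.

```python
import torch

def augment_by_rotation(ct: torch.Tensor) -> list:
    """Return rotated copies of a CT image tensor of shape (C, H, W).

    The exact rotation angles are not reported in the paper, so
    right-angle rotations (90, 180, 270 degrees) are assumed here.
    """
    return [torch.rot90(ct, k, dims=(1, 2)) for k in (1, 2, 3)]
```

For square inputs, right-angle rotations preserve the image size, so the augmented images can be fed to the same network without resizing.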
Table 2 also shows the number of synthetically generated (GAN) CT scans. The datasets included a combination of synthetic and augmented images (Aug+GAN) and the number of combined images is also reported in Table 2.
3.4. Synthetic CT Image Generation
Synthetic CT image generation is based on the DCGAN method [34] shown in Figure 1. The DCGAN method is an improved version of the GAN method [32]. Figure 3 shows synthetically generated CT scans of the COVID19-CT dataset. This network comprises a generator and a discriminator: the discriminator builds on convolution layers, while the generator builds on transposed convolution layers.
This method employs the generator to produce synthetic images and the discriminator to classify a given image as real or synthetic. Latent vectors drawn from a noise distribution are used as inputs to the generator, which outputs synthetic images. These generated images, together with real images, are used as inputs to the discriminator. Finally, the discriminator classifies each input image as real or synthetic. This process is repeated for many latent vectors while the optimisation function is minimised. This function is defined by
min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]  (1)

where real data samples are denoted by x, and the generator distribution and the noise variables are denoted by p_g and p_z(z), respectively. The discriminator employs several convolutions, and the method applies batch normalisation and LeakyReLU operations after each convolution.
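A minimal sketch of such a generator–discriminator pair is given below. The latent dimension, channel widths, and the 64 × 64 single-channel output size are illustrative assumptions, as the paper does not report the exact DCGAN configuration; the layer pattern follows the description above (transposed convolutions in the generator; strided convolutions with batch normalisation and LeakyReLU in the discriminator).

```python
import torch
import torch.nn as nn

LATENT_DIM = 100  # assumed latent vector size; not specified in the paper

class Generator(nn.Module):
    """DCGAN-style generator: transposed convolutions upsample a latent
    vector into a 64x64 grayscale CT-like image (sizes are illustrative)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),         # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),            # 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),                                     # 64x64
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """DCGAN-style discriminator: strided convolutions with batch
    normalisation and LeakyReLU, scoring an image as real or synthetic."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),                       # 32x32
            nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2, True),  # 16x16
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),# 8x8
            nn.Conv2d(128, 1, 8, 1, 0), nn.Sigmoid(),                                 # 1x1 score
        )

    def forward(self, x):
        return self.net(x).view(-1)
```

During training, the discriminator output in [0, 1] enters the objective of Equation (1) as D(x) for real images and D(G(z)) for generated ones.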
Figure 2. Sample images of the COVID19-CT dataset. (a) COVID-19, (b) Normal.
Figure 3. Synthetic images of the COVID19-CT dataset. (a) COVID-19, (b) Normal.
3.5. Model Generation
The AlexNet, VGG, ResNet-18, ResNet-50, MobileNetV2, and DenseNet-121 networks were generated using augmented and synthetic images. RGB images of size 224 × 224 × 3 are used as inputs to the convolutional neural networks (CNNs) for training. All networks are pretrained on the ImageNet dataset and then further trained using the augmented and synthetic images. Fine-tuning is achieved by freezing all convolutional layers and adapting the last fully connected layer for COVID-19 and non-COVID-19 classification.
3.6. COVID-19 Prediction
Figure 1 presents the data-efficient deep learning method. Each of the CT images is used as input to the deep network and then CT images are classified as COVID-19 or non-COVID-19.
3.7. Components of the Optimisation Function
The optimisation settings are as follows. We use 0.9 and 256 for the momentum and the batch size, respectively. The learning rate is 0.0001, and the models are trained for 50 epochs.
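Wired together, these settings look as follows. The choice of SGD is an inference from the stated momentum value, since the optimiser itself is not named in the text, and the linear layer is only a stand-in for the trainable classification head.

```python
import torch
import torch.nn as nn

# Hyperparameter values reported in the text.
LEARNING_RATE = 1e-4
MOMENTUM = 0.9
BATCH_SIZE = 256
EPOCHS = 50

model = nn.Linear(512, 2)  # stand-in for the trainable classification head
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),  # only unfrozen layers
    lr=LEARNING_RATE,
    momentum=MOMENTUM,
)
```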
3.8. Implementation
A desktop computer was used to run the experiments. This computer is equipped with an Intel Corei7-4790 CPU and NVIDIA GeForce GTX-1080Ti graphics card.
3.9. Software
We used the PyTorch deep learning library to implement and test the proposed methodology.
4. Performance Evaluation
The classic deep learning method and the proposed data-efficient methods were evaluated using the COVID19-CT and Mosmed datasets.
Performance was measured using the following metrics: the area under the receiver operating characteristic (ROC) curve (AUC), accuracy (ACC), sensitivity (SE), and specificity (SP). Accuracy, sensitivity, and specificity are defined as:

ACC = (TP + TN) / (TP + TN + FP + FN)  (2)

SE = TP / (TP + FN)  (3)

SP = TN / (TN + FP)  (4)

where true positive, true negative, false positive, and false negative counts are denoted as TP, TN, FP, and FN, respectively.

Comparison between Classic Deep Learning Method and Proposed Data-Efficient Method
Table 3 reports the performances of the AlexNet, VGG, ResNet-18, ResNet-50, MobileNetV2, and DenseNet-121 deep learning models trained only on augmented CT scan data, evaluated on both the COVID19-CT and Mosmed datasets.

Table 3 also reports the performances of the proposed data-efficient versions of these models, which are trained on both augmented and synthetic CT scan data and evaluated on the same datasets.
Table 3 also reports a comparison between the classic deep learning method and the proposed data-efficient method. First, the models were compared on the COVID19-CT dataset. All data-efficient AlexNet, VGG, ResNet-18, ResNet-50, MobileNetV2, and DenseNet-121 models outperformed the classic convolutional models. Furthermore, the ResNet-18 model built on augmented and synthetic data outperformed the ResNet-18 model built only on augmented data, providing an AUC value of 0.89. This augmented and synthetic data-based model also outperformed all other models.
The models were also compared on the Mosmed dataset. All data-efficient AlexNet, VGG, ResNet-18, ResNet-50, MobileNetV2, and DenseNet-121 models outperformed the classic convolutional models. Furthermore, the MobileNetV2 model built on augmented and synthetic data outperformed the MobileNetV2 model built only on augmented data, providing an AUC value of 0.84. This augmented and synthetic data-based model also outperformed all other models on this dataset.
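The accuracy, sensitivity, and specificity values reported in Table 3 follow directly from the confusion-matrix counts in Equations (2)–(4); a minimal sketch (the function names are ours):

```python
def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Equation (2): fraction of all predictions that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(tp: int, fn: int) -> float:
    """Equation (3): fraction of COVID-19 cases correctly detected."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Equation (4): fraction of non-COVID-19 cases correctly rejected."""
    return tn / (tn + fp)
```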
5. Discussion
The proposed models are also compared with the recent work of ref. [30]. The proposed ResNet-18 and ResNet-50 models give sensitivity values of 0.88 and 0.95, respectively, whereas the authors of ref. [30] report a lower sensitivity value for their ResNet-50 model. Sensitivity measures the rate of COVID-19 detection, so the proposed method detects COVID-19 more accurately than the other work. Training models on both synthetic and augmented datasets increases the sensitivity values. Table 4 also presents a comparison of the proposed and other works.
All data-efficient models outperformed the classic convolutional neural networks on the COVID19-CT and Mosmed datasets. This shows that building a CNN model on synthetic and augmented images allows better recognition of COVID-19 on CT scans. The reason is that the GAN produces synthetic CT images that reflect different COVID-19-related deformations of the lungs. Since the synthetic images cover large variations of COVID-19 disease signs, the CNN can capture the details of these signs on the images. When synthetically generated images are used in conjunction with augmented data, CNN models perform better than CNN models trained on augmented data alone.
This study also shows that the different CNN models exhibited varying performance on the two datasets. The models and their performances are listed in Table 3. The best performing model on the COVID19-CT dataset is ResNet-18, while the best performing model on the Mosmed dataset is MobileNetV2. Both models employed augmented and synthetically generated CT images for COVID-19 detection.
6. Conclusions
The proposed data-efficient ResNet-18 and ResNet-50 models give sensitivity values of 0.88 and 0.95, respectively. Sensitivity measures the rate of COVID-19 detection, and the proposed method detected COVID-19 more accurately than other works. We found that accurate deep networks can be generated using limited data, which we achieved using synthetic image generation and augmentation techniques.
The proposed machine learning-based systems provide faster healthcare services. Doctors and radiologists can also use these systems for collaboration to provide better help to patients.
The main novelty of the proposed method is that CNN networks can be generated from only a few available CT images during pandemic situations. It is well known that accessing CT images during a pandemic might be problematic. Therefore, this paper presents novel data-efficient networks for the diagnosis of COVID-19 from CT images. The method builds on a convolutional neural network and a deep convolutional generative adversarial network (DCGAN). The DCGAN model produces unseen COVID-19 and normal CT images from a small dataset, and the CNN then uses these synthetically generated CT images to learn possible virus signs on the images. The experiments show that the proposed method achieves higher accuracy than the classic CNN networks.
Methodology, S.S., M.A.D. and F.A.-T. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Not applicable.
Not applicable.
The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 1. Proposed data-efficient deep convolutional neural network model for COVID-19 detection.
Table 1. Total number of images in the datasets.
CT-Scan | Dataset | Train | Test |
---|---|---|---|
COVID-19 | COVID19-CT | 324 | 40 |
Normal | COVID19-CT | 293 | 37 |
COVID-19 | Mosmed | 168 | 20 |
Normal | Mosmed | 168 | 20 |
Table 2. Number of augmented (Aug.), synthetic (GAN), and combined (Aug+GAN) images in the datasets.
Data | CT-Scan | Dataset | Train | Test |
---|---|---|---|---|
Aug. | COVID-19 | COVID19-CT | 1393 | 40 |
Aug. | Normal | COVID19-CT | 1672 | 37 |
GAN | COVID-19 | COVID19-CT | 500 | 40 |
GAN | Normal | COVID19-CT | 500 | 37 |
Aug+GAN | Normal | COVID19-CT | 2172 | 37 |
Aug+GAN | COVID-19 | COVID19-CT | 1893 | 40 |
Aug | COVID-19 | Mosmed | 1087 | 23 |
Aug | Normal | Mosmed | 1069 | 23 |
GAN | COVID-19 | Mosmed | 128 | 23 |
GAN | Normal | Mosmed | 128 | 23 |
Aug+GAN | Normal | Mosmed | 1197 | 23 |
Aug+GAN | COVID-19 | Mosmed | 1218 | 23 |
Table 3. Performance comparisons of the classic (Aug) and proposed data-efficient (Aug+GAN) models.
Network | Dataset | Data | AUC | ACC | SE | SP |
---|---|---|---|---|---|---|
Resnet18 | COVID19-CT | Aug | 0.77 | 0.75 | 0.83 | 0.71 |
Resnet18 | COVID19-CT | Aug+GAN | 0.89 | 0.74 | 0.88 | 0.68 |
Resnet50 | COVID19-CT | Aug | 0.71 | 0.77 | 0.86 | 0.72 |
Resnet50 | COVID19-CT | Aug+GAN | 0.81 | 0.73 | 0.95 | 0.66 |
Vgg | COVID19-CT | Aug | 0.65 | 0.75 | 0.86 | 0.70 |
Vgg | COVID19-CT | Aug+GAN | 0.67 | 0.76 | 0.87 | 0.70 |
MobileNetV2 | COVID19-CT | Aug | 0.71 | 0.73 | 0.82 | 0.69 |
MobileNetV2 | COVID19-CT | Aug+GAN | 0.77 | 0.73 | 0.84 | 0.68 |
Densenet121 | COVID19-CT | Aug | 0.70 | 0.74 | 0.87 | 0.69 |
Densenet121 | COVID19-CT | Aug+GAN | 0.77 | 0.67 | 0.92 | 0.61 |
AlexNet | COVID19-CT | Aug | 0.60 | 0.67 | 0.72 | 0.64 |
AlexNet | COVID19-CT | Aug+GAN | 0.80 | 0.69 | 0.88 | 0.64 |
AlexNet | MosMed | Aug | 0.71 | 0.70 | 1.00 | 0.63 |
AlexNet | MosMed | Aug+GAN | 0.73 | 0.66 | 0.89 | 0.60 |
MobileNetV2 | MosMed | Aug | 0.77 | 0.67 | 0.69 | 0.65 |
MobileNetV2 | MosMed | Aug+GAN | 0.84 | 0.62 | 0.65 | 0.60 |
Resnet50 | MosMed | Aug | 0.74 | 0.69 | 0.69 | 0.69 |
Resnet50 | MosMed | Aug+GAN | 0.78 | 0.69 | 0.69 | 0.69 |
Resnet18 | MosMed | Aug | 0.70 | 0.67 | 0.68 | 0.66 |
Resnet18 | MosMed | Aug+GAN | 0.75 | 0.69 | 0.69 | 0.69 |
Vgg | MosMed | Aug | 0.63 | 0.69 | 0.71 | 0.68 |
Vgg | MosMed | Aug+GAN | 0.71 | 0.66 | 0.67 | 0.64 |
Densenet121 | MosMed | Aug | 0.60 | 0.65 | 0.64 | 0.65 |
Densenet121 | MosMed | Aug+GAN | 0.62 | 0.61 | 0.63 | 0.60 |
Table 4. Performance comparison of the proposed method with other works.
Network | Model | AUC | ACC | SE | SP |
---|---|---|---|---|---|
[ ] | AlexNet | - | 75.73 | 63.83 | 87.62 |
[ ] | VGG16 | - | 0.90 | 0.91 | 0.80 |
Proposed Work | ResNet18 | 0.89 | 0.74 | 0.88 | 0.68 |
References
1. Waheed, A.; Goyal, M.; Gupta, D.; Khanna, A.; Al-Turjman, F.; Pinheiro, P.R. CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved Covid-19 Detection. IEEE Access; 2020; 8, pp. 91916-91923. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2994762] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34192100]
2. Nasir, A.; Shaukat, K.; Hameed, I.A.; Luo, S.; Alam, T.M.; Iqbal, F. A Bibliometric Analysis of Corona Pandemic in Social Sciences: A Review of Influential Aspects and Conceptual Structure. IEEE Access; 2020; 8, pp. 133377-133402. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3008733] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34812340]
3. Alali, Y.; Harrou, F.; Sun, Y. A proficient approach to forecast COVID-19 spread via optimized dynamic machine learning models. Sci. Rep.; 2022; 12, 2467. [DOI: https://dx.doi.org/10.1038/s41598-022-06218-3] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35165290]
4. Baig, T.I.; Alam, T.M.; Anjum, T.; Naseer, S.; Wahab, A.; Imtiaz, M.; Raza, M.M. Classification of Human Face: Asian and Non-Asian People. Proceedings of the 2019 International Conference on Innovative Computing (ICIC); Seoul, Korea, 26–29 August 2019; pp. 1-6. [DOI: https://dx.doi.org/10.1109/ICIC48496.2019.8966721]
5. Alam, T.M.; Shaukat, K.; Khelifi, A.; Khan, W.A.; Raza, H.M.E.; Idrees, M.; Luo, S.; Hameed, I.A. Disease diagnosis system using IoT empowered with fuzzy inference system. Comput. Mater. Contin.; 2022; 7, pp. 5305-5319. [DOI: https://dx.doi.org/10.32604/cmc.2022.020344]
6. Kogilavani, S.; Prabhu, J.; Sandhiya, R.; Kumar, M.S.; Subramaniam, U.; Karthick, A.; Muhibbullah, M.; Imam, S.B.S. COVID-19 detection based on lung CT scan using deep learning techniques. Comput. Math. Methods Med.; 2022; 2022, 7672196. [DOI: https://dx.doi.org/10.1155/2022/7672196]
7. Aileni, M.; Rohela, G.K.; Jogam, P.; Soujanya, S.; Zhang, B. Biotechnological Perspectives to Combat the COVID-19 Pandemic: Precise Diagnostics and Inevitable Vaccine Paradigms. Cells; 2022; 11, 1182. [DOI: https://dx.doi.org/10.3390/cells11071182]
8. Serte, S.; Demirel, H. Gabor wavelet-based deep learning for skin lesion classification. Comput. Biol. Med.; 2019; 113, 103423. [DOI: https://dx.doi.org/10.1016/j.compbiomed.2019.103423]
9. Serte, S.; Demirel, H. Wavelet-based deep learning for skin lesion classification. IET Image Process.; 2020; 14, pp. 720-726. [DOI: https://dx.doi.org/10.1049/iet-ipr.2019.0553]
10. Serener, A.; Serte, S. Geographic variation and ethnicity in diabetic retinopathy detection via deeplearning. Turk. J. Electr. Eng. Comput. Sci.; 2020; 28, pp. 664-678. [DOI: https://dx.doi.org/10.3906/elk-1902-131]
11. Serener, A.; Serte, S. Transfer Learning for Early and Advanced Glaucoma Detection with Convolutional Neural Networks. Proceedings of the 2019 Medical Technologies Congress (TIPTEKNO); Izmir, Turkey, 3–5 October 2019; pp. 1-4.
12. Serte, S.; Serener, A. A Generalized Deep Learning Model for Glaucoma Detection. Proceedings of the 2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT); Ankara, Turkey, 11–13 October 2019; pp. 1-5.
13. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems 25; Pereira, F.; Burges, C.J.C.; Bottou, L.; Weinberger, K.Q. National Science Foundation: Alexandria, VA, USA, 2012; pp. 1097-1105.
14. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Boston, MA, USA, 7–12 June 2015; pp. 1-9.
15. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv; 2014; arXiv: 1409.1556
16. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted Residuals and Linear Bottlenecks. Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510-4520.
17. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, NV, USA, 27–30 June 2016; pp. 770-778.
18. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv; 2016; arXiv: 1608.06993
19. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition; Miami, FL, USA, 20–25 June 2009.
20. Almezhghwi, K.; Serte, S.; Al-Turjman, F. Convolutional neural networks for the classification of chest X-rays in the IoT era. Multimed. Tools Appl.; 2021; 80, pp. 29051-29065. [DOI: https://dx.doi.org/10.1007/s11042-021-10907-y] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34155434]
21. Serte, S.; Demirel, H. Deep learning for diagnosis of COVID-19 using 3D CT scans. Comput. Biol. Med.; 2021; 132, 104306. [DOI: https://dx.doi.org/10.1016/j.compbiomed.2021.104306]
22. Serte, S.; Serener, A. Graph-based saliency and ensembles of convolutional neural networks for glaucoma detection. IET Image Process.; 2021; 15, pp. 797-804. [DOI: https://dx.doi.org/10.1049/ipr2.12063]
23. Serte, S.; Serener, A.; Al-Turjman, F. Deep learning in medical imaging: A brief review. Trans. Emerg. Telecommun. Technol.; 2020; e4080. [DOI: https://dx.doi.org/10.1002/ett.4080]
24. He, X.; Yang, X.; Zhang, S.; Zhao, J.; Zhang, Y.; Xing, E.; Xie, P. Sample-Efficient Deep Learning for COVID-19 Diagnosis Based on CT Scans. medRxiv; 2020; [DOI: https://dx.doi.org/10.1101/2020.04.13.20063941]
25. Hu, S.; Gao, Y.; Niu, Z.; Jiang, Y.; Li, L.; Xiao, X.; Wang, M.; Fang, E.F.; Menpes-Smith, W.; Xia, J. et al. Weakly Supervised Deep Learning for COVID-19 Infection Detection and Classification From CT Images. IEEE Access; 2020; 8, pp. 118869-118883. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3005510]
26. Mei, X.; Lee, H.C.; Diao, K.Y.; Huang, M.; Lin, B.; Liu, C.; Xie, Z.; Ma, Y.; Robson, P.; Chung, M. et al. Artificial intelligence–enabled rapid diagnosis of patients with COVID-19. Nat. Med.; 2020; 26, pp. 1224-1228. [DOI: https://dx.doi.org/10.1038/s41591-020-0931-3]
27. Harmon, S.A.; Sanford, T.H.; Xu, S.; Turkbey, E.B.; Roth, H.; Xu, Z.; Yang, D.; Myronenko, A.; Anderson, V.; Amalou, A. et al. Artificial intelligence for the detection of COVID-19 pneumonia on chest CT using multinational datasets. Nat. Commun.; 2020; 11, 4080. [DOI: https://dx.doi.org/10.1038/s41467-020-17971-2]
28. Bhandary, A.; Prabhu, G.A.; Rajinikanth, V.; Thanaraj, K.P.; Satapathy, S.C.; Robbins, D.E.; Shasky, C.; Zhang, Y.D.; Tavares, J.M.R.; Raja, N.S.M. Deep-learning framework to detect lung abnormality–A study with chest X-ray and lung CT scan images. Pattern Recognit. Lett.; 2020; 129, pp. 271-278. [DOI: https://dx.doi.org/10.1016/j.patrec.2019.11.013]
29. Butt, C.; Gill, J.; Chun, D.; Babu, B.A. Deep learning system to screen coronavirus disease 2019 pneumonia. Appl. Intell.; 2020; pp. 1-7. [DOI: https://dx.doi.org/10.1016/j.eng.2020.04.010]
30. Loey, M.; Manogaran, G.; Khalifa, N.E.M. A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images. Neural Comput. Appl.; 2020; pp. 1-13. [DOI: https://dx.doi.org/10.1007/s00521-020-05437-x] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33132536]
31. Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: A review. Med. Image Anal.; 2019; 58, 101552. [DOI: https://dx.doi.org/10.1016/j.media.2019.101552] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31521965]
32. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. Advances in Neural Information Processing Systems 27; Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N.D.; Weinberger, K.Q. Curran Associates, Inc.: Red Hook, NY, USA, 2014; pp. 2672-2680.
33. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv; 2014; arXiv: 1412.6572
34. Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv; 2015; arXiv: 1511.06434
35. Morozov, S.P.; Andreychenko, A.E.; Pavlov, N.A.; Vladzymyrskyy, A.V.; Ledikhova, N.V.; Gombolevskiy, V.A.; Blokhin, I.A.; Gelezhe, P.B.; Gonchar, A.V.; Chernina, V.Y. MosMedData: Chest CT Scans With COVID-19 Related Findings Dataset. arXiv; 2020; arXiv: 2005.06465
36. Acar, E.; Şahin, E.; Yılmaz, İ. Improving effectiveness of different deep learning-based models for detecting COVID-19 from computed tomography (CT) images. Neural Comput. Appl.; 2021; 33, pp. 17589-17609. [DOI: https://dx.doi.org/10.1007/s00521-021-06344-5]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Healthcare is one of the crucial applications of the Internet of Things. Connected machine learning-based systems provide faster healthcare services, and doctors and radiologists can use these systems collaboratively to provide better help to patients. The recently emerged coronavirus (COVID-19) is known to be highly infectious. Reverse transcription-polymerase chain reaction (RT-PCR) is one of the primary diagnostic tools; however, RT-PCR tests might not be accurate. In contrast, doctors can employ artificial intelligence techniques on X-ray and CT scans for analysis. Artificial intelligence methods need a large number of images, which might not be available during a pandemic. In this paper, a novel data-efficient deep network is proposed for the identification of COVID-19 on CT images. This method increases the small number of available CT scans by generating synthetic versions of CT scans using a generative adversarial network (GAN). Then, we estimate the parameters of the convolutional and fully connected layers of the deep networks using synthetic and augmented data. The method shows that the GAN-based deep learning model provides higher performance than classic deep learning models for COVID-19 detection. The performance evaluation is performed on the COVID19-CT and Mosmed datasets. The best performing models are ResNet-18 and MobileNetV2 on COVID19-CT and Mosmed, respectively, with area under the curve (AUC) values of 0.89 and 0.84.
Details

1 Department of Electrical and Electronic Engineering, Near East University, North Cyprus via Mersin 10, Nicosia 99138, Turkey
2 Department of Radiology, Dr. Suat Günsel University Faculty of Medicine Kyrenia, North Cyprus via Mersin 10, Kyrenia 99300, Turkey;
3 Artificial Intelligence Engineering Department, Research Center for AI and IoT, AI and Robotics Institute, Near East University, North Cyprus via Mersin 10, Nicosia 99138, Turkey;