1. Introduction
Artificial intelligence (AI) is a popular topic in radiology, with applications such as rapid disease (e.g., COVID-19) detection on various platforms including mobile devices [1,2,3,4,5,6,7,8,9,10,11,12]. Additionally, the number of AI research articles in radiology has grown exponentially over recent years [1,2]. Various commercial AI products have become available for clinical applications such as radiological examination dose optimization [13,14,15,16,17,18,19,20,21,22,23,24,25,26], computer-aided detection and diagnosis (CAD) [27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48], and medical image segmentation [49,50,51,52,53]. Predominantly, these applications in radiology are based on deductive AI techniques [1,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54]. However, generative AI, especially the generative adversarial network (GAN), which focuses on the creation of new and original content, has started to attract the attention of radiology researchers and clinicians, as evidenced by a number of literature reviews on the role of the GAN in radiology published in the last few years [54,55,56,57,58,59,60,61,62].
The GAN was devised by Goodfellow et al. in 2014 [56,59,62,63]. Its basic form (also known as the Vanilla GAN) consists of two models, a generator and a discriminator. Developing such a GAN involves training the generator to produce fake images while the discriminator determines whether the images produced by the generator are fake or real. Training is complete when the discriminator can no longer identify the generator’s output images as fake, at which point the generator is capable of producing high-quality fake images close to real ones [56,59,62,63,64,65]. This capability is highly relevant to medical imaging and therefore radiology [64,65]. Its current applications in radiology include image synthesis and data augmentation [1,55,56,57,59,60,61,62], image translation (e.g., from one modality to another [1,55,56,58,59,60,61,62], from normal to abnormal [1,55,62], etc.), image reconstruction (e.g., denoising [1,55,59,60,61], artifact removal [1,56,58,61], super-resolution (image spatial resolution improvement) [1,55,56,57,59,61,64,65], motion unsharpness correction [61], etc.), image feature extraction [55,57,60,61], image segmentation [1,55,56,57,60,61,62], anomaly detection [55,56,60], disease diagnosis [55,57,60], prediction [55,56,61] and prognosis [55,57,60,61], and image registration [1,55,60,61].
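For readers unfamiliar with this adversarial training loop, the following is a minimal sketch in PyTorch (an assumption of this illustration; no included study is tied to this code). The tiny fully connected networks, the random stand-in “real” images, and all hyperparameters are placeholders rather than any published architecture.

```python
import torch
import torch.nn as nn

# Placeholder generator and discriminator (illustrative only)
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.rand(32, 784) * 2 - 1   # stand-in for a batch of real images
    fake = G(torch.randn(32, 16))        # generator maps noise to fake images
    # The discriminator learns to label real images 1 and fake images 0
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # The generator learns to make the discriminator label its outputs as real;
    # training converges when D can no longer separate fake from real
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```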
Pediatric radiology is a subset of radiology [26,28,29,66,67]. The aforementioned review findings may not be applicable to pediatric radiology [28,29,55,56,57,58,59,60,61,62,67]. For example, the application of the GAN to prostate cancer segmentation appears irrelevant to children [60,68]. Although several literature reviews about AI in pediatric radiology have been published, none of them focused on the GAN [26,28,29,67]. Given that the GAN is an important topic area in radiology and that recent literature reviews have focused on its applications in that discipline, it is timely to conduct a systematic review of its applications in pediatric radiology [29,55,56,57,58,59,60,61,62]. The purpose of this article is to systematically review published original studies to answer the question “What are the applications of GAN in pediatric radiology, their performances, and methods for their performance evaluation?”.
2. Materials and Methods
This systematic review of the GAN in pediatric radiology was carried out according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and the patient/population, intervention, comparison, and outcome (PICO) model (Table 1) [26,29,69]. Four major processes were involved: literature search, article selection, data extraction, and data synthesis [26,29].
2.1. Literature Search
The electronic scholarly publication databases EBSCOhost/Cumulative Index of Nursing and Allied Health Literature (CINAHL) Ultimate, Ovid/Embase, PubMed/Medline, ScienceDirect, Scopus, SpringerLink, Web of Science, and Wiley Online Library were searched on 6 April 2023 to identify articles about the GAN in pediatric radiology; the publication year was not restricted. The search statement (“Generative Adversarial Network” OR “Generative Artificial Intelligence”) AND (“Pediatric” OR “Children”) AND (“Radiology” OR “Medical Imaging”) was used. The search keywords were derived from the review focus [26,29].
2.2. Article Selection
Article selection was conducted by one reviewer with more than 20 years of literature review experience [26,29,70]. Table 2 shows the article inclusion and exclusion criteria.
The exclusion criteria of Table 2 were established because of: 1. the unavailability of well-developed methodological guidelines for appropriate grey literature selection; 2. incomplete study information given in conference abstracts; 3. a lack of primary evidence in editorials, reviews, perspectives, opinions, and commentaries; and 4. unsubstantiated information given in non-peer-reviewed papers [26,29,62,71]. The detailed article selection process is shown in Figure 1 [26,29,69]. Duplicate papers were first removed from the database search results. Subsequently, article titles, abstracts, and full texts were assessed against the selection criteria. Each non-duplicate paper in the search results was retained unless a decision on its exclusion could be made. Additionally, relevant articles were identified by checking the reference lists of the included papers [26,29,71].
2.3. Data Extraction and Synthesis
Three systematic reviews on the GAN for image classification and segmentation in radiology [62], AI for radiation dose optimization [26] and CAD in pediatric radiology [29], and one narrative review about the GAN in adult brain imaging [56] were used to develop a data extraction form (Table 3). The following data were extracted from every included article: author name and country; publication year; imaging modality; GAN architecture (such as cycle-consistent GAN (CycleGAN)); study design (either prospective or retrospective); patient/population (e.g., 0–10-year-old children); dataset source (such as the public cardiac magnetic resonance imaging (MRI) dataset by Children’s Hospital Los Angeles, USA) and size (e.g., total: 33 scans-training: 25; validation: 4; testing: 4); any sample size calculation; application area (such as image synthesis and data augmentation); model commercial availability; model internal validation type (e.g., 4-fold cross-validation); any model external validation (i.e., any testing of the model on a dataset not used in internal validation and obtained from a different setting); reference standard for establishing the ground truth (such as expert consensus); any comparison of model performance with clinicians; and key findings of model performance (e.g., area under the receiver operating characteristic curve (AUC), sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, and F1 score) [26,29,56,62]. To facilitate GAN model performance comparison, improvement figures such as percentage improvements attributable to the GAN were synthesized (if not reported) from the available absolute figures (if feasible) [26]. When a study reported performances for more than one GAN model, only the values of the best-performing model were shown [29,72]. Meta-analysis was not performed because this systematic review covered a range of GAN applications, resulting in high study heterogeneity that would limit the usefulness of a meta-analysis [29,73,74,75]. The quality assessment tool for studies with diverse designs (QATSDD) was used to determine quality percentages for all included papers [26,71,76]. Quality percentages of <50%, 50–70%, and >70% represented low, moderate, and high study quality, respectively [26,71].
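To make the synthesis of improvement percentages concrete, the short Python sketch below reproduces two figures reported in Table 3; the function name and its lower-is-better option are purely illustrative and do not come from any included study.

```python
def improvement_pct(baseline: float, gan: float, lower_is_better: bool = False) -> float:
    """Percentage improvement of a GAN metric relative to a baseline value.

    Set lower_is_better=True for metrics where a reduction is an improvement
    (e.g., Hausdorff distance or mean absolute error).
    """
    change = (baseline - gan) if lower_is_better else (gan - baseline)
    return 100.0 * change / baseline

# Worked example from Table 3 (Kuttala et al. [77]): U-Net accuracy 0.370 vs GAN 0.957
print(round(improvement_pct(0.370, 0.957), 1))  # 158.6 (% accuracy improvement)
# AUC 0.420 vs 0.900 gives the reported 114.3% improvement
print(round(improvement_pct(0.420, 0.900), 1))  # 114.3
```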
3. Results
Thirty-seven papers that met the selection criteria were included in this review. The characteristics of these studies are shown in Table 3. All identified articles were published within the last five years, and the number of publications increased every year, with the highest number in 2022 [77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113]. This increasing trend was in line with that in radiology as a whole [1,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113]. About half of the articles (n = 17) were journal papers [77,78,82,84,87,90,92,97,98,99,100,101,102,103,105,109,111]. Around two-thirds of these (n = 11) were determined to be of high quality [82,84,87,90,92,97,102,103,105,109,111]. All low-quality ones were conference papers (n = 12) [79,80,81,83,85,86,91,93,94,95,104,108]. The GAN was most commonly applied to MRI (n = 18) [77,78,83,84,87,90,97,101,103,104,105,106,108,109,110,111,112,113] and X-ray (n = 13) [79,80,89,91,92,94,95,96,98,99,100,102,107]; the other modalities were computed tomography (CT) (n = 4) [82,86,93,97], ultrasound (n = 2) [85,88], and positron emission tomography (PET) (n = 1) [81]. Although the basic GAN architecture was still popular among the included studies (n = 11) [77,78,80,82,83,84,89,94,97,99,106], its variant, the cycle-consistent GAN (CycleGAN), was the second most common (n = 10) [101,102,103,104,107,108,109,110,111,112].
Table 3. Characteristics of generative adversarial network (GAN) studies in pediatric radiology (grouped by their applications).
Author, Year & Country | Modality | GAN Architecture | Study Design | Patient/Population | Dataset Source | Dataset Size | Sample Size Calculation | Application Area | Commercial Availability | Internal Validation Type | External Validation | Reference Standard | AI VS Clinician | Key Findings |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Disease Diagnosis | ||||||||||||||
Kuttala et al. (2022)—Australia, India, and the United Arab Emirates [77] | MRI | GAN | Retrospective | Children (median ages: 12.6 (baseline) and 15.0 (follow-up) years) | Public brain MRI dataset (Autism Brain Imaging Data Exchange II) | Total: 70 scans-training: 24; testing: 46 | No | Autism diagnosis based on brain MRI images | No | NR | No | NR | No | 158.6% accuracy (U-Net: 0.370; GAN: 0.957) and 114.3% AUC (U-Net: 0.420; GAN: 0.900) improvements for autism diagnosis, respectively |
Kuttala et al. (2022)—Australia, India, and the United Arab Emirates [78] | MRI | GAN | Retrospective | Children (median ages: 12 (baseline) and 15 (follow-up) years) | Public brain MRI datasets (ADHD-200 and Autism Brain Imaging Data Exchange II) | Total: 265 scans-training: 48; testing: 217 | No | ADHD and autism diagnosis based on brain MRI images | No | NR | No | NR | No | 29.6% and 39.7% accuracy improvements for ADHD and autism diagnoses (3D CNN: 0.659 and 0.700; GAN: 0.854 and 0.978), respectively. GAN AUC: 0.850 (ADHD) and 0.910 (autism) |
Motamed and Khalvati (2021)—Canada [79] | X-ray | DCGAN | Retrospective | 1–5-year-old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 4875 images-training: 3875; testing: 1000 | No | Pneumonia diagnosis based on CXR | No | NR | No | NR | No | 3.5% AUC improvement (Deep SVDD: 0.86; DCGAN: 0.89) |
Image Reconstruction | ||||||||||||||
Dittimi and Suen (2020)—Canada [80] | X-ray | GAN | Retrospective | 1–5-year-old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 5863 images | No | CXR image reconstruction (super-resolution) | No | 70:30 random split | No | Original CXR images | No | 19.1% SSIM (SRCNN: 0.832; SRCNN-GAN: 0.991) and 46.5% PSNR (SRCNN: 26.18; SRCNN-GAN: 38.36 dB) improvements |
Fu et al. (2022)—China [81] | PET | TransGAN | Retrospective | Children | Private brain PET dataset by Hangzhou Universal Medical Imaging Diagnostic Center, China | Total: 45 scans | No | Brain PET image reconstruction (denoising) | No | 10-fold cross-validation | No | Original full-dose PET images | No | 10.3% SSIM (U-Net: 0.861; TransGAN-SDAM: 0.950) and 29.9% PSNR (U-Net: 26.1; TransGAN-SDAM: 33.9 dB) improvements with 67.7% VSMD reduction (U-Net: 0.133; TransGAN-SDAM: 0.043) |
Park et al. (2022)—Republic of Korea [82] | CT | GAN | Retrospective | 3 groups of children (mean ages (years): 6.2 ± 2.2; 7.2 ± 2.5; 7.4 ± 2.2) | Private abdominal CT dataset | Total: 3160 images-training: 1680; validation: 820; testing: 660 | No | Low-dose abdominal CT image reconstruction (denoising) | No | NR | Yes | Consensus of 1 pediatric and 1 abdominal radiologist (with 6 and 8 years’ experience, respectively) | Yes | 42.7% noise reduction (LDCT: 12.4 ± 5.0; SAFIRE: 9.5 ± 4.0; GAN: 7.1 ± 2.7), and 39.3% (portal vein) and 45.8% (liver) SNR (LDCT: 22.9 ± 9.3 and 13.1 ± 5.7; SAFIRE: 30.1 ± 12.2 and 17.3 ± 7.6; GAN: 31.9 ± 13.0 and 19.1 ± 7.9) and 30.9% (portal vein) and 32.8% (liver) CNR (LDCT: 16.2 ± 7.5 and 6.4 ± 3.7; SAFIRE: 21.2 ± 9.8 and 8.5 ± 5.0; GAN: 21.2 ± 10.1 and 8.5 ± 4.3) improvements when compared with LDCT images, respectively |
Pham et al. (2019)—France [83] | MRI | 3D GAN | Retrospective | Neonates | Public (Developing Human Connectome Project) and private brain MRI datasets by Reims Hospital, France | Total: 40 images-training: 30; testing: 10 | No | Brain MRI image reconstruction (super-resolution) and segmentation | No | NR | Yes | NR | No | 1.39% SSIM (non-DL: 0.9492; SRCNN: 0.9739; GAN: 0.9624) and 3.42% PSNR (non-DL: 30.70 dB; SRCNN: 35.84 dB; GAN: 31.75 dB) improvements for super-resolution and 12.4% DSC improvement for segmentation (atlas-based: 0.788; intensity-based: 0.818; GAN: 0.886) when compared with non-DL approaches, respectively |
Image Segmentation | ||||||||||||||
Decourt and Duong (2020)—Canada and France [84] | MRI | GAN | Retrospective | 2–18-year-old children | Private cardiac MRI dataset by Hospital for Sick Children in Toronto, Canada | Total: 33 scans-training: 25; validation: 4; testing: 4 | No | Cardiac MRI image segmentation | No | Cross-validation | Yes | Manual segmentation by clinicians | No | 2.4% mean DSC improvement (U-Net: 0.85; GAN: 0.87) with 3.8% mean HD reduction (U-Net: 2.55 mm; GAN: 2.46 mm) |
Guo et al. (2019)—China [85] | US | DNGAN | NR | 0–10-year-old children | Private echocardiography dataset by a Chinese hospital | Total: 87 scans-training: 1765 images; testing: 451 images | No | Echocardiography image segmentation | No | NR | No | NR | No | 4.6% mean DSC (U-Net: 0.88; DNGAN: 0.92), 7.6% mean Jaccard index (U-Net: 0.80; DNGAN: 0.86) and 8.5% mean PPV (U-Net: 0.86; DNGAN: 0.94) improvements but with 0.9% mean sensitivity reduction (U-Net: 0.93; DNGAN: 0.92) |
Kan et al. (2021)—USA [86] | CT | AC-GAN | NR | 1–17-year-old children | Private abdominal CT dataset by Medical College of Wisconsin, USA | Total: 64 scans | No | Abdominal CT image segmentation | No | 4-fold cross-validation | No | NR | No | 3.9% and 0.7% mean DSC improvements (U-Net: 0.697 and 0.923; GAN: 0.724 and 0.929) with 35.0% and 13.3% mean HD reductions (U-Net: 1.090 and 0.390 mm; GAN: 0.709 and 0.338 mm) for uterus and prostate segmentations, respectively |
Karimi-Bidhendi et al. (2020)—USA [87] | MRI | DCGAN | Retrospective | 2–18-year-old children | Public cardiac MRI datasets by Children’s Hospital Los Angeles, USA, and ACDC | Total: 159 scans-training: 41; testing: 118 | No | Cardiac MRI image segmentation | No | 80:20 random split | Yes | Manual image segmentation by a pediatric cardiologist sub-specialized in cardiac MRI | No | 34.5% mean DSC (cvi42: 0.631; U-Net: 0.782; DCGAN: 0.848), 38.5% Jaccard index (cvi42: 0.556; U-Net: 0.702; DCGAN: 0.770), 53.2% R2 (cvi42: 0.629; U-Net: 0.871; DCGAN: 0.963), 30.8% sensitivity (cvi42: 0.666; U-Net: 0.775; DCGAN: 0.872), 0.1% specificity (cvi42: 0.997; U-Net: 0.998; DCGAN: 0.998), 34.0% PPV (cvi42: 0.636; U-Net: 0.839; DCGAN: 0.852) and 0.4% NPV (cvi42: 0.995; U-Net: 0.997; DCGAN: 0.998) improvements with 24.7% mean HD (cvi42: 11.0 mm; U-Net: 11.0 mm; DCGAN: 8.3 mm) and 31.6% MCD reductions (cvi42: 4.4 mm; U-Net: 4.5 mm; DCGAN: 3.0 mm) when compared with cvi42 |
Zhou et al. (2022)—Canada [88] | US | pix2pix GAN | Prospective | Children | Private wrist US dataset by University of Alberta Hospital, Canada | Total: 57 scans-training: 47; testing: 10 | No | Wrist US image segmentation | No | NR | No | Manual segmentation by radiologist and sonographer with 18 and 7 years’ experience, respectively | No | 7.5% sensitivity improvement (U-Net: 0.642; GAN: 0.690) but with 5.6% DSC (U-Net: 0.698; GAN: 0.659), 8.6% Jaccard index (U-Net: 0.548; GAN: 0.501) and 17.8% PPV (U-Net: 0.783; GAN: 0.644) reductions |
Image Synthesis and Data Augmentation | ||||||||||||||
Banerjee et al. (2021)—India [89] | X-ray | GAN | Retrospective | 1–5-year-old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 5863 images | No | CXR image synthesis and data augmentation for DL-CAD model training | No | NR | No | NR | No | 13,921 images were generated for training the DL-CAD model for pneumonia with 6.3% accuracy improvement (with and without GAN: 0.986 and 0.928), respectively |
Diller et al. (2020)—Germany [90] | MRI | PG-GAN | Retrospective | Children with a median age of 15 years (IQR: 12.8–19.3 years) | Private cardiac MRI dataset by German Competence Network for Congenital Heart Defects | Total: 303 scans | No | Cardiac MRI image synthesis and data augmentation | No | NR | No | Ground truth determined by researchers | Yes | Mean rates of PG-GAN generated images identified by clinicians being fake: 70.5% (3 cardiologists) and 86.7% (2 cardiac MRI experts) |
Guo et al. (2021)—China [91] | X-ray | AC-GAN | Retrospective | 1–5-year-old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 5856 images-training: 1500; testing: 4356 | No | CXR image synthesis and data augmentation for DL-CAD model training | No | NR | No | NR | No | 250 pneumonia and 250 normal images generated for DL-CAD model training with 0.6% accuracy improvement (with and without AC-GAN: 0.913 and 0.907), respectively |
Guo et al. (2022)—China [92] | X-ray | AC-GAN | Prospective | 2–14-year-old children | Private CXR dataset by Quanzhou Women’s and Children’s Hospital, China | Total: 6442 images-training: 3600 | No | CXR image synthesis and data augmentation for DL-CAD model training | No | NR | No | NR | No | 2000 images generated with 7.7% and 13.5% differences between ground truth (IS: 2.08) and AC-GAN generated normal (IS: 1.92) and pneumonia (IS: 1.80) images, respectively. The use of AC-GAN images for training the DL-CAD model improved sensitivity (with and without AC-GAN: 0.86 and 0.62), specificity (with and without AC-GAN: 0.97 and 0.90), and accuracy (with and without AC-GAN: 0.91 and 0.76) by 38.7%, 7.8%, and 19.7%, respectively |
Kan et al. (2020)-USA [93] | CT | AC-GAN | NR | 1–18-year-old children | NR | Total: 5 scans | No | Pancreatic CT image synthesis and data augmentation | No | NR | No | NR | No | AC-GAN was able to generate high-resolution pancreas images with fine details and without any streak artifact and irregular pancreas contour when compared with DCGAN |
Khalifa et al. (2022)-Egypt [94] | X-ray | GAN | Retrospective | 1–5-year-old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 624 images | No | CXR image synthesis and data augmentation for DL-CAD model training | No | 80:20 random split | No | Specialist consensus | No | 5616 images generated for training the DL-CAD model for pneumonia with 6.7% accuracy improvement (with and without GAN: 0.990 and 0.928), respectively |
Kora Venu (2021)-USA [95] | X-ray | DCGAN | Retrospective | 1–5 years old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 5856 images-training: 4684; testing: 1172 | No | CXR image synthesis and data augmentation for DL-CAD model training | No | 80:20 random split | No | NR | No | 2152 images generated for training DL-CAD model for pneumonia with 2.6% AUC (with and without DCGAN: 0.993 and 0.968), 6.5% sensitivity (with and without DCGAN: 0.993 and 0.932), 13.5% PPV (with and without DCGAN: 0.990 and 0.872), 6.4% accuracy (with and without DCGAN: 0.987 and 0.928) and 10.0% F1 score improvements (with and without DCGAN: 0.991 and 0.901), respectively |
Li and Ke (2022)-USA [96] | X-ray | DCGAN | Retrospective | 1–5 years old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 5910 images-training: 4300; validation: 724; testing: 886 | No | CXR image synthesis and data augmentation for DL-CAD model training | No | 90:10 random split | No | NR | No | 2700 images generated for training DL-CAD model for pneumonia with 13.7% accuracy (with and without DCGAN: 0.960 and 0.844) and 1.1% AUC (with and without DCGAN: 0.994 and 0.983) improvements, respectively |
Prince et al. (2020)-Canada and USA [97] | CT and MRI | GAN | Retrospective | Children | Public (ATPC Consortium) and private brain CT-MRI datasets by Children’s Hospital Colorado and St. Jude Children’s Research Hospital, USA | Total: 86 CT-MRI scans-training: 53; testing: 33 | No | Brain CT-MRI image synthesis and data augmentation for DL-CAD model training | No | 60:40 random split and 5-fold cross-validation | No | Histology | Yes | 2000 CT and 2000 MRI images generated for training DL-CAD model for adamantinomatous craniopharyngioma with 0.890 (CT) and 0.974 (MRI) accuracy. 17.0% AUC improvement for MRI (radiologists: 0.833; GAN: 0.975) but 1.6% AUC reduction for CT (radiologists: 0.894; GAN: 0.880). |
Su et al. (2021)-China [98] | X-ray | WGAN | Retrospective | 1–19 years old children | Public hand X-ray dataset (RSNA Pediatric Bone Age Challenge) | Total: 14,236 images-training: 12,611; validation: 1425; testing: 200 | No | Hand X-ray image synthesis and data augmentation, and bone age assessment | No | NR | No | Manual assessment by expert clinicians | No | 11,350 images generated with 7.9 IS, 17.3 FID and 20.0% MAE reduction (CNN: 5.29 months; WGAN: 4.23 months) |
Szepesi and Szilágyi (2022)-Hungary and Romania [99] | X-ray | GAN | Retrospective | 1–5 years old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 5856 images-training: 4099; validation: 586; testing: 1171 | No | CXR image synthesis and data augmentation for DL-CAD model training | No | 10-fold cross-validation | No | Expert clinicians | No | 2152 images generated for training DL-CAD model for pneumonia with 0.9820 AUC, 0.9734 sensitivity, 0.9740 PPV, 0.9721 accuracy, and 3.9% F1 score improvement (CNN: 0.9375; GAN: 0.9740) |
Vetrimani et al. (2023)-India [100] | X-ray | DCGAN | Retrospective | 1–8 years old children | Public CXR datasets by Guangzhou Women and Children’s Medical Center, China and from various websites such as Radiopaedia | Total: 987 images-training: 645; validation: 342 | No | CXR image synthesis and data augmentation for DL-CAD model training | No | NR | No | NR | No | Additional images generated by DCGAN for training DL-CAD model for laryngotracheobronchitis with 0.8791 sensitivity, 0.854 PPV, 0.8832 accuracy and 0.8666 F1 score. |
Image Translation | ||||||||||||||
Chen et al. (2021)-China and USA [101] | MRI | 3D CycleGAN | Retrospective | Neonates | Private brain MRI datasets by Xi’an Jiaotong University, China and University of North Carolina, USA | Total: 40 images | No | Image translation (for domain adaptation in brain MRI image segmentation) | No | NR | No | NR | No | 1.2% mean DSC improvement (with and without 3D CycleGAN: 0.86 and 0.85) with 12.8% mean HD (with and without 3D CycleGAN: 13.03 and 14.94 mm) and 16.0% MSD (with and without 3D CycleGAN: 0.23 and 0.27 mm) reductions, respectively |
Hržić et al. (2021)-Austria, Croatia and Germany [102] | X-ray | CycleGAN | Retrospective | Children (mean age: 11 ± 4 years) | Private wrist X-ray dataset by Medical University of Graz, Austria | Total: 9672 images-training: 7600; validation: 636; testing: 1436 | No | Wrist X-ray image translation (cast suppression) | No | NR | No | Real castless wrist X-ray images | No | Real castless and CycleGAN generated cast suppressed image histogram similarity scores: 0.998 (correlation) and 222,503 (intersection) with difference values: 59,451 (chi-square distance) and 0.147 (Hellinger distance) |
Kaplan et al. (2022)-USA and Germany [103] | MRI | 3D CycleGAN | Prospective | Neonates (mean PMA: 41.1 ± 1.5 weeks) and infants (mean age: 41.2 ± 1.9 weeks) | Private brain MRI datasets by Washington University and ECHO Program, USA | Total: 137 scans-training: 107; testing: 30 | No | Brain MRI image translation (T1w-to-T2w) | No | NR | Yes | Real T2w MRI images acquired from same patients | No | 9.7% and 7.9% SSIM and DSC improvements (Kaplan-T2w: 0.72 and 0.76; CycleGAN: 0.79 and 0.82) with 18.8% relative MAE reduction (Kaplan-T2w: 6.9; CycleGAN: 5.6) and no statistically significant CNR difference (Kaplan-T2w: 0.76; CycleGAN: 0.63; original images: 0.62), respectively |
Khalili et al. (2019)-The Netherlands [104] | MRI | CycleGAN | NR | Neonates (mean PMA: 30.7 ± 1.0 weeks) | Private brain MRI dataset by University Medical Center Utrecht, The Netherlands | Total: 80 scans-training: 35; testing: 45 | No | Brain MRI image translation between motion blurred and blurless ones for training DL-segmentation model | No | NR | No | NR | No | 6.7% DSC improvement (with and without CycleGAN: 0.80 and 0.75) with 32.4% HD (with and without CycleGAN: 25.0 and 37.0 mm) and 60.5% MSD reductions (with and without CycleGAN: 0.5 and 1.3 mm) for segmentation, respectively. Median subjective image quality and segmentation accuracy ratings (scale 1–5): before (2 and 3) and after motion unsharpness correction (3 and 4), respectively |
Maspero et al. (2020)-The Netherlands [105] | MRI | 2D CGAN | Retrospective | 2.6–19 (mean: 10 ± 5) years old children | Private brain CT and T1w MRI dataset by University Medical Center Utrecht, The Netherlands | Total: 60 CT and MRI scans-training: 30; validation: 10; testing: 20 | No | Brain MRI image translation to CT for radiation therapy planning | No | 4-fold cross-validation | No | Real CT images acquired from same patients | No | DSC: 0.92; MAE: 61 HU for CT images generated from MRI images by CGAN |
Peng et al. (2020)-China, Japan and USA [106] | MRI | 3D GAN | Retrospective | 6–12 months old children | Public brain MRI dataset (Infant Brain Imaging Study) | Total: 578 scans-training: 462; validation: 58; testing: 58 | No | Brain MRI image translation between images acquired 6 months apart | No | NR | No | Real MRI images acquired from same patient 6 months apart | No | 1.5% DSC improvement (U-Net: 0.809; GAN: 0.821) and 7.5% MSD reduction (U-Net: 0.577 mm; GAN: 0.534 mm) but with 16.8% RVD increase (U-Net: 0.0424; GAN: 0.0495) |
Tang et al. (2019)-China and USA [107] | X-ray | CycleGAN | Retrospective | 1–5 years old children and adult | Public CXR datasets by Guangzhou Women and Children’s Medical Center, China and from RSNA Pneumonia Detection Challenge | Total: 17,508 images-training: 16,884; testing: 624 | No | Image translation (for domain adaptation of DL-CAD) | No | 5-fold cross-validation | No | NR | No | 7.8% AUC (with and without CycleGAN: 0.963 and 0.893), 11.1% sensitivity (with and without CycleGAN: 0.929 and 0.836), 12.7% specificity (with and without CycleGAN: 0.911 and 0.808), 12.8% accuracy (with and without CycleGAN: 0.931 and 0.825) and 8.1% F1 score (with and without CycleGAN: 0.930 and 0.860) improvements, respectively |
Tor-Diez et al. (2020)-USA [108] | MRI | CycleGAN | NR | Children | Private brain MRI datasets by Children’s National Hospital, Children’s Hospital of Philadelphia and Children’s Hospital of Colorado, USA | Total: 18 scans | No | Image translation (for domain adaptation in brain MRI image segmentation) | No | Leave-one-out cross-validation | No | NR | No | 18.3% DSC improvement for anterior visual pathway segmentation (U-Net: 0.509; CycleGAN: 0.602) |
Wang et al. (2021)-USA [109] | MRI | CycleGAN | Retrospective | 2 groups of children (median ages: 8.3 and 6.4 years; ranges: 1–20 and 2–14 years), respectively | Private brain CT and T1w MRI datasets by St Jude Children’s Research Hospital, USA | Total: 132 CT and MRI scans-training: 125; testing: 7 | No | Brain MRI image translation to CT for radiation therapy planning | No | NR | No | Real CT images acquired from same patients | No | SSIM: 0.90; DSC of air/bone: 0.86/0.81; MAE: 65.3 HU; PSNR: 28.5 dB for CT images generated from MRI images by CycleGAN |
Wang et al. (2021)-USA [110] | MRI | CycleGAN | Retrospective | 1.1–21.3 years old children and adult | Private brain and pelvic CT and MRI datasets by St Jude Children’s Research Hospital, USA | Total: 141 CT and MRI scans; training: 136; testing: 5 | No | Pelvic MRI image translation to CT for radiation therapy planning | No | NR | No | Real CT images acquired from same patients | No | Mean SSIM: 0.93 and 0.93; MAE: 52.4 and 85.4 HU; ME: −3.4 and −6.6 HU; PSNR: 30.6 and 29.2 dB for CT images generated from T1w and T2w MRI images by CycleGAN, respectively |
Wang et al. (2022)-USA [111] | MRI | CycleGAN | Retrospective | 1.1–20.3 (median: 9.0) years old children and adult | Private brain CT and MRI datasets by St. Jude Children’s Research Hospital, USA | Total: 195 CT and MRI scans-training: 150; testing: 45 | No | Brain MRI image translation to CT and RPSP images for radiation therapy planning | No | NR | No | Real CT images acquired from same patients | No | SSIM: 0.92 and 0.91; DSC of air/bone: 0.98/0.83 and 0.97/0.85; MAE: 44.1 and 42.4 HU; ME: 8.6 and 18.8 HU; PSNR: 32.6 and 31.5 dB for CT images generated from T1w and T2w MRI images by CycleGAN, respectively |
Zhao et al. (2019)-China and USA [112] | MRI | CycleGAN | Retrospective | 0–2 years old children | Public brain MRI dataset (UNC/UMN Baby Connectome Project) | Total: 360 scans-training: 252; testing: 108 | No | Image translation (for domain adaptation) | No | NR | No | Original MRI images | No | 14.1% PSNR improvement (non-DL: 29.00 dB; CycleGAN: 33.09 dB) and 33.9% MAE reduction (non-DL: 0.124; CycleGAN: 0.082) for domain adaptation |
Other | ||||||||||||||
Mostapha et al. (2019)-USA [113] | MRI | 3D DCGAN | Retrospective | 1–6-year-old children | Public brain MRI datasets (UNC/UMN Baby Connectome Project and UNC Early Brain Development Study) | Total: 2187 scans | No | Automatic brain MRI image quality assessment | No | 80:20 random split | No | Manual image quality assessment by MRI experts | No | 92.9% sensitivity (VAE: 0.42; DCGAN: 0.81), 2.2% specificity (VAE: 0.93; DCGAN: 0.95), and 47.6% accuracy (VAE: 0.63; DCGAN: 0.93) improvements for automatic image quality assessment, respectively |
2D, two-dimensional; 3D, three-dimensional; AC-GAN, auxiliary classifier generative adversarial network; ACDC, Automated Cardiac Diagnosis Challenge of 2017 Medical Image Computing and Computer Assisted Intervention; ADHD, attention deficit hyperactivity disorder; AI, artificial intelligence; AIGAN, attention-encoding integrated generative adversarial network; ATPC, Advancing Treatment for Pediatric Craniopharyngioma; AUC, area under the receiver operating characteristic curve; CAD, computer-aided detection and diagnosis; CGAN, conditional generative adversarial network; CNN, convolutional neural network; CNR, contrast-to-noise ratio; cvi42, a commercial deep learning-based segmentation product (Circle Cardiovascular Imaging, Calgary, Alberta, Canada); CT, computed tomography; CXR, chest X-ray; CycleGAN, cycle-consistent generative adversarial network; DCGAN, deep convolutional generative adversarial network; DL, deep learning; DNGAN, dual network generative adversarial network; DSC, Dice similarity coefficient; ECHO, Environmental Influences on Child Health Outcomes; FID, Fréchet inception distance; HD, Hausdorff distance; HU, Hounsfield unit; IQR, interquartile range; IS, inception score; Kaplan-T2w, a registration-based method for T1w-to-T2w translation; LDCT, low-dose computed tomography; MAE, mean absolute error; MCD, mean contour distance; ME, voxel-based mean error; MRI, magnetic resonance imaging; MSD, mean surface distance; NPV, negative predictive value; NR, not reported; PET, positron emission tomography; PG-GAN, progressive generative adversarial network; PMA, postmenstrual age; PPV, positive predictive value; PSNR, peak signal-to-noise ratio; R2, coefficient of determination; RPSP, relative proton stopping power; RSNA, Radiological Society of North America; RVD, relative volume difference; SAFIRE, sinogram affirmed iterative reconstruction; SDAM, spatial deformable aggregation module; SNR, signal-to-noise ratio; SRCNN, super-resolution convolutional neural network; SSIM, structural similarity index; SVDD, support vector data description; T1w, T1-weighted; T2w, T2-weighted; TransGAN, transformer-based generative adversarial network; UMN, University of Minnesota; UNC, University of North Carolina; US, ultrasound; VAE, variational autoencoder; VSMD, voxel-scale metabolic difference; WGAN, Wasserstein generative adversarial network.
Image synthesis and data augmentation (n = 12) [89,90,91,92,93,94,95,96,97,98,99,100] and image translation (n = 12) [101,102,103,104,105,106,107,108,109,110,111,112] were the commonest application areas of the GAN in pediatric radiology. Other application areas included image segmentation (n = 5) [84,85,86,87,88], image reconstruction (n = 4) [80,81,82,83], disease diagnosis (n = 3) [77,78,79], and image quality assessment (n = 1) [113]. However, none of the GAN models involved in these studies were commercially available [77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113]. In all twenty-nine studies that compared their GAN model performances with those of other approaches, the GAN models outperformed the others by 0.1–158.6% [77,78,79,80,81,82,83,84,85,86,87,88,89,91,92,94,95,96,97,98,99,101,103,104,106,107,108,112,113]. The highest accuracy and AUC of GAN-based disease diagnosis were 0.978 [78] and 0.900 [77] for brain MRI-based autism diagnosis, respectively. The performances of GAN-based image reconstruction reached 0.991 structural similarity index (SSIM) and 38.36 dB peak signal-to-noise ratio (PSNR) for super-resolution in chest X-ray (CXR) [80], and 31.9 signal-to-noise ratio (SNR) and 21.2 contrast-to-noise ratio (CNR) for abdominal CT denoising [82]. The top-performing GAN-based image segmentation models achieved 0.929 Dice similarity coefficient (DSC) and 0.338 mm Hausdorff distance (HD) for prostate CT segmentation [86], 0.86 Jaccard index, 0.92 sensitivity, and 0.94 PPV for echocardiography segmentation [85], and 0.998 specificity and NPV for cardiac MRI segmentation [87]. The GAN-based image synthesis and data augmentation for training deep learning-based CAD (DL-CAD) models for pneumonia on CXR boosted the AUC, sensitivity, PPV, F1 score, specificity, and accuracy up to 0.994 [96], 0.993, 0.990, and 0.991 [95], 0.97 [92], and 0.990 [94], respectively. The use of the GAN for image translation from brain MRI to CT images achieved up to 0.93 SSIM [110], 0.98 DSC, 32.6 dB PSNR, and 42.4 Hounsfield unit (HU) mean absolute error (MAE) [111]. For GAN-based domain adaptation (image translation) in brain MRI segmentation, up to 0.86 DSC, 13.03 mm HD, and 0.23 mm mean surface distance (MSD) were attained [101]. The application of the GAN to automatic image quality assessment yielded 0.81 sensitivity, 0.95 specificity, and 0.93 accuracy [113]. Table 4 summarizes these key findings.
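For readers who wish to reproduce the image quality and segmentation metrics reported above, the sketch below shows how SSIM, PSNR, and DSC are typically computed; it assumes NumPy and scikit-image are available, and the random arrays are toy stand-ins for real image pairs rather than data from any included study.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def dice(seg: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                        # toy reference image
out = np.clip(ref + 0.05 * rng.standard_normal((64, 64)), 0, 1)   # toy "reconstruction"
print(structural_similarity(ref, out, data_range=1.0))    # SSIM: 1.0 means identical
print(peak_signal_noise_ratio(ref, out, data_range=1.0))  # PSNR in dB: higher is better
print(dice(ref > 0.5, out > 0.5))                         # DSC: 1.0 means perfect overlap
```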
Collectively, the included studies covered pediatric patients aged from 0 to 21 years [77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114]. The average dataset size for GAN model development was 5799 images (range: 40–17,508 images) for the studies reporting image counts [79,80,82,83,89,91,92,94,95,96,98,99,100,101,102,107] and 241 scans (range: 5–2187 scans) for those reporting scan counts [77,78,81,84,85,86,87,88,90,93,97,103,104,105,106,108,109,110,111,112,113]. However, no study calculated the required sample size [77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113]. Except for two studies that used both public and private datasets [83,97] and one that did not report the dataset source [93], half of the rest (n = 17) used public datasets [77,78,79,80,87,89,91,94,95,96,98,99,100,106,107,112,113], and the other half (n = 17) collected their own data [81,82,84,85,86,88,90,92,101,102,103,104,105,108,109,110,111]. The most popular public dataset was the chest X-ray dataset consisting of 1741 normal and 4346 pneumonia images (6087 in total) of 1–5-year-old children from the Guangzhou Women and Children’s Medical Center, China, which was used in 10 studies [79,80,89,91,94,95,96,99,100,107].
However, about 80% of the included studies (n = 29) were retrospective [77,78,79,80,81,82,83,84,87,89,90,91,94,95,96,97,98,99,100,101,102,105,106,107,109,110,111,112,113], only three were prospective [88,92,103], and the other five did not report the study design [85,86,93,104,108]. Additionally, about two-thirds of the studies (n = 23) did not report the approach used for model internal validation [77,78,79,82,83,85,88,89,90,91,92,93,98,100,101,102,103,104,106,109,110,111,112], and just over one-fifth (n = 8) used cross-validation to address the small sample size issue [81,84,86,97,99,105,107,108]. Around 90% of the studies did not conduct external validation of their models (n = 32) [77,78,79,80,81,85,86,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,104,105,106,107,108,109,110,111,112,113] or compare their model performances with those of clinicians (n = 34) [77,78,79,80,81,83,84,85,86,87,88,89,91,92,93,94,95,96,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113]. Moreover, the reference standard for ground truth establishment was not stated in around half of the included papers (n = 17) [77,78,79,83,85,86,89,91,92,93,95,96,100,101,104,107,108].
4. Discussion
This article is the first systematic review of the generative AI framework, the GAN, in pediatric radiology, covering MRI [77,78,83,84,87,90,97,101,103,104,105,106,108,109,110,111,112,113], X-ray [79,80,89,91,92,94,95,96,98,99,100,102,107], CT [82,86,93,97], ultrasound [85,88], and PET [81]. Hence, it advances the previous literature reviews about general AI applications [67], and specific uses in radiation dose optimization [26], CAD [29], and chest imaging [28] in pediatric radiology published between 2021 and 2023, none of which focused on the GAN. Unsurprisingly, more than 80% of the studies applied the GAN to MRI and X-ray: the multiplanar imaging capability and excellent soft-tissue contrast of MRI, and the lower operator dependence and absent or low radiation dose of both modalities, account for their popularity in pediatric radiology [26,115,116]. Also, it is within expectation that the basic GAN architecture was the most commonly used architecture because it became available earlier than its variants [56,59,63]. The commonest use of the basic GAN was for image synthesis and data augmentation [77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113], which was also one of the most popular GAN applications in the included studies [89,90,91,92,93,94,95,96,97,98,99,100]. This aligns with the original purpose of the basic GAN, namely the creation of new and original images [63]. The CycleGAN was the second most common GAN architecture used in the included studies because its strength is image translation without the use of a paired training dataset [62,101,102,109]. A closer look at the findings presented in Table 3 reveals that all but two image translation studies used the CycleGAN [101,102,103,104,107,108,109,110,111,112]. It is always challenging to obtain paired datasets to train GAN models for various image translation tasks [102,109]. For example, it is often unrealistic to perform both MRI and CT examinations on the same pediatric patients, resulting in the unavailability of the paired MRI-CT dataset required for training the basic GAN to translate MRI to CT. However, the CycleGAN overcomes this issue by using two generators and two discriminators that convert MRI to CT images and vice versa (known as inverse transformation), creating pseudo image pairs to accomplish the image translation training. In this way, data collection becomes easier because only unpaired MRI and CT images from different patients are required [62,109].
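The following minimal PyTorch sketch illustrates this two-generator, two-discriminator arrangement with the cycle-consistency loss; the tiny convolutional networks, random tensors, and loss weight are placeholders and do not correspond to the implementation of any included study.

```python
import torch
import torch.nn as nn

def tiny_net() -> nn.Module:
    """Placeholder backbone standing in for a real generator/discriminator."""
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1))

G_mr2ct, G_ct2mr = tiny_net(), tiny_net()  # two generators (MRI->CT, CT->MRI)
D_ct, D_mr = tiny_net(), tiny_net()        # two discriminators (one per domain)
l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

mr = torch.randn(4, 1, 64, 64)  # unpaired MRI batch (toy data)
ct = torch.randn(4, 1, 64, 64)  # unpaired CT batch from different patients

fake_ct, fake_mr = G_mr2ct(mr), G_ct2mr(ct)
# Adversarial terms: each generator tries to fool the discriminator of its target domain
adv = bce(D_ct(fake_ct), torch.ones_like(D_ct(fake_ct))) + \
      bce(D_mr(fake_mr), torch.ones_like(D_mr(fake_mr)))
# Cycle-consistency (inverse transformation) terms: MRI->CT->MRI should recover the input
cycle = l1(G_ct2mr(fake_ct), mr) + l1(G_mr2ct(fake_mr), ct)
loss_generators = adv + 10.0 * cycle  # weight of 10 follows the original CycleGAN paper
```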
About 80% of the included studies compared their GAN model performances with those of other approaches for benchmarking and indicated that their GAN models outperformed the others [77,78,79,80,81,82,83,84,85,86,87,88,89,91,92,94,95,96,97,98,99,101,103,104,106,107,108,112,113]. Additionally, the absolute performance figures of the best-performing GAN models appear competitive with those of other state-of-the-art approaches [77,78,80,82,85,86,87,92,94,95,96,101,110,111,113]. However, the findings from these studies should be used with caution because of the following methodological weaknesses [29]. No study calculated the required sample size for GAN model development [77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113]. The datasets used were as small as 40 images [83,101] or 5 scans [93]. Although cross-validation as an internal validation approach can address the small dataset issue to some extent [29], only about one-fifth of the studies used it [81,84,86,97,99,105,107,108]. Additionally, just a quarter of the studies covered a wide age range of pediatric patients [84,86,87,93,98,105,109,110,111]. It is well known that many existing DL models lack generalizability because they are trained on patient data of limited number and variety [26,50,117]. The variety issue of the included studies was compounded by the retrospective nature of about 80% of them [77,78,79,80,81,82,83,84,87,89,90,91,94,95,96,97,98,99,100,101,102,105,106,107,109,110,111,112,113], and around 60% of these retrospective studies used public datasets, which further limited data variation [77,78,79,80,87,89,91,94,95,96,98,99,100,106,107,112,113]. The most popular public dataset used in the included studies was the one from the Guangzhou Women and Children’s Medical Center, China [79,80,89,91,94,95,96,99,100,107]. However, it is important to note that this public dataset has several image quality issues that could affect DL model training and hence the eventual performance [118,119]. Hence, the performances of the GAN models covered in this review might not be realized in other settings [26,50,117].
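As an aside on the cross-validation point, the sketch below shows a 4-fold split like the one reported by Kan et al. [86]; it assumes scikit-learn is available, and the array of scan indices is a toy stand-in for an actual dataset.

```python
import numpy as np
from sklearn.model_selection import KFold

scans = np.arange(64)  # toy stand-ins for the 64 CT scans in [86]
kf = KFold(n_splits=4, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(scans), start=1):
    # Every scan serves as held-out test data exactly once, so the whole
    # small dataset contributes to the fold-averaged performance estimate
    print(f"fold {fold}: train={len(train_idx)} scans, test={len(test_idx)} scans")
```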
It is noted that no GAN model of the included studies was commercially available [77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113]. Again, this is within expectation because the GAN emerged only about a decade ago. In contrast, the convolutional neural network (CNN), another common DL architecture in medical imaging and a deductive AI technique, has been available since the 1980s, and commercial companies have already used it to develop various products, such as Canon Medical Systems’ Advanced Intelligent Clear-IQ Engine (AiCE) (Tochigi, Japan), General Electric Healthcare’s TrueFidelity (Chicago, IL, USA), ClariPI’s ClariCT.AI (Seoul, Republic of Korea), Samsung Electronics Co., Ltd.’s SimGrid (Suwon-si, Republic of Korea), and Subtle Medical’s SubtlePET 1.3 (Menlo Park, CA, USA), for radiation dose optimization (denoising) in pediatric CT, X-ray, and PET [1,26].
As a result of the increasing number of GAN publications in pediatric radiology and the popularity of another generative AI application, the Chat Generative Pre-Trained Transformer (ChatGPT), commercial companies are expected to consider using the GAN to develop various applications in pediatric radiology in the future [54,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113]. However, based on the previous trend of CNN-based commercial product development for pediatric radiology, such GAN-based commercial solutions are unlikely to become available within the next few years [1,26].
Even when GAN-based applications reach the market after several years, developers should disclose their model external validation results, the reference standards used for the validation, and their model performances against those of clinicians on the same tasks to attract potential customers [29,73,74,120]. According to Table 3, around 90% of the included studies did not conduct external validation of their models [77,78,79,80,81,85,86,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,104,105,106,107,108,109,110,111,112,113] or compare their model performances with those of clinicians [77,78,79,80,81,83,84,85,86,87,88,89,91,92,93,94,95,96,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113]. Moreover, the reference standard for ground truth establishment was not stated in around half of the included papers [77,78,79,83,85,86,89,91,92,93,95,96,100,101,104,107,108]. Hence, it would be difficult to earn pediatric clinicians’ trust in GAN-based applications for image translation, segmentation, reconstruction, quality assessment, synthesis and data augmentation, and disease diagnosis, as there is currently a lack of trustworthy findings to convince them [77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113].
There are two major limitations in this systematic review. First, a single author, albeit one with more than 20 years of literature review experience, selected the articles and extracted and synthesized the data [26,29]. As per a recent methodological systematic review, this arrangement is appropriate when the single reviewer is experienced [26,29,70,121,122,123]. Additionally, the potential bias was mitigated to a certain degree by the use of the PRISMA guidelines, the QATSDD, and a data extraction form (Table 3) developed from recent systematic reviews on the GAN for image classification and segmentation in radiology and on AI for radiation dose optimization and CAD in pediatric radiology, and from a narrative review about the GAN in adult brain imaging [26,29,56,62,69,76]. Second, only English papers were included, which could affect the comprehensiveness of this systematic review [26,29,72,124,125,126]. Nevertheless, this review covers a wider range of GAN applications in pediatric radiology than the previous review papers [26,28,29,67].
5. Conclusions
This systematic review shows that the GAN can be applied to pediatric MRI, X-ray, CT, ultrasound, and PET for image translation, segmentation, reconstruction, quality assessment, synthesis and data augmentation, and disease diagnosis. About 80% of the included studies compared their GAN model performances with those of other approaches and indicated that their GAN models outperformed the others by 0.1–158.6%. Also, the absolute performance figures of the best-performing models appear competitive with those of other state-of-the-art approaches. However, these findings should be used with caution because of a number of methodological weaknesses, including absence of sample size calculation, small dataset sizes, narrow data variety, limited use of cross-validation, narrow patient cohort coverage, undisclosed reference standards, retrospective data collection, overreliance on public datasets, and lack of model external validation and of model performance comparison with pediatric clinicians. More robust methods will be necessary in future GAN studies to address these methodological issues; otherwise, trustworthy findings to support the commercialization of these models cannot be obtained. This would, in turn, hinder the clinical adoption of GAN-based applications in pediatric radiology, and the potential advantages of the GAN would not be widely realized.
The author declares no conflict of interest.
Figure 1. PRISMA flow diagram for the systematic review of the generative adversarial network (GAN) in pediatric radiology.
Table 1. Patient/population, intervention, comparison, and outcome (PICO) table for the systematic review of the generative adversarial network (GAN) in pediatric radiology.
Patient/Population | Pediatric patients aged from 0 to 21 years |
Intervention | Use of GAN to accomplish tasks involved in pediatric radiology |
Comparison | GAN versus other approaches to accomplish the same task in pediatric radiology |
Outcome | Performance of task accomplishment |
Table 2. Article inclusion and exclusion criteria.
Inclusion Criteria | Exclusion Criteria |
---|---|
Peer-reviewed original research articles on GAN applications in pediatric radiology published in English | Grey literature; conference abstracts; editorials, reviews, perspectives, opinions, and commentaries; non-peer-reviewed papers |
Table 4. Absolute performance figures of the best-performing generative adversarial network (GAN) models for various applications in pediatric radiology.
GAN Application | Best Model Performance |
---|---|
Disease diagnosis | 0.978 accuracy and 0.900 AUC |
Image quality assessment | 0.81 sensitivity, 0.95 specificity, and 0.93 accuracy |
Image reconstruction | 0.991 SSIM, 38.36 dB PSNR, 31.9 SNR and 21.2 CNR |
Image segmentation | 0.929 DSC, 0.338 mm HD, 0.86 Jaccard index, 0.92 sensitivity, 0.998 specificity and NPV, and 0.94 PPV |
Image synthesis and data augmentation for DL-CAD performance enhancement | 0.994 AUC, 0.993 sensitivity, 0.990 PPV, 0.991 F1 score, 0.97 specificity, and 0.990 accuracy |
Image translation | 0.93 SSIM, 0.98 DSC, 32.6 dB PSNR, 42.4 HU MAE, 13.03 mm HD and 0.23 mm MSD |
AUC, area under the receiver operating characteristic curve; CAD, computer-aided detection and diagnosis; CNR, contrast-to-noise ratio; DL, deep learning; DSC, Dice similarity coefficient; HD, Hausdorff distance; MAE, mean absolute error; MSD, mean surface distance; NPV, negative predictive value; PPV, positive predictive value; PSNR, peak signal-to-noise ratio; SNR, signal-to-noise ratio; SSIM, structural similarity index.
References
1. Wolterink, J.M.; Mukhopadhyay, A.; Leiner, T.; Vogl, T.J.; Bucher, A.M.; Išgum, I. Generative Adversarial Networks: A Primer for Radiologists. RadioGraphics; 2021; 41, pp. 840-857. [DOI: https://dx.doi.org/10.1148/rg.2021200151] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33891522]
2. Pesapane, F.; Codari, M.; Sardanelli, F. Artificial intelligence in medical imaging: Threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur. Radiol. Exp.; 2018; 2, 35. [DOI: https://dx.doi.org/10.1186/s41747-018-0061-6]
3. Ng, C.K.C.; Sun, Z.; Jansen, S. Comparison of performance of micro-computed tomography (Micro-CT) and synchrotron radiation CT in assessing coronary stenosis caused by calcified plaques in coronary artery phantoms. J. Vasc. Dis.; 2023; in press
4. Parczewski, M.; Kufel, J.; Aksak-Wąs, B.; Piwnik, J.; Chober, D.; Puzio, T.; Lesiewska, L.; Białkowski, S.; Rafalska-Kosior, M.; Wydra, J. et al. Artificial neural network based prediction of the lung tissue involvement as an independent in-hospital mortality and mechanical ventilation risk factor in COVID-19. J. Med. Virol.; 2023; 95, e28787. [DOI: https://dx.doi.org/10.1002/jmv.28787] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37219059]
5. Althaqafi, T.; Al-Ghamdi, A.S.A.-M.; Ragab, M. Artificial Intelligence Based COVID-19 Detection and Classification Model on Chest X-ray Images. Healthcare; 2023; 11, 1204. [DOI: https://dx.doi.org/10.3390/healthcare11091204]
6. Alsharif, W.; Qurashi, A. Effectiveness of COVID-19 diagnosis and management tools: A review. Radiography; 2021; 27, pp. 682-687. [DOI: https://dx.doi.org/10.1016/j.radi.2020.09.010] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33008761]
7. Alzubaidi, M.; Zubaydi, H.D.; Bin-Salem, A.A.; Abd-Alrazaq, A.A.; Ahmed, A.; Househ, M. Role of deep learning in early detection of COVID-19: Scoping review. Comput. Methods Programs Biomed. Updat.; 2021; 1, 100025. [DOI: https://dx.doi.org/10.1016/j.cmpbup.2021.100025]
8. Kufel, J.; Bargieł, K.; Koźlik, M.; Czogalik, Ł.; Dudek, P.; Jaworski, A.; Cebula, M.; Gruszczyńska, K. Application of artificial intelligence in diagnosing COVID-19 disease symptoms on chest X-rays: A systematic review. Int. J. Med. Sci.; 2022; 19, pp. 1743-1752. [DOI: https://dx.doi.org/10.7150/ijms.76515]
9. Pang, S.; Wang, S.; Rodríguez-Patón, A.; Li, P.; Wang, X. An artificial intelligent diagnostic system on mobile Android terminals for cholelithiasis by lightweight convolutional neural network. PLoS ONE; 2019; 14, e0221720. [DOI: https://dx.doi.org/10.1371/journal.pone.0221720]
10. Kufel, J.; Bargieł, K.; Koźlik, M.; Czogalik, Ł.; Dudek, P.; Jaworski, A.; Magiera, M.; Bartnikowska, W.; Cebula, M.; Nawrat, Z. et al. Usability of Mobile Solutions Intended for Diagnostic Images—A Systematic Review. Healthcare; 2022; 10, 2040. [DOI: https://dx.doi.org/10.3390/healthcare10102040]
11. Verma, A.; Amin, S.B.; Naeem, M.; Saha, M. Detecting COVID-19 from chest computed tomography scans using AI-driven android application. Comput. Biol. Med.; 2022; 143, 105298. [DOI: https://dx.doi.org/10.1016/j.compbiomed.2022.105298]
12. Patel, B.; Makaryus, A.N. Artificial Intelligence Advances in the World of Cardiovascular Imaging. Healthcare; 2022; 10, 154. [DOI: https://dx.doi.org/10.3390/healthcare10010154] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35052317]
13. Brady, S.L.; Trout, A.T.; Somasundaram, E.; Anton, C.G.; Li, Y.; Dillman, J.R. Improving Image Quality and Reducing Radiation Dose for Pediatric CT by Using Deep Learning Reconstruction. Radiology; 2021; 298, pp. 180-188. [DOI: https://dx.doi.org/10.1148/radiol.2020202317] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33201790]
14. Jeon, P.-H.; Kim, D.; Chung, M.-A. Estimates of the image quality in accordance with radiation dose for pediatric imaging using deep learning CT: A phantom study. Proceedings of the 2022 IEEE International Conference on Big Data and Smart Computing (BigComp); Daegu, Republic of Korea, 17–22 January 2022; pp. 352-356. [DOI: https://dx.doi.org/10.1109/bigcomp54360.2022.00078]
15. Krueger, P.-C.; Ebeling, K.; Waginger, M.; Glutig, K.; Scheithauer, M.; Schlattmann, P.; Proquitté, H.; Mentzel, H.-J. Evaluation of the post-processing algorithms SimGrid and S-Enhance for paediatric intensive care patients and neonates. Pediatr. Radiol.; 2022; 52, pp. 1029-1037. [DOI: https://dx.doi.org/10.1007/s00247-021-05279-2]
16. Lee, S.; Choi, Y.H.; Cho, Y.J.; Lee, S.B.; Cheon, J.-E.; Kim, W.S.; Ahn, C.K.; Kim, J.H. Noise reduction approach in pediatric abdominal CT combining deep learning and dual-energy technique. Eur. Radiol.; 2021; 31, pp. 2218-2226. [DOI: https://dx.doi.org/10.1007/s00330-020-07349-9]
17. Nagayama, Y.; Goto, M.; Sakabe, D.; Emoto, T.; Shigematsu, S.; Oda, S.; Tanoue, S.; Kidoh, M.; Nakaura, T.; Funama, Y. et al. Radiation Dose Reduction for 80-kVp Pediatric CT Using Deep Learning-Based Reconstruction: A Clinical and Phantom Study. AJR Am. J. Roentgenol.; 2022; 219, pp. 315-324. [DOI: https://dx.doi.org/10.2214/AJR.21.27255]
18. Sun, J.; Li, H.; Li, H.; Li, M.; Gao, Y.; Zhou, Z.; Peng, Y. Application of deep learning image reconstruction algorithm to improve image quality in CT angiography of children with Takayasu arteritis. J. X-ray Sci. Technol.; 2022; 30, pp. 177-184. [DOI: https://dx.doi.org/10.3233/XST-211033] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34806646]
19. Sun, J.; Li, H.; Li, J.; Yu, T.; Li, M.; Zhou, Z.; Peng, Y. Improving the image quality of pediatric chest CT angiography with low radiation dose and contrast volume using deep learning image reconstruction. Quant. Imaging Med. Surg.; 2021; 11, pp. 3051-3058. [DOI: https://dx.doi.org/10.21037/qims-20-1158]
20. Sun, J.; Li, H.; Li, J.; Cao, Y.; Zhou, Z.; Li, M.; Peng, Y. Performance evaluation of using shorter contrast injection and 70 kVp with deep learning image reconstruction for reduced contrast medium dose and radiation dose in coronary CT angiography for children: A pilot study. Quant. Imaging Med. Surg.; 2021; 11, pp. 4162-4171. [DOI: https://dx.doi.org/10.21037/qims-20-1159]
21. Sun, J.; Li, H.; Gao, J.; Li, J.; Li, M.; Zhou, Z.; Peng, Y. Performance evaluation of a deep learning image reconstruction (DLIR) algorithm in “double low” chest CTA in children: A feasibility study. Radiol. Med.; 2021; 126, pp. 1181-1188. [DOI: https://dx.doi.org/10.1007/s11547-021-01384-2]
22. Sun, J.; Li, H.; Wang, B.; Li, J.; Li, M.; Zhou, Z.; Peng, Y. Application of a deep learning image reconstruction (DLIR) algorithm in head CT imaging for children to improve image quality and lesion detection. BMC Med. Imaging; 2021; 21, 108. [DOI: https://dx.doi.org/10.1186/s12880-021-00637-w] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34238229]
23. Theruvath, A.J.; Siedek, F.; Yerneni, K.; Muehe, A.M.; Spunt, S.L.; Pribnow, A.; Moseley, M.; Lu, Y.; Zhao, Q.; Gulaka, P. et al. Validation of Deep Learning-based Augmentation for Reduced 18F-FDG Dose for PET/MRI in Children and Young Adults with Lymphoma. Radiol. Artif. Intell.; 2021; 3, e200232. [DOI: https://dx.doi.org/10.1148/ryai.2021200232]
24. Yoon, H.; Kim, J.; Lim, H.J.; Lee, M.-J. Image quality assessment of pediatric chest and abdomen CT by deep learning reconstruction. BMC Med. Imaging; 2021; 21, 146. [DOI: https://dx.doi.org/10.1186/s12880-021-00677-2]
25. Zhang, K.; Shi, X.; Xie, S.-S.; Sun, J.-H.; Liu, Z.-H.; Zhang, S.; Song, J.-Y.; Shen, W. Deep learning image reconstruction in pediatric abdominal and chest computed tomography: A comparison of image quality and radiation dose. Quant. Imaging Med. Surg.; 2022; 12, pp. 3238-3250. [DOI: https://dx.doi.org/10.21037/qims-21-936] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35655845]
26. Ng, C.K.C. Artificial Intelligence for Radiation Dose Optimization in Pediatric Radiology: A Systematic Review. Children; 2022; 9, 1044. [DOI: https://dx.doi.org/10.3390/children9071044] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35884028]
27. Helm, E.J.; Silva, C.T.; Roberts, H.C.; Manson, D.; Seed, M.T.M.; Amaral, J.G.; Babyn, P.S. Computer-aided detection for the identification of pulmonary nodules in pediatric oncology patients: Initial experience. Pediatr. Radiol.; 2009; 39, pp. 685-693. [DOI: https://dx.doi.org/10.1007/s00247-009-1259-9] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/19418048]
28. Schalekamp, S.; Klein, W.M.; van Leeuwen, K.G. Current and emerging artificial intelligence applications in chest imaging: A pediatric perspective. Pediatr. Radiol.; 2022; 52, pp. 2120-2130. [DOI: https://dx.doi.org/10.1007/s00247-021-05146-0]
29. Ng, C.K.C. Diagnostic Performance of Artificial Intelligence-Based Computer-Aided Detection and Diagnosis in Pediatric Radiology: A Systematic Review. Children; 2023; 10, 525. [DOI: https://dx.doi.org/10.3390/children10030525]
30. Nam, J.G.; Park, S.; Hwang, E.J.; Lee, J.H.; Jin, K.-N.; Lim, K.Y.; Vu, T.H.; Sohn, J.H.; Hwang, S.; Goo, J.M. et al. Development and Validation of Deep Learning-based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs. Radiology; 2019; 290, pp. 218-228. [DOI: https://dx.doi.org/10.1148/radiol.2018180237]
31. Hwang, E.J.; Park, S.; Jin, K.-N.; Kim, J.I.; Choi, S.Y.; Lee, J.H.; Goo, J.M.; Aum, J.; Yim, J.-J.; Park, C.M. et al. Development and Validation of a Deep Learning-based Automatic Detection Algorithm for Active Pulmonary Tuberculosis on Chest Radiographs. Clin. Infect. Dis.; 2019; 69, pp. 739-747. [DOI: https://dx.doi.org/10.1093/cid/ciy967]
32. Hwang, E.J.; Park, S.; Jin, K.-N.; Kim, J.I.; Choi, S.Y.; Lee, J.H.; Goo, J.M.; Aum, J.; Yim, J.-J.; Cohen, J.G. et al. Development and Validation of a Deep Learning-Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs. JAMA Netw. Open; 2019; 2, e191095. [DOI: https://dx.doi.org/10.1001/jamanetworkopen.2019.1095] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30901052]
33. Liang, C.-H.; Liu, Y.-C.; Wu, M.-T.; Garcia-Castro, F.; Alberich-Bayarri, A.; Wu, F.-Z. Identifying pulmonary nodules or masses on chest radiography using deep learning: External validation and strategies to improve clinical practice. Clin. Radiol.; 2020; 75, pp. 38-45. [DOI: https://dx.doi.org/10.1016/j.crad.2019.08.005] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31521323]
34. Singh, R.; Kalra, M.K.; Nitiwarangkul, C.; Patti, J.A.; Homayounieh, F.; Padole, A.; Rao, P.; Putha, P.; Muse, V.V.; Sharma, A. et al. Deep learning in chest radiography: Detection of findings and presence of change. PLoS ONE; 2018; 13, e0204155. [DOI: https://dx.doi.org/10.1371/journal.pone.0204155] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30286097]
35. Mushtaq, J.; Pennella, R.; Lavalle, S.; Colarieti, A.; Steidler, S.; Martinenghi, C.M.A.; Palumbo, D.; Esposito, A.; Rovere-Querini, P.; Tresoldi, M. et al. Initial chest radiographs and artificial intelligence (AI) predict clinical outcomes in COVID-19 patients: Analysis of 697 Italian patients. Eur. Radiol.; 2021; 31, pp. 1770-1779. [DOI: https://dx.doi.org/10.1007/s00330-020-07269-8]
36. Qin, Z.Z.; Sander, M.S.; Rai, B.; Titahong, C.N.; Sudrungrot, S.; Laah, S.N.; Adhikari, L.M.; Carter, E.J.; Puri, L.; Codlin, A.J. et al. Using artificial intelligence to read chest radiographs for tuberculosis detection: A multi-site evaluation of the diagnostic accuracy of three deep learning systems. Sci. Rep.; 2019; 9, 15000. [DOI: https://dx.doi.org/10.1038/s41598-019-51503-3]
37. Dellios, N.; Teichgraeber, U.; Chelaru, R.; Malich, A.; Papageorgiou, I.E. Computer-aided Detection Fidelity of Pulmonary Nodules in Chest Radiograph. J. Clin. Imaging Sci.; 2017; 7, 8. [DOI: https://dx.doi.org/10.4103/jcis.JCIS_75_16]
38. Kligerman, S.; Cai, L.P.; White, C.S. The Effect of Computer-aided Detection on Radiologist Performance in the Detection of Lung Cancers Previously Missed on a Chest Radiograph. J. Thorac. Imaging; 2013; 28, pp. 244-252. [DOI: https://dx.doi.org/10.1097/RTI.0b013e31826c29ec]
39. Schalekamp, S.; van Ginneken, B.; Koedam, E.; Snoeren, M.M.; Tiehuis, A.M.; Wittenberg, R.; Karssemeijer, N.; Schaefer-Prokop, C.M. Computer-aided Detection Improves Detection of Pulmonary Nodules in Chest Radiographs beyond the Support by Bone-suppressed Images. Radiology; 2014; 272, pp. 252-261. [DOI: https://dx.doi.org/10.1148/radiol.14131315]
40. Sim, Y.; Chung, M.J.; Kotter, E.; Yune, S.; Kim, M.; Do, S.; Han, K.; Kim, H.; Yang, S.; Lee, D.-J. et al. Deep Convolutional Neural Network-based Software Improves Radiologist Detection of Malignant Lung Nodules on Chest Radiographs. Radiology; 2020; 294, pp. 199-209. [DOI: https://dx.doi.org/10.1148/radiol.2019182465]
41. Murphy, K.; Habib, S.S.; Zaidi, S.M.A.; Khowaja, S.; Khan, A.; Melendez, J.; Scholten, E.T.; Amad, F.; Schalekamp, S.; Verhagen, M. et al. Computer aided detection of tuberculosis on chest radiographs: An evaluation of the CAD4TB v6 system. Sci. Rep.; 2020; 10, 5492. [DOI: https://dx.doi.org/10.1038/s41598-020-62148-y]
42. Murphy, K.; Smits, H.; Knoops, A.J.G.; Korst, M.B.J.M.; Samson, T.; Scholten, E.T.; Schalekamp, S.; Schaefer-Prokop, C.M.; Philipsen, R.H.H.M.; Meijers, A. et al. COVID-19 on Chest Radiographs: A Multireader Evaluation of an Artificial Intelligence System. Radiology; 2020; 296, pp. E166-E172. [DOI: https://dx.doi.org/10.1148/radiol.2020201874] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32384019]
43. Park, S.; Lee, S.M.; Lee, K.H.; Jung, K.-H.; Bae, W.; Choe, J.; Seo, J.B. Deep learning-based detection system for multiclass lesions on chest radiographs: Comparison with observer readings. Eur. Radiol.; 2020; 30, pp. 1359-1368. [DOI: https://dx.doi.org/10.1007/s00330-019-06532-x] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31748854]
44. Jacobs, C.; van Rikxoort, E.M.; Murphy, K.; Prokop, M.; Schaefer-Prokop, C.M.; van Ginneken, B. Computer-aided detection of pulmonary nodules: A comparative study using the public LIDC/IDRI database. Eur. Radiol.; 2016; 26, pp. 2139-2147. [DOI: https://dx.doi.org/10.1007/s00330-015-4030-7] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26443601]
45. Setio, A.A.A.; Traverso, A.; de Bel, T.; Berens, M.S.; Bogaard, C.v.D.; Cerello, P.; Chen, H.; Dou, Q.; Fantacci, M.E.; Geurts, B. et al. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge. Med. Image Anal.; 2017; 42, pp. 1-13. [DOI: https://dx.doi.org/10.1016/j.media.2017.06.015]
46. Lo, S.B.; Freedman, M.T.; Gillis, L.B.; White, C.S.; Mun, S.K. Journal Club: Computer-Aided Detection of Lung Nodules on CT with a Computerized Pulmonary Vessel Suppressed Function. AJR Am. J. Roentgenol.; 2018; 210, pp. 480-488. [DOI: https://dx.doi.org/10.2214/ajr.17.18718]
47. Wagner, A.-K.; Hapich, A.; Psychogios, M.N.; Teichgräber, U.; Malich, A.; Papageorgiou, I. Computer-Aided Detection of Pulmonary Nodules in Computed Tomography Using ClearReadCT. J. Med. Syst.; 2019; 43, 58. [DOI: https://dx.doi.org/10.1007/s10916-019-1180-1]
48. Scholten, E.T.; Jacobs, C.; van Ginneken, B.; van Riel, S.; Vliegenthart, R.; Oudkerk, M.; de Koning, H.J.; Horeweg, N.; Prokop, M.; Gietema, H.A. et al. Detection and quantification of the solid component in pulmonary subsolid nodules by semiautomatic segmentation. Eur. Radiol.; 2015; 25, pp. 488-496. [DOI: https://dx.doi.org/10.1007/s00330-014-3427-z]
49. Fischer, A.M.; Varga-Szemes, A.; Martin, S.S.; Sperl, J.I.; Sahbaee, P.; Neumann, D.M.; Gawlitza, J.; Henzler, T.; Johnson, C.M.B.; Nance, J.W. et al. Artificial Intelligence-based Fully Automated Per Lobe Segmentation and Emphysema-quantification Based on Chest Computed Tomography Compared with Global Initiative for Chronic Obstructive Lung Disease Severity of Smokers. J. Thorac. Imaging; 2020; 35, pp. S28-S34. [DOI: https://dx.doi.org/10.1097/RTI.0000000000000500]
50. Ng, C.K.C.; Leung, V.W.S.; Hung, R.H.M. Clinical Evaluation of Deep Learning and Atlas-Based Auto-Contouring for Head and Neck Radiation Therapy. Appl. Sci.; 2022; 12, 11681. [DOI: https://dx.doi.org/10.3390/app122211681]
51. Wang, J.; Chen, Z.; Yang, C.; Qu, B.; Ma, L.; Fan, W.; Zhou, Q.; Zheng, Q.; Xu, S. Evaluation Exploration of Atlas-Based and Deep Learning-Based Automatic Contouring for Nasopharyngeal Carcinoma. Front. Oncol.; 2022; 12, 833816. [DOI: https://dx.doi.org/10.3389/fonc.2022.833816]
52. Brunenberg, E.J.; Steinseifer, I.K.; Bosch, S.v.D.; Kaanders, J.H.; Brouwer, C.L.; Gooding, M.J.; van Elmpt, W.; Monshouwer, R. External validation of deep learning-based contouring of head and neck organs at risk. Phys. Imaging Radiat. Oncol.; 2020; 15, pp. 8-15. [DOI: https://dx.doi.org/10.1016/j.phro.2020.06.006] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33458320]
53. Chen, W.; Li, Y.; Dyer, B.A.; Feng, X.; Rao, S.; Benedict, S.H.; Chen, Q.; Rong, Y. Deep learning vs. atlas-based models for fast auto-segmentation of the masticatory muscles on head and neck CT images. Radiat. Oncol.; 2020; 15, 176. [DOI: https://dx.doi.org/10.1186/s13014-020-01617-0] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32690103]
54. Arora, A.; Arora, A. Generative adversarial networks and synthetic patient data: Current challenges and future perspectives. Future Healthc. J.; 2022; 9, pp. 190-193. [DOI: https://dx.doi.org/10.7861/fhj.2022-0013] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35928184]
55. Ali, H.; Biswas, R.; Mohsen, F.; Shah, U.; Alamgir, A.; Mousa, O.; Shah, Z. The role of generative adversarial networks in brain MRI: A scoping review. Insights Imaging; 2022; 13, 98. [DOI: https://dx.doi.org/10.1186/s13244-022-01237-0] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35662369]
56. Laino, M.E.; Cancian, P.; Politi, L.S.; Della Porta, M.G.; Saba, L.; Savevski, V. Generative Adversarial Networks in Brain Imaging: A Narrative Review. J. Imaging; 2022; 8, 83. [DOI: https://dx.doi.org/10.3390/jimaging8040083] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35448210]
57. Ali, H.; Shah, Z. Combating COVID-19 Using Generative Adversarial Networks and Artificial Intelligence for Medical Images: Scoping Review. JMIR Med. Inform.; 2022; 10, e37365. [DOI: https://dx.doi.org/10.2196/37365]
58. Vey, B.L.; Gichoya, J.W.; Prater, A.; Hawkins, C.M. The Role of Generative Adversarial Networks in Radiation Reduction and Artifact Correction in Medical Imaging. J. Am. Coll. Radiol.; 2019; 16, pp. 1273-1278. [DOI: https://dx.doi.org/10.1016/j.jacr.2019.05.040]
59. Koshino, K.; Werner, R.A.; Pomper, M.G.; Bundschuh, R.A.; Toriumi, F.; Higuchi, T.; Rowe, S.P. Narrative review of generative adversarial networks in medical and molecular imaging. Ann. Transl. Med.; 2021; 9, 821. [DOI: https://dx.doi.org/10.21037/atm-20-6325]
60. Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: A review. Med. Image Anal.; 2019; 58, 101552. [DOI: https://dx.doi.org/10.1016/j.media.2019.101552]
61. Apostolopoulos, I.D.; Papathanasiou, N.D.; Apostolopoulos, D.J.; Panayiotakis, G.S. Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur. J. Nucl. Med. Mol. Imaging; 2022; 49, pp. 3717-3739. [DOI: https://dx.doi.org/10.1007/s00259-022-05805-w]
62. Jeong, J.J.; Tariq, A.; Adejumo, T.; Trivedi, H.; Gichoya, J.W.; Banerjee, I. Systematic Review of Generative Adversarial Networks (GANs) for Medical Image Classification and Segmentation. J. Digit. Imaging; 2022; 35, pp. 137-152. [DOI: https://dx.doi.org/10.1007/s10278-021-00556-w]
63. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst.; 2014; 27, pp. 2672-2680. [DOI: https://dx.doi.org/10.48550/arXiv.1406.2661]
64. Sun, Z.; Ng, C.K.C. Artificial Intelligence (Enhanced Super-Resolution Generative Adversarial Network) for Calcium Deblooming in Coronary Computed Tomography Angiography: A Feasibility Study. Diagnostics; 2022; 12, 991. [DOI: https://dx.doi.org/10.3390/diagnostics12040991]
65. Sun, Z.; Ng, C.K.C. Finetuned Super-Resolution Generative Adversarial Network (Artificial Intelligence) Model for Calcium Deblooming in Coronary Computed Tomography Angiography. J. Pers. Med.; 2022; 12, 1354. [DOI: https://dx.doi.org/10.3390/jpm12091354] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36143139]
66. Al Mahrooqi, K.M.S.; Ng, C.K.C.; Sun, Z. Pediatric Computed Tomography Dose Optimization Strategies: A Literature Review. J. Med. Imaging Radiat. Sci.; 2015; 46, pp. 241-249. [DOI: https://dx.doi.org/10.1016/j.jmir.2015.03.003]
67. Davendralingam, N.; Sebire, N.J.; Arthurs, O.J.; Shelmerdine, S.C. Artificial intelligence in paediatric radiology: Future opportunities. Br. J. Radiol.; 2021; 94, 20200975. [DOI: https://dx.doi.org/10.1259/bjr.20200975] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32941736]
68. Tuysuzoglu, A.; Tan, J.; Eissa, K.; Kiraly, A.P.; Diallo, M.; Kamen, A. Deep Adversarial Context-Aware Landmark Detection for Ultrasound Imaging. Proceedings of the 21st International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2018); Granada, Spain, 16–20 September 2018; pp. 151-158. [DOI: https://dx.doi.org/10.1007/978-3-030-00937-3_18]
69. PRISMA: Transparent Reporting of Systematic Reviews and Meta-Analyses. Available online: http://www.prisma-statement.org/ (accessed on 26 June 2023).
70. Waffenschmidt, S.; Knelangen, M.; Sieben, W.; Bühn, S.; Pieper, D. Single screening versus conventional double screening for study selection in systematic reviews: A methodological systematic review. BMC Med. Res. Methodol.; 2019; 19, 132. [DOI: https://dx.doi.org/10.1186/s12874-019-0782-0]
71. Ng, C.K.C. A review of the impact of the COVID-19 pandemic on pre-registration medical radiation science education. Radiography; 2021; 28, pp. 222-231. [DOI: https://dx.doi.org/10.1016/j.radi.2021.07.026]
72. Xu, L.; Gao, J.; Wang, Q.; Yin, J.; Yu, P.; Bai, B.; Pei, R.; Chen, D.; Yang, G.; Wang, S. et al. Computer-Aided Diagnosis Systems in Diagnosing Malignant Thyroid Nodules on Ultrasonography: A Systematic Review and Meta-Analysis. Eur. Thyroid. J.; 2020; 9, pp. 186-193. [DOI: https://dx.doi.org/10.1159/000504390]
73. Aggarwal, R.; Sounderajah, V.; Martin, G.; Ting, D.S.W.; Karthikesalingam, A.; King, D.; Ashrafian, H.; Darzi, A. Diagnostic accuracy of deep learning in medical imaging: A systematic review and meta-analysis. NPJ Digit. Med.; 2021; 4, 65. [DOI: https://dx.doi.org/10.1038/s41746-021-00438-z]
74. Vasey, B.; Ursprung, S.; Beddoe, B.; Taylor, E.H.; Marlow, N.; Bilbro, N.; Watkinson, P.; McCulloch, P. Association of Clinician Diagnostic Performance with Machine Learning-Based Decision Support Systems: A Systematic Review. JAMA Netw. Open; 2021; 4, e211276. [DOI: https://dx.doi.org/10.1001/jamanetworkopen.2021.1276]
75. Imrey, P.B. Limitations of Meta-analyses of Studies with High Heterogeneity. JAMA Netw. Open; 2020; 3, e1919325. [DOI: https://dx.doi.org/10.1001/jamanetworkopen.2019.19325]
76. Sirriyeh, R.; Lawton, R.; Gardner, P.; Armitage, G. Reviewing studies with diverse designs: The development and evaluation of a new tool. J. Eval. Clin. Pract.; 2012; 18, pp. 746-752. [DOI: https://dx.doi.org/10.1111/j.1365-2753.2011.01662.x] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21410846]
77. Devika, K.; Mahapatra, D.; Subramanian, R.; Oruganti, V.R.M. Outlier-Based Autism Detection Using Longitudinal Structural MRI. IEEE Access; 2022; 10, pp. 27794-27808. [DOI: https://dx.doi.org/10.1109/ACCESS.2022.3157613]
78. Kuttala, D.; Mahapatra, D.; Subramanian, R.; Oruganti, V.R.M. Dense attentive GAN-based one-class model for detection of autism and ADHD. J. King Saud Univ.-Comput. Inf. Sci.; 2022; 34, pp. 10444-10458. [DOI: https://dx.doi.org/10.1016/j.jksuci.2022.11.001]
79. Motamed, S.; Khalvati, F. Inception-GAN for Semi-supervised Detection of Pneumonia in Chest X-rays. Proceedings of the 43rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2021); Jalisco, Mexico, 1–5 November 2021; pp. 3774-3778. [DOI: https://dx.doi.org/10.1109/embc46164.2021.9630473]
80. Dittimi, T.V.; Suen, C.Y. Single Image Super-Resolution for Medical Image Applications. Proceedings of the International Conference on Pattern Recognition and Artificial Intelligence 2020 (ICPRAI 2020); Zhongshan, China, 19–23 October 2020; pp. 660-666. [DOI: https://dx.doi.org/10.1007/978-3-030-59830-3_57]
81. Fu, Y.; Dong, S.; Liao, Y.; Xue, L.; Xu, Y.; Li, F.; Yang, Q.; Yu, T.; Tian, M.; Zhuo, C. A Resource-Efficient Deep Learning Framework for Low-Dose Brain Pet Image Reconstruction and Analysis. Proceedings of the 19th IEEE International Symposium on Biomedical Imaging (ISBI 2022); Kolkata, India, 28–31 March 2022; pp. 1-5. [DOI: https://dx.doi.org/10.1109/isbi52829.2022.9761617]
82. Park, H.S.; Jeon, K.; Lee, J.; You, S.K. Denoising of pediatric low dose abdominal CT using deep learning based algorithm. PLoS ONE; 2022; 17, e0260369. [DOI: https://dx.doi.org/10.1371/journal.pone.0260369]
83. Pham, C.-H.; Tor-Diez, C.; Meunier, H.; Bednarek, N.; Fablet, R.; Passat, N.; Rousseau, F. Simultaneous Super-Resolution and Segmentation Using a Generative Adversarial Network: Application to Neonatal Brain MRI. Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019); Venice, Italy, 8–11 April 2019; pp. 991-994. [DOI: https://dx.doi.org/10.1109/isbi.2019.8759255]
84. Decourt, C.; Duong, L. Semi-supervised generative adversarial networks for the segmentation of the left ventricle in pediatric MRI. Comput. Biol. Med.; 2020; 123, 103884. [DOI: https://dx.doi.org/10.1016/j.compbiomed.2020.103884]
85. Guo, L.; Hu, Y.; Lei, B.; Du, J.; Mao, M.; Jin, Z.; Xia, B.; Wang, T. Dual Network Generative Adversarial Networks for Pediatric Echocardiography Segmentation. Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019); Shenzhen, China, 13–17 October 2019; pp. 113-122. [DOI: https://dx.doi.org/10.1007/978-3-030-32875-7_13]
86. Kan, C.N.E.; Gilat-Schmidt, T.; Ye, D.H. Enhancing reproductive organ segmentation in pediatric CT via adversarial learning. Proceedings of the International Society for Optics and Photonics Medical Imaging 2021 (SPIE Medical Imaging 2021); San Diego, CA, USA, 15–20 February 2021; pp. 1-6. [DOI: https://dx.doi.org/10.1117/12.2582127]
87. Karimi-Bidhendi, S.; Arafati, A.; Cheng, A.L.; Wu, Y.; Kheradvar, A.; Jafarkhani, H. Fully-automated deep-learning segmentation of pediatric cardiovascular magnetic resonance of patients with complex congenital heart diseases. J. Cardiovasc. Magn. Reson.; 2020; 22, 80. [DOI: https://dx.doi.org/10.1186/s12968-020-00678-0]
88. Zhou, Y.; Rakkunedeth, A.; Keen, C.; Knight, J.; Jaremko, J.L. Wrist Ultrasound Segmentation by Deep Learning. Proceedings of the 20th International Conference on Artificial Intelligence in Medicine (AIME 2022); Halifax, NS, Canada, 14–17 June 2022; pp. 230-237. [DOI: https://dx.doi.org/10.1007/978-3-031-09342-5_22]
89. Banerjee, T.; Batta, D.; Jain, A.; Karthikeyan, S.; Mehndiratta, H.; Kishan, K.H. Deep Belief Convolutional Neural Network with Artificial Image Creation by GANs Based Diagnosis of Pneumonia in Radiological Samples of the Pectoralis Major. Proceedings of the 8th International Conference on Electrical and Electronics Engineering (ICEEE 2021); New Delhi, India, 9–11 April 2021; pp. 979-1002. [DOI: https://dx.doi.org/10.1007/978-981-16-0749-3_75]
90. Diller, G.-P.; Vahle, J.; Radke, R.; Vidal, M.L.B.; Fischer, A.J.; Bauer, U.M.M.; Sarikouch, S.; Berger, F.; Beerbaum, P.; Baumgartner, H. et al. Utility of deep learning networks for the generation of artificial cardiac magnetic resonance images in congenital heart disease. BMC Med. Imaging; 2020; 20, 113. [DOI: https://dx.doi.org/10.1186/s12880-020-00511-1]
91. Guo, Z.; Zheng, L.; Ye, L.; Pan, S.; Yan, T. Data Augmentation Using Auxiliary Classifier Generative Adversarial Networks. Proceedings of the 17th Chinese Intelligent Systems Conference; Fuzhou, China, 16–17 October 2021; pp. 790-800. [DOI: https://dx.doi.org/10.1007/978-981-16-6328-4_79]
92. Guo, Z.-Z.; Zheng, L.-X.; Huang, D.-T.; Yan, T.; Su, Q.-L. RS-FFGAN: Generative adversarial network based on real sample feature fusion for pediatric CXR image data enhancement. J. Radiat. Res. Appl. Sci.; 2022; 15, 100461. [DOI: https://dx.doi.org/10.1016/j.jrras.2022.100461]
93. Kan, C.N.E.; Maheenaboobacker, N.; Ye, D.H. Age-Conditioned Synthesis of Pediatric Computed Tomography with Auxiliary Classifier Generative Adversarial Networks. Proceedings of the 17th IEEE International Symposium on Biomedical Imaging (ISBI 2020); Iowa City, IA, USA, 3–7 April 2020; pp. 109-112. [DOI: https://dx.doi.org/10.1109/isbi45749.2020.9098623]
94. Khalifa, N.E.M.; Taha, M.H.N.; Hassanien, A.E.; Elghamrawy, S. Detection of Coronavirus (COVID-19) Associated Pneumonia Based on Generative Adversarial Networks and a Fine-Tuned Deep Transfer Learning Model Using Chest X-ray Dataset. Proceedings of the 8th International Conference on Advanced Intelligent Systems and Informatics 2022 (AISI 2022); Cairo, Egypt, 20–22 November 2022; pp. 234-247. [DOI: https://dx.doi.org/10.1007/978-3-031-20601-6_22]
95. Venu, S.K. Improving the Generalization of Deep Learning Classification Models in Medical Imaging Using Transfer Learning and Generative Adversarial Networks. Proceedings of the 13th International Conference on Agents and Artificial Intelligence (ICAART 2021); Setúbal, Portugal, 4–6 February 2021; pp. 218-235. [DOI: https://dx.doi.org/10.1007/978-3-031-10161-8_12]
96. Li, X.; Ke, Y. Privacy Preserving and Communication Efficient Information Enhancement for Imbalanced Medical Image Classification. Proceedings of the Medical Image Understanding and Analysis—26th Annual Conference (MIUA 2022); Cambridge, UK, 27–29 July 2022; pp. 663-679. [DOI: https://dx.doi.org/10.1007/978-3-031-12053-4_49]
97. Prince, E.W.; Whelan, R.; Mirsky, D.M.; Stence, N.; Staulcup, S.; Klimo, P.; Anderson, R.C.E.; Niazi, T.N.; Grant, G.; Souweidane, M. et al. Robust deep learning classification of adamantinomatous craniopharyngioma from limited preoperative radiographic images. Sci. Rep.; 2020; 10, 16885. [DOI: https://dx.doi.org/10.1038/s41598-020-73278-8]
98. Su, L.; Fu, X.; Hu, Q. Generative adversarial network based data augmentation and gender-last training strategy with application to bone age assessment. Comput. Methods Programs Biomed.; 2021; 212, 106456. [DOI: https://dx.doi.org/10.1016/j.cmpb.2021.106456] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34656013]
99. Szepesi, P.; Szilágyi, L. Detection of pneumonia using convolutional neural networks and deep learning. Biocybern. Biomed. Eng.; 2022; 42, pp. 1012-1022. [DOI: https://dx.doi.org/10.1016/j.bbe.2022.08.001]
100. Vetrimani, E.; Arulselvi, M.; Ramesh, G. Building convolutional neural network parameters using genetic algorithm for the croup cough classification problem. Meas. Sens.; 2023; 27, 100717. [DOI: https://dx.doi.org/10.1016/j.measen.2023.100717]
101. Chen, J.; Sun, Y.; Fang, Z.; Lin, W.; Li, G.; Wang, L.; UNC/UMN Baby Connectome Project Consortium. Harmonized neonatal brain MR image segmentation model for cross-site datasets. Biomed. Signal Process. Control; 2021; 69, 102810. [DOI: https://dx.doi.org/10.1016/j.bspc.2021.102810] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35967125]
102. Hržić, F.; Žužić, I.; Tschauner, S.; Štajduhar, I. Cast suppression in radiographs by generative adversarial networks. J. Am. Med. Inform. Assoc.; 2021; 28, pp. 2687-2694. [DOI: https://dx.doi.org/10.1093/jamia/ocab192]
103. Kaplan, S.; Perrone, A.; Alexopoulos, D.; Kenley, J.K.; Barch, D.M.; Buss, C.; Elison, J.T.; Graham, A.M.; Neil, J.J.; O’Connor, T.G. et al. Synthesizing pseudo-T2w images to recapture missing data in neonatal neuroimaging with applications in rs-fMRI. Neuroimage; 2022; 253, 119091. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2022.119091]
104. Khalili, N.; Turk, E.; Zreik, M.; Viergever, M.A.; Benders, M.J.N.L.; Išgum, I. Generative Adversarial Network for Segmentation of Motion Affected Neonatal Brain MRI. Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019); Shenzhen, China, 13–17 October 2019; pp. 320-328. [DOI: https://dx.doi.org/10.1007/978-3-030-32248-9_36]
105. Maspero, M.; Bentvelzen, L.G.; Savenije, M.H.; Guerreiro, F.; Seravalli, E.; Janssens, G.O.; Berg, C.A.v.D.; Philippens, M.E. Deep learning-based synthetic CT generation for paediatric brain MR-only photon and proton radiotherapy. Radiother. Oncol.; 2020; 153, pp. 197-204. [DOI: https://dx.doi.org/10.1016/j.radonc.2020.09.029]
106. Peng, L.; Lin, L.; Lin, Y.; Zhang, Y.; Vlasova, R.M.; Prieto, J.; Chen, Y.-W.; Gerig, G.; Styner, M. Multi-modal Perceptual Adversarial Learning for Longitudinal Prediction of Infant MR Images. Proceedings of the First International Workshop on Advances in Simplifying Medical UltraSound (ASMUS 2020) and the 5th International Workshop on Perinatal, Preterm and Paediatric Image Analysis (PIPPI 2020); Lima, Peru, 4–8 October 2020; pp. 284-294. [DOI: https://dx.doi.org/10.1007/978-3-030-60334-2_28]
107. Tang, Y.; Tang, Y.; Sandfort, V.; Xiao, J.; Summers, R.M. TUNA-Net: Task-Oriented Unsupervised Adversarial Network for Disease Recognition in Cross-domain Chest X-rays. Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019); Shenzhen, China, 13–17 October 2019; pp. 431-440. [DOI: https://dx.doi.org/10.1007/978-3-030-32226-7_48]
108. Tor-Diez, C.; Porras, A.R.; Packer, R.J.; Avery, R.A.; Linguraru, M.G. Unsupervised MRI Homogenization: Application to Pediatric Anterior Visual Pathway Segmentation. Proceedings of the 11th International Workshop on Machine Learning in Medical Imaging (MLMI 2020); Lima, Peru, 4 October 2020; pp. 180-188. [DOI: https://dx.doi.org/10.1007/978-3-030-59861-7_19]
109. Wang, C.; Uh, J.; Merchant, D.T.E.; Hua, C.-H.; Acharya, S. Facilitating MR-Guided Adaptive Proton Therapy in Children Using Deep Learning-Based Synthetic CT. Int. J. Part. Ther.; 2021; 8, pp. 11-20. [DOI: https://dx.doi.org/10.14338/IJPT-20-00099.1]
110. Wang, C.; Uh, J.; He, X.; Hua, C.-H.; Sahaja, A. Transfer learning-based synthetic CT generation for MR-only proton therapy planning in children with pelvic sarcomas. Proceedings of the Medical Imaging 2021: Physics of Medical Imaging; San Diego, CA, USA, 15–20 February 2021; [DOI: https://dx.doi.org/10.1117/12.2579767]
111. Wang, C.; Uh, J.; Patni, T.; Do, T.M.; Li, Y.; Hua, C.; Acharya, S. Toward MR-only proton therapy planning for pediatric brain tumors: Synthesis of relative proton stopping power images with multiple sequence MRI and development of an online quality assurance tool. Med. Phys.; 2022; 49, pp. 1559-1570. [DOI: https://dx.doi.org/10.1002/mp.15479]
112. Zhao, F.; Wu, Z.; Wang, L.; Lin, W.; Xia, S.; Shen, D.; Li, G.; UNC/UMN Baby Connectome Project Consortium. Harmonization of infant cortical thickness using surface-to-surface cycle-consistent adversarial networks. Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019); Shenzhen, China, 13–17 October 2019; pp. 475-483. [DOI: https://dx.doi.org/10.1007/978-3-030-32251-9_52]
113. Mostapha, M.; Prieto, J.; Murphy, V.; Girault, J.; Foster, M.; Rumple, A.; Blocher, J.; Lin, W.; Elison, J.; Gilmore, J. et al. Semi-supervised VAE-GAN for Out-of-Sample Detection Applied to MRI Quality Control. Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019); Shenzhen, China, 13–17 October 2019; pp. 127-136. [DOI: https://dx.doi.org/10.1007/978-3-030-32248-9_15]
114. Pediatric X-ray Imaging. Available online: https://www.fda.gov/radiation-emitting-products/medical-imaging/pediatric-x-ray-imaging (accessed on 20 July 2023).
115. Kozak, B.M.; Jaimes, C.; Kirsch, J.; Gee, M.S. MRI Techniques to Decrease Imaging Times in Children. RadioGraphics; 2020; 40, pp. 485-502. [DOI: https://dx.doi.org/10.1148/rg.2020190112]
116. Li, X.; Cokkinos, D.; Gadani, S.; Rafailidis, V.; Aschwanden, M.; Levitin, A.; Szaflarski, D.; Kirksey, L.; Staub, D.; Partovi, S. Advanced ultrasound techniques in arterial diseases. Int. J. Cardiovasc. Imaging; 2022; 38, pp. 1711-1721. [DOI: https://dx.doi.org/10.1007/s10554-022-02558-3]
117. Chaudhari, A.S.; Mittra, E.; Davidzon, G.A.; Gulaka, P.; Gandhi, H.; Brown, A.; Zhang, T.; Srinivas, S.; Gong, E.; Zaharchuk, G. et al. Low-count whole-body PET with deep learning in a multicenter and externally validated study. NPJ Digit. Med.; 2021; 4, 127. [DOI: https://dx.doi.org/10.1038/s41746-021-00497-2]
118. Liang, G.; Zheng, L. A transfer learning method with deep residual network for pediatric pneumonia diagnosis. Comput. Methods Programs Biomed.; 2020; 187, 104964. [DOI: https://dx.doi.org/10.1016/j.cmpb.2019.06.023]
119. Behzadi-Khormouji, H.; Rostami, H.; Salehi, S.; Derakhshande-Rishehri, T.; Masoumi, M.; Salemi, S.; Keshavarz, A.; Gholamrezanezhad, A.; Assadi, M.; Batouli, A. Deep learning, reusable and problem-based architectures for detection of consolidation on chest X-ray images. Comput. Methods Programs Biomed.; 2020; 185, 105162. [DOI: https://dx.doi.org/10.1016/j.cmpb.2019.105162]
120. Liu, X.; Faes, L.; Kale, A.U.; Wagner, S.K.; Fu, D.J.; Bruynseels, A.; Mahendiran, T.; Moraes, G.; Shamdas, M.; Kern, C. et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. Lancet Digit. Health; 2019; 1, pp. e271-e297. [DOI: https://dx.doi.org/10.1016/S2589-7500(19)30123-2] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33323251]
121. Sun, Z.; Ng, C.K.C.; Dos Reis, C.S. Synchrotron radiation computed tomography versus conventional computed tomography for assessment of four types of stent grafts used for endovascular treatment of thoracic and abdominal aortic aneurysms. Quant. Imaging Med. Surg.; 2018; 8, pp. 609-620. [DOI: https://dx.doi.org/10.21037/qims.2018.07.05] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30140623]
122. Almutairi, A.M.; Sun, Z.; Ng, C.; Al-Safran, Z.A.; Al-Mulla, A.A.; Al-Jamaan, A.I. Optimal scanning protocols of 64-slice CT angiography in coronary artery stents: An in vitro phantom study. Eur. J. Radiol.; 2010; 74, pp. 156-160. [DOI: https://dx.doi.org/10.1016/j.ejrad.2009.01.027]
123. Sun, Z.; Ng, C.K.C. Use of Synchrotron Radiation to Accurately Assess Cross-Sectional Area Reduction of the Aortic Branch Ostia Caused by Suprarenal Stent Wires. J. Endovasc. Ther.; 2017; 24, pp. 870-879. [DOI: https://dx.doi.org/10.1177/1526602817732315] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28922970]
124. Harris, M.; Qi, A.; Jeagal, L.; Torabi, N.; Menzies, D.; Korobitsyn, A.; Pai, M.; Nathavitharana, R.R.; Khan, F.A. A systematic review of the diagnostic accuracy of artificial intelligence-based computer programs to analyze chest X-rays for pulmonary tuberculosis. PLoS ONE; 2019; 14, e0221339. [DOI: https://dx.doi.org/10.1371/journal.pone.0221339]
125. Groen, A.M.; Kraan, R.; Amirkhan, S.F.; Daams, J.G.; Maas, M. A systematic review on the use of explainability in deep learning systems for computer aided diagnosis in radiology: Limited use of explainable AI?. Eur. J. Radiol.; 2022; 157, 110592. [DOI: https://dx.doi.org/10.1016/j.ejrad.2022.110592] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36371947]
126. Zebari, D.A.; Ibrahim, D.A.; Zeebaree, D.Q.; Haron, H.; Salih, M.S.; Damaševičius, R.; Mohammed, M.A. Systematic Review of Computing Approaches for Breast Cancer Detection Based Computer Aided Diagnosis Using Mammogram Images. Appl. Artif. Intell.; 2021; 35, pp. 2157-2203. [DOI: https://dx.doi.org/10.1080/08839514.2021.2001177]
© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Generative artificial intelligence, especially the generative adversarial network (GAN), is an important research area in radiology, as evidenced by the number of literature reviews on the role of the GAN in radiology published in the last few years. However, no review article about the GAN in pediatric radiology has been published yet. The purpose of this paper is to systematically review the applications of the GAN in pediatric radiology, their performances, and the methods used to evaluate those performances. Electronic databases were searched on 6 April 2023. Thirty-seven papers met the selection criteria and were included. This review reveals that the GAN can be applied to magnetic resonance imaging, X-ray, computed tomography, ultrasound, and positron emission tomography for image translation, segmentation, reconstruction, quality assessment, synthesis and data augmentation, and disease diagnosis. About 80% of the included studies compared their GAN model performances with those of other approaches and reported that their GAN models outperformed the alternatives by 0.1–158.6%. Nevertheless, these findings should be interpreted with caution because of a number of methodological weaknesses. Future GAN studies will need more robust methods to address these issues; otherwise, the clinical adoption of GAN-based applications in pediatric radiology will be hindered and the potential advantages of the GAN will not be widely realized.
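For readers unfamiliar with the adversarial training scheme underlying the GAN models surveyed in this review, the following minimal sketch illustrates the alternating generator/discriminator updates of a Vanilla GAN. It is a simplified illustration only, assuming PyTorch and arbitrary layer sizes, image dimensions, and hyperparameters; it does not reproduce the implementation of any study included in this review.

# Minimal sketch of Vanilla GAN training in PyTorch.
# All shapes and hyperparameters below are illustrative assumptions,
# not taken from any study included in this review.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # hypothetical latent and image sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, img_dim), nn.Tanh(),  # fake image scaled to [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),  # probability that the input is real
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real from generated images.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()  # freeze the generator here
    loss_d = (criterion(discriminator(real_images), real_labels)
              + criterion(discriminator(fake_images), fake_labels))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator into outputting "real".
    noise = torch.randn(batch, latent_dim)
    loss_g = criterion(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Usage with random stand-in data (a real study would load imaging data):
train_step(torch.rand(16, img_dim) * 2 - 1)

Training ends, in principle, when the discriminator can no longer distinguish the generator's outputs from real images. In practice, the studies included in this review replace these fully connected networks with convolutional architectures and task-specific GAN variants (e.g., auxiliary classifier, cycle-consistent, or super-resolution GANs) suited to the imaging modality and application.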
1 Curtin Medical School, Curtin University, GPO Box U1987, Perth, WA 6845, Australia;