Introduction
Artificial intelligence (AI) has rapidly found use in medical imaging [1, 2]. Through the development and validation of AI-based algorithms, new commercially available software systems have emerged, such as those for chest radiographs, mammography, or bone age radiographs [3–7]. Beyond validating the performance of each AI-based solution, recent efforts have focused on proving the clinical efficacy of AI within actual medical processes [1, 8, 9].
Several studies have reported positive feedback from radiologists and residents concerning the adaptation of AI in clinical practice [10–14]. They found that most radiologists agreed that more research on and application of AI are necessary. Other surveys have likewise highlighted positive attitudes towards AI use among clinicians [15, 16]. However, in contrast to the high expectations that accompanied the first introduction of AI, the number of radiologists who actually use AI for daily imaging interpretation remains small [13]. It is therefore important to understand what doctors really experience when incorporating AI-based methods into radiograph interpretation in order to determine the future direction of AI use in medicine.
Recently, AI-based lesion detection algorithms for chest radiographs were introduced and approved based on acceptable diagnostic performance for various diseases, such as pneumonia, tuberculosis, and lung nodules [7, 17–21]. However, actual clinical application of this commercially available, approved software remains limited [2, 8]. A recent consensus among thoracic radiologists on the utilization of AI-based medical devices found that AI could assist the interpretation work of radiologists and support the decision-making process of clinicians when radiologists are not available [4]. They concluded that the effective and convenient placement of AI-based devices in clinical environments is essential to maximize the merits of AI in medicine [4]. As our hospital has applied an AI-based device to all chest radiographs since March 2020 [20, 22], we wanted to know what doctors thought about the actual integration of AI into daily practice. Therefore, we performed a hospital-wide survey to document how the AI-based device has affected the clinical process and what impressions clinicians and radiologists formed after using it in real situations.
The purpose of this study was to understand the effect of clinically applying AI regularly to daily chest radiographs through a hospital-wide survey of clinicians and radiologists.
Materials and methods
Subjects
The Institutional Review Board (IRB) of Yongin Severance Hospital approved this prospective study (IRB number 9-2021-0073). All doctors in our hospital received an e-mail containing a link to the online survey in July 2021, and the study participants were those who freely decided to take the survey anonymously within 2 weeks of receiving the e-mail. No sample size calculation or sampling was performed in this survey study; we included all doctors who were willing to participate voluntarily. Written informed consent was obtained from participants before they began answering the 25 questions of the online survey, in accordance with the guidance of the IRB of our hospital. The surveys were administered in a web-based format (SurveyMonkey.com). All study methods followed the Consensus-based Checklist for Reporting of Survey Studies guidelines. All participants were clinicians or radiologists who worked in our hospital during 2020 or 2021. The complete question forms are provided in S1 File. Participants answered questions concerning their clinical experience with AI-based lesion detection software for chest radiographs. Questions were designed to gather information on basic demographics, experience with AI, actual individual utilization of AI, and preferences and attitudes toward AI-based software after actual usage. The questionnaire consisted of single-choice, multiple-choice, and scale-bar questions, with no open-ended questions. The composition of the questionnaire and the answer formats were constructed and validated under the guidance of a statistical expert so that results could be analyzed as objectively as possible, mitigating the inherent subjectivity of a survey study.
Use of AI-based lesion detection software for chest radiographs
In our general hospital, commercially available AI-based lesion detection software (Lunit INSIGHT CXR, versions 2 and 3, Lunit Inc., Korea) has been run on all chest radiographs from patients over 18 years old since March 2020. This ResNet34-based software was developed and approved for adult chest radiographs in the anteroposterior and posteroanterior views [23]. Detailed information about the integration process of AI for chest radiographs was described in a recent study [20]. In our hospital, version 2 of the software was used from March 2020 to February 2021 and could detect three types of lesions (nodule, consolidation, and pneumothorax) (Fig 1A). From March 2021, version 3 was used for all chest radiographs and could detect nine types of lesions (nodule, consolidation, pneumothorax, pneumoperitoneum, fibrosis, atelectasis, cardiomegaly, calcification, and pleural effusion). Version 3 was an upgrade of the same software that retained the three lesion types detected by version 2, and the transition between versions was managed by the radiology department of our hospital, so the users of versions 2 and 3 were the same. In version 3, the abbreviation and abnormality score of each lesion are displayed together with an additional grayscale heatmap at the lesion location (Fig 1B).
[Figure omitted. See PDF.]
Images are results analyzed with (A) version 2 and (B) version 3 of the AI-based lesion detection software. (A) Version 2 can detect and display three types of lesions (consolidation, nodule, and pneumothorax) with a color heatmap and a total abnormality score. (B) Version 3 can detect and display nine types of lesions (six lesion types in addition to the three detected by version 2) with a grayscale heatmap and an abnormality score for each lesion. Note the right pleural effusion that was additionally detected and displayed with version 3 of the software.
The workflow for utilizing the AI system within our picture archiving and communication system (PACS) is shown in Fig 2. As soon as the chest radiographs were verified by the radiographers, the images were automatically sent to the AI processing server. PACS automatically retrieved the AI results and made them accessible on the radiologists’ or clinicians’ workstations. The PACS viewer software (Zetta PACS, Taeyoung soft Co. Ltd., Korea) presented the total abnormality score on the worklist with several display options for the results. Contour maps were attached as separate captured images following the original radiographs. An abnormality score of 0.15 was used as the cutoff for visualization on the contour maps for each lesion, according to the vendor’s guidelines and other studies [23–26]. Through this process, doctors could access the analyzed images simply by scrolling down from the original radiographs whenever they wished to consult the AI results. The participants of this survey therefore answered questions based on their own experience using the AI-based software in daily practice since March 2020.
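To make the display step concrete, the following is a minimal sketch, assuming per-lesion results are available as a simple score dictionary, of how findings could be filtered against the 0.15 abnormality-score cutoff before contour overlays are shown; the function and field names are illustrative assumptions, not the actual API of the AI software or the PACS.

```python
# Hypothetical sketch of the result-filtering step described above.
# The 0.15 cutoff follows the vendor's guideline cited in the text;
# all names and conventions here are illustrative assumptions.

VISUALIZATION_CUTOFF = 0.15  # abnormality-score threshold for contour display


def select_lesions_for_display(ai_result: dict, cutoff: float = VISUALIZATION_CUTOFF) -> dict:
    """Return only the lesion findings whose abnormality score meets the cutoff.

    `ai_result` is assumed to map lesion abbreviations (e.g., "Ndl", "Csn")
    to abnormality scores between 0 and 1.
    """
    return {lesion: score for lesion, score in ai_result.items() if score >= cutoff}


if __name__ == "__main__":
    # Example per-lesion scores for one radiograph (synthetic values).
    result = {"Ndl": 0.42, "Csn": 0.08, "Ptx": 0.91, "PEf": 0.14}
    displayed = select_lesions_for_display(result)
    total_abnormality = max(result.values())  # one plausible convention; an assumption here
    print(f"Lesions shown on the contour map: {displayed}")
    print(f"Abnormality score shown on the worklist: {total_abnormality:.2f}")
```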
[Figure omitted. See PDF.]
Statistical analysis
SAS software version 9.4 (SAS Institute Inc., Cary, NC, USA) was used for the statistical analysis. Data are presented as means with standard deviations and medians with interquartile ranges (Q1, Q3). The two-sample t-test or Fisher’s exact test was used for comparisons between groups. The paired t-test was used for comparisons within groups, and the Wilcoxon rank-sum test was used for non-parametric comparisons between groups. P-values less than 0.05 were considered statistically significant.
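Although the analysis was performed in SAS, the sketch below illustrates the same four tests in Python with scipy.stats for readers who wish to reproduce comparable analyses; all data shown are synthetic placeholders and are not taken from the study.

```python
# Illustrative equivalents of the tests named above (the study itself used SAS 9.4).
# All data are synthetic placeholders, not survey results.
import numpy as np
from scipy import stats

clinicians = np.array([45.0, 50.2, 38.7, 60.1, 42.3])    # e.g., % of chest radiographs in daily practice
radiologists = np.array([18.5, 22.1, 15.0, 20.4, 19.9])

# Two-sample t-test for a continuous variable between groups
t_stat, p_ttest = stats.ttest_ind(clinicians, radiologists)

# Fisher's exact test for a 2x2 table (e.g., prior AI education: yes/no by group)
table = [[21, 55], [14, 1]]
odds_ratio, p_fisher = stats.fisher_exact(table)

# Paired t-test for a within-group change (e.g., AI experience before vs. after March 2020)
before = np.array([20.0, 35.5, 10.0, 42.0, 25.0])
after = np.array([40.0, 55.0, 30.5, 60.0, 48.0])
t_paired, p_paired = stats.ttest_rel(before, after)

# Wilcoxon rank-sum test for a non-parametric between-group comparison
z_stat, p_ranksum = stats.ranksums(clinicians, radiologists)

print(f"two-sample t-test p={p_ttest:.3f}, Fisher p={p_fisher:.3f}, "
      f"paired t-test p={p_paired:.3f}, rank-sum p={p_ranksum:.3f}")
```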
Results
Demographics
Among 194 doctors in our hospital, a total of 123 (63.4%) answered the survey, and 91 (clinicians:radiologists = 78:13) completed it in full. The mean proportion of completed questions among all given questions was 79%, and the survey took an average of 7 minutes 48 seconds to complete. Basic participant demographics are summarized in Table 1. Most doctors in our hospital were board-certified staff because our general hospital had newly opened in March 2020. Apart from several interns and residents in family medicine, there were no trainees in most departments, including radiology, during the study period. Accordingly, board-certified doctors accounted for 83.7% of the survey participants, and the remaining 16.3% were residents or interns. The subspecialties of the participants are displayed in Fig 3. Among the radiologists, there was only one thoracic radiologist. However, in our hospital, all chest radiographs are interpreted by board-certified radiologists regardless of their subspecialty because the number of chest radiographs that need to be interpreted is high. Radiologists are expected to read a minimum of 500 radiographs every month, and the mean number of adult chest radiographs obtained per month from March 2020 to July 2021 was 6,849. The proportion of doctors with previous experience of AI-related education was significantly smaller among clinicians than radiologists (27.6% vs. 93.3%, p<0.001). The proportion of doctors with experience of AI-related research was also smaller among clinicians (21% vs. 46.7%, p = 0.049). When asked about overall personal experience with AI (Table 2), both clinicians and radiologists reported significant increases in experience after March 2020 compared to before (all, p<0.005), and this increase was more pronounced among radiologists (38% vs. 54.6%, p = 0.01).
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
Utilization of AI for chest radiographs
As shown in Table 2, the proportion of chest radiographs among all imaging studies utilized in daily practice was 45.2% for clinicians and 18.8% for radiologists (p = 0.003). However, the proportion of chest radiographs for which doctors utilized AI results in a day was significantly higher among radiologists than clinicians (82.5% vs. 45.9%, p = 0.008). When participants were asked to pick the location where they thought the AI results were put to best use, they answered the emergency room (ER), outpatient unit, inpatient unit, and intensive care unit (ICU), in descending order. When radiologists were asked to choose the most useful finding among the nine types of lesions assessed by AI, they answered pneumothorax, nodule, consolidation, atelectasis, pneumoperitoneum, pleural effusion, cardiomegaly, fibrosis, and calcification, in descending order (Fig 4). When asked to choose between versions 2 and 3, 83.3% of clinicians and 84.6% of radiologists preferred version 3 of the AI-based software. However, regarding the display method, clinicians (59%) preferred the color heatmap, while radiologists (69%) preferred the grayscale heatmap.
[Figure omitted. See PDF.]
Multiple-choice questions assessing AI experience
Participants were allowed to choose multiple answers for each question (Table 3). When asked why they referred to AI results, 74.7% of the participants answered that they did so in the belief that it would help lower the risk of missed diagnoses. In addition, 35.2% of the participants answered that it was easy to refer to the AI results on PACS, 27.5% said that checking the AI results had become routine, and 19.8% said they referred to the AI results for their relative accuracy.
[Figure omitted. See PDF.]
When asked which results they mainly referred to, 69.2% of the participants chose the location of the lesion displayed on the chest radiograph, 49.5% selected the total abnormality score of each radiograph, 46.2% chose the per-lesion abbreviations, and 25.3% selected the per-lesion abnormality scores. When asked what they regarded as the most useful advantage of incorporating AI, 65.9% selected lesion detection, 59.3% chose discrimination of normal from abnormal radiographs, 47.3% chose quicker decision making, 13.2% selected differential diagnosis of lesions, and 6.6% selected triage of reading.
When asked which functions or information they wanted developed for AI in the future, 59.3% of the participants chose a comparison function that would automatically compare images to previous radiographs, such as comparison of pleural effusion or pneumothorax. In addition, 53.8% selected increased diagnostic accuracy for lesion detection as well as an alarm system for urgent lesions on radiographs, 30.8% chose expanded use for types of radiographs other than chest imaging, 18.7% selected broadening the applicable age range to patients less than 18 years old, and 17.6% selected increasing the types of lesions detectable on chest radiographs.
Scale-bar questions assessing AI experience
To compare perceptions of AI before and after its use, participants were asked to subjectively choose percentages from 0 to 100% or from -50 to +50% using a sliding-bar function on the website (Table 4). Clinicians and radiologists reported that they had changed their own reading results after referring to the AI results in as many as 21% and 16% of cases, respectively (p = 0.727). Clinicians and radiologists rated their trust in AI at about 64.9% and 66.5%, respectively (p = 0.759).
[Figure omitted. See PDF.]
We used the -50 to +50% scale when answers could be directional, such as a negative or positive attitude, or a perceived decrease or increase following AI usage. When asked how the AI results affected reading times, clinicians and radiologists both said that they thought AI helped reduce reading times and the number of reading requests for chest radiographs. In the comparison of answers between clinicians and radiologists, the only significant difference was observed for the question concerning reading requests: radiologists thought that reading requests for chest radiographs were reduced more than clinicians did (-23.5% vs. -12.2%, p = 0.026). In addition, participants thought that diagnostic accuracy would increase after using AI, and they were more positive about AI-based medical devices after using them in clinical practice. Both clinicians and radiologists felt positive about the future usage of AI.
Comparison of staff and trainees among clinicians
The overall answers of trainees among clinicians are summarized in S2 File. There were no significant differences in perceptions of AI-based devices between staff and trainees among clinicians.
Discussion
Our study showed that about 46% of clinicians and 83% of radiologists referred to the analyzed results of the AI-based software in daily clinical practice for chest radiographs. The most common reason for using the software was to reduce missed diagnoses, and the second most common was the ease of accessing the AI results on PACS. Interestingly, 28% of participants answered that referring to the AI results had become routine during their readings of chest radiographs. This shows that presenting analyzed results with an efficient and user-friendly interface is critical for the successful adaptation of AI into the clinical process. It is notable that many doctors now routinely refer to AI in their everyday workflow, as this gives us a glimpse of what full adaptation of AI could mean for radiology in the future.
The most useful locations for utilizing the AI results were the ER and the outpatient unit. This indicates that the AI-based software was more useful in locations requiring urgent decisions that cannot wait for a radiologist’s reading. Among the various lesion types, urgent or important lesions were thought to benefit most from the AI results, with pneumothorax, nodule, and consolidation in decreasing order, although pneumoperitoneum was ranked lower. This result may have been influenced by the incidence of each detected lesion type when participants chose the most useful finding. Our results suggest that AI-based software can be adapted to chest radiographs effectively. This is important because chest radiographs are still one of the first imaging tools used to guide further treatment, and AI can help depict urgent conditions on chest radiographs even in situations where it is difficult to get an immediate reading from a radiologist who is already dealing with a large number of daily images [27]. This means AI can help clinicians and radiologists catch important diseases and can be adapted for critical diseases first and more effectively [27].
Concerning their experience with AI and the buildup of trust in its results, clinicians and radiologists said they had changed their own reading results after referring to AI in 21% and 16% of cases, respectively. Clinicians and radiologists rated their trust in the AI results at 64.9% and 66.5%, respectively. The AI-based software tended to reduce reading times and the number of reading requests for chest radiographs. Regarding perceptions of AI, radiologists thought that reading requests for chest radiographs were reduced more than clinicians did. This may be because, compared to the many clinicians who submit reading requests, a limited number of radiologists had to read the chest radiographs of the entire hospital, so radiologists may have felt the effect of AI on reducing reading requests more strongly. In addition, participants answered that using the software increased diagnostic accuracy and that they regarded AI in a more positive light after actual usage; they were also optimistic about the future use of AI. Another interesting point is that doctors seem to have truly accepted the adaptation of AI-based software for daily chest radiographs, based on the overall positive feedback collected through the survey responses. This study is meaningful because it gives a broad picture of the actual clinical effect of AI and how it is perceived through the eyes of a relatively large number of doctors from an entire hospital.
In a recent survey of trainees and fellows, including radiologists from two nations, 60% of the participants answered that AI would impact clinical practice in less than 5 years, especially for screening disease and reducing the time needed for monotonous work [15]. However, 80.9% of clinicians in the same survey answered that they did not have any actual experience with AI despite these high expectations [15]. AI-based methods have been developed and validated for various diseases, but mostly in a retrospective manner, and there are few conclusions based on external validation or genuine clinical use [3, 8, 19, 28–30]. In another large survey in Europe, the degree of AI-related knowledge was inversely related to fear of AI and influenced perceptions of AI [10]. Another survey by the same research group showed that limited knowledge and ethical issues could hinder the wide adaptation of AI in clinical practice [12]. Although implementation of AI is in its beginning stages, most studies showed a positive attitude toward AI in radiology and acceptance of its inevitable adaptation to medical imaging [11, 15, 23]. In a recent consensus statement, expert chest radiologists concluded that AI-based medical devices could help clinicians make decisions when radiologists are not promptly available and that AI could act as an assistant for radiologists [4]. Creating an effective clinical environment is a key factor for the successful adaptation of AI-based devices in medicine [4, 20, 22, 31]. Our study is meaningful because it demonstrates, from an entire hospital-wide perspective, how commercially available AI-based software has actually been implemented and integrated successfully in clinical practice. Chest radiographs are commonly performed and utilized by all subspecialties; therefore, integration of AI for chest radiographs could broadly impact clinical practice [24, 26, 32–34], and our survey demonstrated how doctors were affected after AI was integrated into the daily imaging process in March 2020. In addition, our study showed that the clinicians and radiologists of our institution regarded AI more favorably after working with the AI-based software.
Our study has several limitations. First, owing to our hospital’s characteristics, we could not compare actual experience before and after the adaptation of AI because our hospital has used this software since it first opened. In addition, we could not include radiology trainees because most departments of our hospital had none at that time, and doctors of relatively young age constituted a large proportion of the whole medical team (about 59.4% of participants were in their twenties or thirties) compared to other hospitals. The responses to the survey questions were based on each doctor’s personal experience before and after working at our hospital. To obtain consistent and objective results, we asked the survey participants to answer using mostly 0 to 100% or -50 to +50% scales for the dedicated questions. Second, the actual effect of AI on workflow and outcomes was not assessed quantitatively in this survey study. Given the nature of a survey study, the results were mainly based on participants’ perceptions of how accurate the AI was. However, because the period of AI use was not short and because very few surveys have covered doctors from different departments across an entire hospital, we believe this survey reflects the influence of AI on clinical practice and will be of interest to readers regarding the current status of AI in radiology. Because this survey was performed in 2021, doctors’ acceptance of and experience with AI may have changed over time. Further continuous studies demonstrating how AI changes actual workflows, doctors’ perceptions, and clinical outcomes are needed as experience accumulates. We are in the process of evaluating the effect of AI on diagnostic accuracy and reading time to substantiate these survey results objectively and hope to confirm our findings with quantitative results in the next step of our research.
Conclusions
The real adaptation of AI-based software for daily chest radiographs received overall positive feedback from clinicians and radiologists in this hospital-wide survey. Doctors preferred to use AI to reduce missed diagnoses, and the most useful location for utilizing AI was the ER. The survey participants thought that AI could help them catch important diseases and could be adapted for critical diseases effectively. The clinicians and radiologists regarded AI more favorably after actually working with the AI-based software in daily clinical practice.
Supporting information
S1 File. Survey questions for clinicians concerning their experience with AI-based lesion detection software for chest radiographs.
https://doi.org/10.1371/journal.pone.0282123.s001
(DOCX)
S2 File. Comparison of responses between staff and trainees among clinicians.
https://doi.org/10.1371/journal.pone.0282123.s002
(DOCX)
S1 Data. Data file.
https://doi.org/10.1371/journal.pone.0282123.s003
(XLSX)
Acknowledgments
The authors thank Medical Illustration & Design, part of the Medical Research Support Services of Yonsei University College of Medicine, for all artistic support related to this work. The authors also thank Jun Tae Kim for his dedicated help for researchers.
Citation: Shin HJ, Lee S, Kim S, Son N-H, Kim E-K (2023) Hospital-wide survey of clinical experience with artificial intelligence applied to daily chest radiographs. PLoS ONE 18(3): e0282123. https://doi.org/10.1371/journal.pone.0282123
About the Authors:
Hyun Joo Shin
Roles: Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing
Affiliations: Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea, Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea
Seungsoo Lee
Roles: Data curation, Investigation, Resources, Software, Writing – review & editing
Affiliation: Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea
Sungwon Kim
Roles: Data curation, Resources, Software
Affiliation: Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
Nak-Hoon Son
Roles: Data curation, Formal analysis, Investigation, Methodology, Project administration, Supervision, Writing – review & editing
Affiliation: Department of Statistics, Keimyung University, Daegu, Republic of Korea
ORCID: https://orcid.org/0000-0002-6192-8852
Eun-Kyung Kim
Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing
E-mail: [email protected]
Affiliations: Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea, Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea
ORCID: https://orcid.org/0000-0002-3368-5013
1. Choy G, Khalilzadeh O, Michalski M, Do S, Samir AE, Pianykh OS, et al. Current Applications and Future Impact of Machine Learning in Radiology. Radiology. 2018;288: 318–328. pmid:29944078
2. Kulkarni S, Seneviratne N, Baig MS, Khan AHA. Artificial Intelligence in Medicine: Where Are We Now? Acad Radiol. 2020;27: 62–70. pmid:31636002
3. Chassagnon G, Vakalopoulou M, Paragios N, Revel MP. Artificial intelligence applications for thoracic imaging. Eur J Radiol. 2020;123: 108774. pmid:31841881
4. Hwang EJ, Goo JM, Yoon SH, Beck KS, Seo JB, Choi BW, et al. Use of Artificial Intelligence-Based Software as Medical Devices for Chest Radiography: A Position Paper from the Korean Society of Thoracic Radiology. Korean J Radiol. 2021;22: 1743–1748. pmid:34564966
5. Lee SE, Han K, Kim EK. Application of artificial intelligence-based computer-assisted diagnosis on synthetic mammograms from breast tomosynthesis: comparison with digital mammograms. Eur Radiol. 2021. pmid:33710372
6. Hwang J, Yoon HM, Hwang JY, Kim PH, Bak B, Bae BU, et al. Re-Assessment of Applicability of Greulich and Pyle-Based Bone Age to Korean Children Using Manual and Deep Learning-Based Automated Method. Yonsei Med J. 2022;63: 683–691. pmid:35748080
7. Shin HJ, Son NH, Kim MJ, Kim EK. Diagnostic performance of artificial intelligence approved for adults for the interpretation of pediatric chest radiographs. Sci Rep. 2022;12: 10215. pmid:35715623
8. Tariq A, Purkayastha S, Padmanaban GP, Krupinski E, Trivedi H, Banerjee I, et al. Current Clinical Applications of Artificial Intelligence in Radiology and Their Best Supporting Evidence. J Am Coll Radiol. 2020;17: 1371–1381. pmid:33153541
9. van Leeuwen KG, de Rooij M, Schalekamp S, van Ginneken B, Rutten M. How does artificial intelligence in radiology improve efficiency and health outcomes? Pediatr Radiol. 2021. pmid:34117522
10. Huisman M, Ranschaert E, Parker W, Mastrodicasa D, Koci M, Pinto de Santos D, et al. An international survey on AI in radiology in 1,041 radiologists and radiology residents part 1: fear of replacement, knowledge, and attitude. Eur Radiol. 2021;31: 7058–7066. pmid:33744991
11. Coppola F, Faggioni L, Regge D, Giovagnoni A, Golfieri R, Bibbolino C, et al. Artificial intelligence: radiologists’ expectations and opinions gleaned from a nationwide online survey. Radiol Med. 2021;126: 63–71. pmid:32350797
12. Huisman M, Ranschaert E, Parker W, Mastrodicasa D, Koci M, Pinto de Santos D, et al. An international survey on AI in radiology in 1041 radiologists and radiology residents part 2: expectations, hurdles to implementation, and education. Eur Radiol. 2021: 1–10. pmid:33974148
13. Tajaldeen A, Alghamdi S. Evaluation of radiologist’s knowledge about the Artificial Intelligence in diagnostic radiology: a survey-based study. Acta Radiol Open. 2020;9: 2058460120945320. pmid:32821436
14. Eiroa D, Antolín A, Fernández Del Castillo Ascanio M, Pantoja Ortiz V, Escobar M, Roson N. The current state of knowledge on imaging informatics: a survey among Spanish radiologists. Insights Imaging. 2022;13: 34. pmid:35235068
15. Scheetz J, Rothschild P, McGuinness M, Hadoux X, Soyer HP, Janda M, et al. A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology. Sci Rep. 2021;11: 5193. pmid:33664367
16. Di Basilio F, Esposisto G, Monoscalco L, Giansanti D. The Artificial Intelligence in Digital Radiology: Part 2: Towards an Investigation of acceptance and consensus on the Insiders. Healthcare (Basel). 2022;10. pmid:35052316
17. Hwang EJ, Park CM. Clinical Implementation of Deep Learning in Thoracic Radiology: Potential Applications and Challenges. Korean J Radiol. 2020;21: 511–525. pmid:32323497
18. Hwang EJ, Park S, Jin KN, Kim JI, Choi SY, Lee JH, et al. Development and Validation of a Deep Learning-based Automatic Detection Algorithm for Active Pulmonary Tuberculosis on Chest Radiographs. Clin Infect Dis. 2019;69: 739–747. pmid:30418527
19. Sim Y, Chung MJ, Kotter E, Yune S, Kim M, Do S, et al. Deep Convolutional Neural Network-based Software Improves Radiologist Detection of Malignant Lung Nodules on Chest Radiographs. Radiology. 2020;294: 199–209. pmid:31714194
20. Lee S, Shin HJ, Kim S, Kim EK. Successful Implementation of an Artificial Intelligence-Based Computer-Aided Detection System for Chest Radiography in Daily Clinical Practice. Korean J Radiol. 2022. pmid:35762186
21. Nijiati M, Ma J, Hu C, Tuersun A, Abulizi A, Kelimu A, et al. Artificial Intelligence Assisting the Early Detection of Active Pulmonary Tuberculosis From Chest X-Rays: A Population-Based Study. Front Mol Biosci. 2022;9: 874475. pmid:35463963
22. Kim SJ, Roh JW, Kim S, Park JY, Choi D. Current State and Strategy for Establishing a Digitally Innovative Hospital: Memorial Review Article for Opening of Yongin Severance Hospital. Yonsei Med J. 2020;61: 647–651. pmid:32734727
23. Kim EY, Kim YJ, Choi WJ, Jeon JS, Kim MY, Oh DH, et al. Concordance rate of radiologists and a commercialized deep-learning solution for chest X-ray: Real-world experience with a multicenter health screening cohort. PLoS One. 2022;17: e0264383. pmid:35202417
24. Hwang EJ, Park J, Hong W, Lee HJ, Choi H, Kim H, et al. Artificial intelligence system for identification of false-negative interpretations in chest radiographs. Eur Radiol. 2022. pmid:35195744
25. Kim JH, Kim JY, Kim GH, Kang D, Kim IJ, Seo J, et al. Clinical Validation of a Deep Learning Algorithm for Detection of Pneumonia on Chest Radiographs in Emergency Department Patients with Acute Febrile Respiratory Illness. J Clin Med. 2020;9. pmid:32599874
26. Kim EY, Kim YJ, Choi WJ, Lee GP, Choi YR, Jin KN, et al. Performance of a deep-learning algorithm for referable thoracic abnormalities on chest radiographs: A multicenter study of a health screening cohort. PLoS One. 2021;16: e0246472. pmid:33606779
27. Mallio CA, Quattrocchi CC, Beomonte Zobel B, Parizel PM. Artificial intelligence, chest radiographs, and radiology trainees: a powerful combination to enhance the future of radiologists? Quant Imaging Med Surg. 2021;11: 2204–2207. pmid:33937001
28. Li MD, Little BP, Alkasab TK, Mendoza DP, Succi MD, Shepard JO, et al. Multi-Radiologist User Study for Artificial Intelligence-Guided Grading of COVID-19 Lung Disease Severity on Chest Radiographs. Acad Radiol. 2021;28: 572–576. pmid:33485773
29. Jiao Z, Choi JW, Halsey K, Tran TML, Hsieh B, Wang D, et al. Prognostication of patients with COVID-19 using artificial intelligence based on chest x-rays and clinical data: a retrospective study. Lancet Digit Health. 2021;3: e286–e294. pmid:33773969
30. Wichmann JL, Willemink MJ, De Cecco CN. Artificial Intelligence and Machine Learning in Radiology: Current State and Considerations for Routine Clinical Implementation. Invest Radiol. 2020;55: 619–627. pmid:32776769
31. Mongan J, Kalpathy-Cramer J, Flanders A, George Linguraru M. RSNA-MICCAI Panel Discussion: Machine Learning for Radiology from Challenges to Clinical Applications. Radiol Artif Intell. 2021;3: e210118. pmid:34617032
32. Jin KN, Kim EY, Kim YJ, Lee GP, Kim H, Oh S, et al. Diagnostic effect of artificial intelligence solution for referable thoracic abnormalities on chest radiography: a multicenter respiratory outpatient diagnostic cohort study. Eur Radiol. 2022. pmid:34973101
33. Yoo H, Lee SH, Arru CD, Doda Khera R, Singh R, Siebert S, et al. AI-based improvement in lung cancer detection on chest radiographs: results of a multi-reader study in NLST dataset. Eur Radiol. 2021. pmid:34089072
34. Sung J, Park S, Lee SM, Bae W, Park B, Jung E, et al. Added Value of Deep Learning-based Detection System for Multiple Major Findings on Chest Radiographs: A Randomized Crossover Study. Radiology. 2021;299: 450–459. pmid:33754828
© 2023 Shin et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract
Purpose
To assess experience with and perceptions of clinical application of artificial intelligence (AI) to chest radiographs among doctors in a single hospital.
Materials and methods
A hospital-wide online survey on the use of commercially available AI-based lesion detection software for chest radiographs was conducted among all clinicians and radiologists at our hospital in this prospective study. In our hospital, version 2 of the software, which could detect three types of lesions, was used from March 2020 to February 2021, and version 3, which could detect nine types of lesions, was used for chest radiographs from March 2021. The participants of this survey answered questions on their own experience using the AI-based software in daily practice. The questionnaire consisted of single-choice, multiple-choice, and scale-bar questions. Answers were compared between clinicians and radiologists using the paired t-test and the Wilcoxon rank-sum test.
Results
One hundred twenty-three doctors answered the survey, and 74% completed all questions. The proportion of chest radiographs for which AI results were utilized was higher among radiologists than clinicians (82.5% vs. 45.9%, p = 0.008). AI was perceived as being the most useful in the emergency room, and pneumothorax was considered the most valuable finding. Approximately 21% of clinicians and 16% of radiologists changed their own reading results after referring to AI, and trust levels for AI were 64.9% and 66.5%, respectively. Participants thought AI helped reduce reading times and reading requests. They answered that AI helped increase diagnostic accuracy and were more positive about AI after actual usage.
Conclusion
Actual adaptation of AI for daily chest radiographs received overall positive feedback from clinicians and radiologists in this hospital-wide survey. Participating doctors preferred to use AI and regarded it more favorably after actually working with the AI-based software in daily clinical practice.