Introduction
Recent years have seen substantial advances in gastrointestinal endoscopy.1 Properly applied, endoscopy provides real-time visualization of the gastrointestinal tract and contributes to the early detection of digestive diseases, particularly gastric cancer (GC).2,3 Early diagnosis and treatment can improve prognosis, reduce related adverse events, and save medical costs in the long run.4,5 Endoscopic quality is a prerequisite for high lesion detection rates and depends on the examination time, the operator's skill and experience in identifying lesions, and many other factors.6,7 Notably, a full inspection of the gastrointestinal tract is essential for effective lesion detection.8 Failure to map all gastric anatomical sites is associated with human factors such as inexperience, fatigue, and bias, which can be avoided. Therefore, greater effort should be made to depict the entire stomach and to standardize gastroscopy so that it leaves no blind spots.
Emerging evidence has highlighted the critical role of artificial intelligence (AI) in endoscopy.9 Abundant data, consisting of numerous images and videos, are generated during endoscopic procedures, which makes digestive diseases a priority area for AI applications. Rapid advances in computational processing and a proliferation of algorithms have enabled deep learning (DL)-based AI in endoscopy, requiring physicians to gain a deeper understanding of AI and how it may affect the medical field in the future. DL, a self-learning approach that does not depend on hand-crafted rules, extracts critical features and quantities by using the back-propagation algorithm to update the internal parameters of each neural network layer. The convolutional neural network (CNN), one type of DL model, has become an increasingly popular tool in endoscopy, including esophagogastroduodenoscopy (EGD) and colonoscopy. Recent studies have reported that CNN-based programs can automatically detect lesions and recognize different anatomical locations in captured images and video data. Aoki et al developed a CNN-based model and compared it with the existing QuickView mode for detecting various abnormalities in capsule endoscopy (CE) videos; the detection rates of the CNN-based system for mucosal breaks, angioectasia, protruding lesions, and blood content were 100%, 97%, 98%, and 100%, respectively, clearly higher than those of the QuickView mode.10 He et al proposed DL-based anatomical site classification methods for EGD images and compared the most commonly used CNN architectures, including ResNet-50, Inception-v3, VGG-11-bn, VGG-16-bn, and DenseNet-121; DenseNet-121 performed best, with an average overall accuracy of 88.11%.11 Current studies indicate that AI has aided the early detection of digestive diseases, sometimes even outperforming physicians' diagnoses.9,12 Visualization of the stomach can also be improved by employing AI systems.13 Endoscopists receive professional training to acquire practical skills, but such training programs mostly rely on traditional approaches and demand substantial manpower, materials, and time.14 Strikingly, the introduction of AI offers a standardized training method, upgrades the quality of endoscopic operation, and improves accuracy.15 Moreover, it can monitor and refine the entire operating process and relieve the workload of physicians.16,17 Notably, most published studies on AI and endoscopy focus on lesion identification, whereas limited attention has been paid to the potential of AI-based systems to recognize anatomical sites during endoscopy.
In this study, using a CNN, we aimed to construct a novel AI-based gastroscopy system called AIMED to analyze EGD images and distinguish gastric locations during endoscopic procedures.
Methods
Datasets Preparation
The patients considered in this study underwent endoscopic examinations at the Peking University Cancer Hospital between June 2020 and December 2021. The study was approved by the Ethics Committee of the Peking University Cancer Hospital on February 21, 2020 (ethics board protocol number 2020KT02), under clinical trial registration number NCT04384575 (12/05/2020). The following endoscopes were used: GIF-H290, GIF-HQ290, GIF-H260, and GIF-Q260 (Olympus, Japan) and EG-760Z, EG-760R, EG-L600ZW7, EG-L600WR7, and EG-580R7 (Fujifilm, Japan). All EGD images came from stored data available at the Peking University Cancer Hospital.
Two networks were trained in this study. The first network identifies EGD sites, whereas the second determines whether the endoscope is inside or outside the body.
Model 1: The images captured during gastroscopy were divided into 27 sites according to previously reported guidelines,8,18,19 yielding a total of 160,308 EGD images from the stored data of over 2000 patients for training the EGD-site classification network. Each category was randomly divided into a training set and a validation set at a ratio of 9:1, as shown in Table 1.
Model 2: A dataset containing 42,030 in vivo images and 22,305 in vitro images was used to train this network, which automatically identifies whether the endoscope is inside or outside the body and also calculates and records the endoscopic operation time. Representative images for this network are shown in online Supplementary Figure S1. The labelled images were randomly divided into a training set and a validation set at a ratio of 9:1.
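For illustration only, the per-category 9:1 split described above could be implemented as in the following Python sketch; the folder layout (one directory per labeled category) and all paths are assumptions rather than part of the actual pipeline.

import os
import random
import shutil

def split_dataset(src_root: str, dst_root: str, train_frac: float = 0.9, seed: int = 42):
    """Randomly split each category folder into training and validation subsets (9:1)."""
    random.seed(seed)
    for category in sorted(os.listdir(src_root)):
        images = sorted(os.listdir(os.path.join(src_root, category)))
        random.shuffle(images)
        n_train = int(len(images) * train_frac)
        for subset, names in (("train", images[:n_train]), ("val", images[n_train:])):
            out_dir = os.path.join(dst_root, subset, category)
            os.makedirs(out_dir, exist_ok=True)
            for name in names:
                shutil.copy(os.path.join(src_root, category, name), os.path.join(out_dir, name))

# split_dataset("egd_images_by_site", "egd_split")  # hypothetical folder names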
Because the data acquired during gastroscopy are typically dynamic video, motion blur often occurs, producing unclear images even of stationary parts, as shown in Figure 1: the left image shows a clear view of the esophagus at rest, while the right image shows a blurred esophagus. To address this, frames were extracted from the video, and both clear and blurred images of the same region were labeled for model training. This approach significantly enhances the model's sensitivity compared with training on clear images alone.
Figure 1 Clear esophagus at rest on the left and blurred esophagus on the right.
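As a minimal sketch of the frame-extraction step, the following Python/OpenCV snippet saves video frames for later manual labeling; the sampling interval and output naming are assumptions, and the clear/blurred labeling itself was performed by the annotators.

import cv2
import os

def extract_frames(video_path: str, out_dir: str, every_n: int = 5) -> int:
    """Save every n-th frame of an endoscopic video so that both clear and
    motion-blurred frames of the same site can later be labeled."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved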
Model Architecture
The Adam optimizer was adopted, with a learning rate of 0.02, an attenuation rate of 0.001, and a batch size of 32, and the model was trained for about 300 epochs. In addition, because the training samples were imbalanced across categories, a cross-entropy loss with category-specific weights was used: categories with few samples receive larger weights when the parameters are updated, whereas categories with many samples receive smaller weights. The weights were defined as follows: if, for example, there are three categories containing a, b, and c training samples, respectively, then the weight of category 1 is (a+b+c)/a, the weight of category 2 is (a+b+c)/b, and the weight of category 3 is (a+b+c)/c.
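The weighting rule and optimizer settings described above can be expressed in a short PyTorch sketch; the per-category counts and the placeholder network are illustrative, and the attenuation rate of 0.001 is interpreted here as weight decay.

import torch
import torch.nn as nn

# Per-category image counts in the training set (illustrative values for three categories).
counts = torch.tensor([1200.0, 450.0, 300.0])
weights = counts.sum() / counts                 # weight_i = (a + b + c) / count_i
criterion = nn.CrossEntropyLoss(weight=weights)

model = nn.Linear(128, len(counts))             # placeholder; the actual backbone is MobileNetV3-large
optimizer = torch.optim.Adam(model.parameters(), lr=0.02, weight_decay=0.001)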
The areas around the endoscopic view do not contribute to recognition and were removed. First, the image was converted to grayscale; second, it was binarized; third, the largest contour in the binary image was extracted; finally, the bounding rectangle of the largest contour was cropped and resized to 400 × 400 (Figure 2).
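A minimal OpenCV sketch of this cropping procedure is shown below; the binarization threshold is illustrative rather than the exact value used.

import cv2
import numpy as np

def crop_endoscopic_view(img: np.ndarray, size: int = 400) -> np.ndarray:
    """Remove the dark border around an EGD frame and resize the view to 400 x 400."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                    # 1. grayscale
    _, binary = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)     # 2. binarize (threshold is illustrative)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)                    # 3. largest contour
    x, y, w, h = cv2.boundingRect(largest)                          # 4. bounding rectangle of that contour
    return cv2.resize(img[y:y + h, x:x + w], (size, size))          # 5. crop and resize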
Figure 3 shows heat maps of the features extracted by the model, taking the dentate line as an example. Four pairs of images are shown, all with the true label "dentate line": in each pair, the left panel is the original image and the right panel is the heat map generated by the model. The maps indicate that the model learned the main features of the dentate line.
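The text does not specify how the heat maps were produced; one common approach, shown here only as an assumption-laden sketch, is Grad-CAM computed from the gradients flowing into a late convolutional feature map. For MobileNetV3-large, the last feature block (for example, model.features[-1]) would be a natural target layer.

import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    # Compute a Grad-CAM heat map for one image tensor of shape (1, 3, H, W).
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    logits = model(image)
    idx = int(logits.argmax(dim=1)) if class_idx is None else class_idx
    model.zero_grad()
    logits[0, idx].backward()
    h1.remove(); h2.remove()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)             # channel-wise importance
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))   # weighted sum of feature maps
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()            # normalized map in [0, 1]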
To improve speed while maintaining high accuracy, MobileNetV3-large was adopted as the backbone. Figure 4 shows the overall structure of the model; the output layer has K = 27 classes, corresponding to the 27 sites. Figure 5 shows the modifications made to the model: (a) inverted residual structures with a linear bottleneck (Bneck structure); (b) a squeeze-and-excitation layer (SE-layer) that learns the weights between channels; (c) a nonlocal module added after the feature map to capture global information and enlarge the model's receptive field; and (d) anti-aliasing, in which every MaxPooling (stride), Conv (stride), and AvgPool in the model was replaced by the corresponding BlurPooling to improve performance.
Figure 5 Characteristics included in the model. (a) The structure of Bneck: when the stride = 1, the block is shown on the left; when the stride = 2, the block is shown on the right; (b) squeeze-and-excitation layer; (c) nonlocal module; (d) use of anti-aliased pooling instead of the baseline.
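A simplified PyTorch sketch of the backbone and 27-way classification head is given below; the Bneck and SE structures are already part of torchvision's MobileNetV3-large, whereas the nonlocal module and BlurPooling replacements of Figure 5 are omitted here for brevity.

import torch.nn as nn
from torchvision.models import mobilenet_v3_large

K = 27                                             # number of EGD sites
model = mobilenet_v3_large(weights=None)           # Bneck blocks with SE layers are built in
in_features = model.classifier[-1].in_features
model.classifier[-1] = nn.Linear(in_features, K)   # replace the default head with a 27-way classifier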
Experiments were run on Ubuntu 20.04 with an Intel Core i5-10400F CPU (Intel, USA) and a GeForce RTX 3070 GPU (NVIDIA, USA).
Definition and Study Endpoints
Three experienced physicians (5–10 years of EGD experience) independently annotated each image, or each frame from an endoscopic video, as one of the 27 sites. To reduce the bias of any single physician, the final label for an image was accepted only when at least two physicians agreed on it. The study endpoints were the sensitivity, specificity, and accuracy of AIMED in classifying the EGD images into the 27 sites, where sensitivity = true positives/(true positives + false negatives); specificity = true negatives/(true negatives + false positives); and accuracy = (true positives + true negatives)/total number of cases.
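Given a 27 × 27 confusion matrix, the per-site metrics defined above can be computed as in the following sketch; one-vs-rest counting of true/false positives and negatives is assumed.

import numpy as np

def per_class_metrics(cm: np.ndarray):
    # cm[i, j]: number of images with true site i predicted as site j (27 x 27).
    total = cm.sum()
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp
    fp = cm.sum(axis=0) - tp
    tn = total - tp - fn - fp
    sensitivity = tp / (tp + fn)            # TP / (TP + FN)
    specificity = tn / (tn + fp)            # TN / (TN + FP)
    accuracy = (tp + tn) / total            # (TP + TN) / total cases
    return sensitivity, specificity, accuracy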
Statistical Analysis
The primary outcome measures were sensitivity, specificity, and accuracy. P values <0.05 were considered statistically significant. SPSS 22.0 (Chicago, IL, USA) was used for all statistical analyses.
Results
Performance of AIMED for Images
To evaluate AIMED's performance in distinguishing in vivo from in vitro frames for recording the endoscopic operation time, a test group of 6504 EGD images (4203 in vivo images and 2301 in vitro images) was used to calculate the sensitivity, specificity, and accuracy, which were 98.1%, 99.5%, and 98.9%, respectively.
Additionally, we tested 16,031 images to evaluate the performance of AIMED in identifying gastric sites, obtaining the accuracy, sensitivity, and specificity of every gastric location as well as the confusion matrix shown in Figure 6. Table 2 shows the detailed results. The average accuracy across the 27 EGD sites was 99.40%, with per-site accuracies ranging from 98.63% to 99.89%. AIMED's average sensitivity and specificity for EGD location recognition were 91.85% and 99.69%, ranging from 80.61% to 98.04% and from 99.10% to 99.97%, respectively.
Figure 6 Confusion matrix for classifying images into 27 sites. Element (x, y) in the matrix represents the number of images with true category (y) that were predicted as category (x), where the category numbers correspond to those shown in Table 1.
Including the preprocessing time, the total prediction time of the model was less than 17 ms; it could therefore process approximately 60 frames per second, meeting the needs of real-time clinical identification.
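For reference, a latency measurement of this kind could be sketched as follows; the untrained stand-in model, warm-up count, repetition count, and CPU-only timing are assumptions and differ from the GPU hardware actually used.

import time
import torch
from torchvision.models import mobilenet_v3_large

model = mobilenet_v3_large(weights=None).eval()   # stand-in for the trained AIMED network
dummy = torch.randn(1, 3, 400, 400)               # one preprocessed 400 x 400 frame
with torch.no_grad():
    for _ in range(10):                           # warm-up iterations
        model(dummy)
    start = time.perf_counter()
    for _ in range(100):
        model(dummy)
latency_ms = (time.perf_counter() - start) / 100 * 1000
print(f"mean prediction time: {latency_ms:.1f} ms")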
Discussion
GC is the third leading cause of cancer death worldwide.20,21 Gastrointestinal endoscopy has been identified as an important tool for cancer diagnosis and therapy, particularly for patients with early gastric cancer (EGC).20,21 The quality of gastroscopy is a prerequisite for a high detection rate of gastrointestinal lesions.22 Poor-quality endoscopy can lead to misdiagnosis, and patients may lose the optimal window for treatment. Therefore, managing the quality of gastrointestinal endoscopy is essential for improving early detection. The rapid development of AI-based systems is now bringing dramatic changes to traditional medical practice.14
A complete observation of the gastrointestinal tract is of great significance for effectively detecting lesions, as tumors may arise in any region of the tract.23 Quality assurance committees worldwide have issued recommendations for image documentation in gastrointestinal endoscopy to promote complete examination. Japanese standard guidelines require at least 22 images of the stomach,18 and the European Society of Gastrointestinal Endoscopy (ESGE) recommends capturing at least 10 pictures per procedure.8,19 These guidelines aim to obtain a full view of the upper gastrointestinal tract and thereby improve the quality of endoscopy. The AIMED system achieved high accuracy in recognizing gastric anatomical sites and could help the operator track the progress of the examination, including the regions already examined, the regions still unexamined, and the elapsed time. Its application could contribute to a more standardized endoscopy pattern.
Recent studies have reported that EGD is more advantageous than laryngoscopy for detecting lesions at the junction of the hypopharynx and esophagus.24–26 However, full inspection of the hypopharynx and upper esophageal sphincter is difficult because nausea shortens the observation time at this site, which may lead to missed cancer diagnoses there. Missed detections also occur readily in sections such as the esophagus, the lower curvature of the gastric cardia, and the posterior wall.13,27 Chang et al found that lesions in the pharynx, gastric angle, gastric retroflex view, gastric antrum, and first portion of the duodenum are particularly likely to be missed.6 Several studies have examined AI involvement in managing endoscopy performance. In a previous study, a combination of a deep CNN (DCNN) and long short-term memory was used to recognize parts of the gastrointestinal tract.13 He et al developed a deep learning-based anatomical site classification approach for EGD images, in which the region-of-interest (ROI) data were divided into 12 classes according to a proposed guideline and the British guideline, under two conditions, "not available" (NA) and "available" (A), resulting in four datasets. The most commonly used CNN models, including ResNet-50, Inception-v3, VGG-11-bn, VGG-16-bn, and DenseNet-121, were pre-trained and tested on these four datasets; DenseNet-121 performed best, with an average overall accuracy of 88.11%, and the CNN models without NA outperformed their counterparts with NA by 8.87% and 8.67% in overall accuracy.11 By contrast, the average accuracy of our AIMED system in classifying 27 gastric sites was 99.40%, with per-site accuracies ranging from 98.63% to 99.89%. A striking study by Wu et al developed a DCNN-based system to detect EGC and distinguish gastric locations at expert level.28 A grid stomach model was used to recognize blind spots during EGD. The DCNN achieved accuracies of 90% and 65.9% in classifying gastric locations into 10 and 26 parts, respectively; its accuracy was also compared with that of experts, seniors, and novices, with the experts achieving 90.2% and 63.8% on the same tasks. This real-time model helped ensure observation of the whole stomach, an essential prerequisite for EGC detection. Wu et al further extended this work on blind-spot monitoring during EGD in another study,29 showing that the rate of blind spots decreased markedly in the WISENSE group compared with controls (5.86% vs 22.46%, p<0.001) and supporting the efficacy of WISENSE as an assistive endoscopic tool. Shortly afterward, Wu et al developed ENDOANGEL, based on a DCNN and deep reinforcement learning (DRL), and verified it in a multicenter randomized controlled trial including 498 patients in the ENDOANGEL group and 504 in the control group.30 Compared with the control group, the mean number of blind spots dropped from 9.82 to 5.38 with the use of ENDOANGEL.
In another study, Takiyama et al constructed a CNN-based diagnostic program.31 Receiver operating characteristic analysis of the trained CNN's performance in classifying the anatomical location of EGD images showed areas under the curve (AUCs) of 1.00 for the larynx and esophagus and 0.99 for the stomach and duodenum. The trained CNN could also distinguish gastric anatomical sites, with AUCs of 0.99 for the upper, middle, and lower sections. In addition, Choi et al reported a CNN model that classified EGD images into one of eight regions of the upper gastrointestinal tract with an accuracy of 97.58%.22 These observations suggest that AI-based systems may reduce blind spots and positively affect endoscopic quality. However, most of these studies are at an early stage and lack validation in large-scale clinical trials. There is an urgent need to establish the clinical value of such AI-based systems in clinical trials, and the precise identification of each section of the gastrointestinal tract remains a challenge. Thus, much more emphasis should be placed on involving AI in the quality control of endoscopy. Our research team has therefore been striving to develop an accurate AI-based system that identifies gastric sites without blind spots. A total of 160,308 images obtained during endoscopy were randomly split into a training set and a validation set at a ratio of 9:1. The MobileNetV3-large model was introduced to improve both processing speed and accuracy. We found that the accuracy, specificity, and sensitivity were 99.40%, 99.69%, and 91.85%, respectively. These observations strongly highlight AIMED's potential in classifying the different anatomical sites of the stomach. Identifying each stomach area can clearly show the current inspection status and effectively monitor blind spots in real time during endoscopic operations.
The AI-based system can record in real time the part being inspected and provide prompts for physicians: fully examined areas are marked in yellow, so the operator can visually identify unexamined areas and track the actual number of examined parts. Our findings verified the effectiveness of the AI-based model in reflecting the examination status of the entire stomach in a timely manner. Previous studies have highlighted the close association between endoscopy quality and inspection time.27,32 The ESGE guideline indicates that an EGD examination should ideally last more than 7 minutes.27,32 Many factors can make the examination time insufficient or inaccurately recorded, such as intensive schedules, manual recording, and inadequate skills. Notably, the developed AI-based model records the operating time and functions as a useful reminder.
Moreover, the results may be influenced by the operator's skill, experience, and familiarity with AI. Physicians' attitudes toward AI-based systems may also affect the results, emphasizing the need for AI-related training and practice for each physician. White-light imaging has been identified as the standard protocol for examining gastric areas.33 However, rapid advances in endoscopic technology have introduced new techniques such as narrow-band imaging, blue-laser imaging, and linked-color imaging, which enhance viewing quality and color contrast and thereby improve the identification of gastric areas and lesions.33 Thus, image enhancement and scope configuration, as well as the structure-weighting and color-enhancement capabilities of endoscopic systems, should be considered to optimize the examination. In this study, the exclusive use of white-light imaging may have limited visibility and posed challenges in detecting different gastric anatomical sites. Although high accuracy rates were obtained, further improvements in image quality and operator skill may enhance the efficacy of AI in future research.
The integration of advanced endoscopy with AI technology holds promise for benefiting patients and improving operational standards. Further exploration of other endoscopy types combined with the AI-based system is already planned for future studies. The AI algorithm used in this study showed great potential in identifying different gastric anatomical sites, which was also verified in a clinical trial. However, the current study has limitations, including its relatively small sample size and single-center design, which may limit the statistical power of the findings. Larger sample sizes and data from multiple centers are needed to confirm the algorithm's efficacy in other settings, as more diverse data can mitigate overfitting and improve generalizability.
Additionally, the dynamic video data obtained during endoscopic procedures frequently contain motion blur and "noise" from artifacts such as reflections, foam, mucus, and folds, which can obscure visibility and confuse the operator. To address this, frames were extracted from the video and non-contributory areas were removed during preprocessing. Previous studies have demonstrated that localization and segmentation can effectively minimize noise by masking the target images.34–36 Substantial efforts have been made to reduce noise during endoscopic operations. This study focused on evaluating the AI-based system's effectiveness in identifying different anatomical locations; future studies will further investigate AI's potential in identifying gastric lesions. From a broader perspective, the application of this system could greatly improve lesion detection, bringing substantial benefits to both physicians and patients.
Conclusion
The AI-based system can accurately and efficiently identify different gastric anatomical sites and display the real-time inspection status, supporting the operator in achieving a comprehensive examination of the stomach and enhancing the quality control of endoscopy.
Abbreviations
AIMED, Artificial Intelligence of Medicine; GC, gastric cancer; AI, artificial intelligence; CNN, convolutional neural network; EGD, esophagogastroduodenoscopy; CE, capsule endoscopy; EGC, early gastric cancer; ESGE, European Society of Gastrointestinal Endoscopy; DCNN, deep convolutional neural network; DRL, deep reinforcement learning; ROI, region of interest; AUC, area under the curve.
Data Sharing Statement
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Ethics Approval and Consent to Participate
The studies involving human participants were reviewed and approved by the institutional review board of the Peking University Cancer Hospital (ethics board protocol number 2020KT02). The patients provided written informed consent to participate in this study. All procedures performed in this study involving human participants were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki Declaration and its later amendments.
Acknowledgments
We appreciate the hard work of our colleagues at the hospital's database center, who helped us store and retrieve the relevant data. We also thank Enago for the English language editing service.
Author Contributions
All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.
Funding
This study was funded by the Beijing Municipal Administration of Hospitals Incubating Program (PX2020047), the Beijing Hospitals Authority Clinical Medicine Development of Special Funding Support (XMLX202143), the Capital’s Funds for Health Improvement and Research (2020-2-2155), and Science Foundation of Peking University Cancer Hospital (202207).
Disclosure
The authors declare that they have no competing interests.
1. Lee W. Application of current image-enhanced endoscopy in gastric diseases. Clin Endosc. 2021;54(4):477–487. doi:10.5946/ce.2021.160
2. Choi J, Shin K, Jung J, et al. Convolutional neural network technology in endoscopic imaging: artificial intelligence for endoscopy. Clin Endosc. 2020;53(2):117–126. doi:10.5946/ce.2020.054
3. Guo L, Gong H, Wang Q, et al. Detection of multiple lesions of gastrointestinal tract for endoscopy using artificial intelligence model: a pilot study. Surg Endosc. 2021;35(12):6532–6538. doi:10.1007/s00464-020-08150-x
4. Zong L, Abe M, Seto Y, Ji J. The challenge of screening for early gastric cancer in China. Lancet. 2016;388(10060):2606. doi:10.1016/S0140-6736(16)32226-7
5. Uno Y. Prevention of gastric cancer by Helicobacter pylori eradication: a review from Japan. Cancer Med. 2019;8(8):3992–4000. doi:10.1002/cam4.2277
6. Chang YY, Yen HH, Li PC, et al. Upper endoscopy photodocumentation quality evaluation with novel deep learning system. Dig Endosc. 2022;34(5):994–1001. doi:10.1111/den.14179
7. Cohen J, Pike IM. Defining and measuring quality in endoscopy. Gastrointest Endosc. 2015;81(1):1–2. doi:10.1016/j.gie.2014.07.052
8. Rey JF, Lambert R. ESGE recommendations for quality control in gastrointestinal endoscopy: guidelines for image documentation in upper and lower GI endoscopy. Endoscopy. 2001;33(10):901–903. doi:10.1055/s-2001-42537
9. Tziortziotis I, Laskaratos FM, Coda S. Role of artificial intelligence in video capsule endoscopy. Diagnostics. 2021;11(7):1192. doi:10.3390/diagnostics11071192
10. Aoki T, Yamada A, Kato Y, et al. Automatic detection of various abnormalities in capsule endoscopy videos by a deep learning-based system: a multicenter study. Gastrointest Endosc. 2021;93(1):165–173e161. doi:10.1016/j.gie.2020.04.080
11. He Q, Bano S, Ahmad OF, et al. Deep learning-based anatomical site classification for upper gastrointestinal endoscopy. Int J Comput Assist Radiol Surg. 2020;15(7):1085–1094. doi:10.1007/s11548-020-02148-5
12. Le Berre C, Sandborn WJ, Aridhi S, et al. Application of artificial intelligence to gastroenterology and hepatology. Gastroenterology. 2020;158(1):76–94.e72. doi:10.1053/j.gastro.2019.08.058
13. Li YD, Zhu SW, Yu JP, et al. Intelligent detection endoscopic assistant: an artificial intelligence-based system for monitoring blind spots during esophagogastroduodenoscopy in real-time. Dig Liver Dis. 2021;53(2):216–223. doi:10.1016/j.dld.2020.11.017
14. Zhuang H, Bao A, Tan Y, et al. Application and prospect of artificial intelligence in digestive endoscopy. Expert Rev Gastroenterol Hepatol. 2022;16(1):21–31. doi:10.1080/17474124.2022.2020646
15. Igarashi S, Sasaki Y, Mikami T, Sakuraba H, Fukuda S. Anatomical classification of upper gastrointestinal organs under various image capture conditions using AlexNet. Comput Biol Med. 2020;124:103950. doi:10.1016/j.compbiomed.2020.103950
16. Liu Y, Lin D, Li L, et al. Using machine-learning algorithms to identify patients at high risk of upper gastrointestinal lesions for endoscopy. J Gastroenterol Hepatol. 2021;36(10):2735–2744. doi:10.1111/jgh.15530
17. Barua I, Vinsard DG, Jodal HC, et al. Artificial intelligence for polyp detection during colonoscopy: a systematic review and meta-analysis. Endoscopy. 2021;53(3):277–284. doi:10.1055/a-1201-7165
18. Yao K. The endoscopic diagnosis of early gastric cancer. Ann Gastroenterol. 2013;26(1):11–22.
19. Bisschops R, Areia M, Coron E, et al. Performance measures for upper gastrointestinal endoscopy: a European Society of Gastrointestinal Endoscopy (ESGE) quality improvement initiative. Endoscopy. 2016;48(9):843–864. doi:10.1055/s-0042-113128
20. Arnold M, Park JY, Camargo MC, Lunet N, Forman D, Soerjomataram I. Is gastric cancer becoming a rare disease? A global assessment of predicted incidence trends to 2035. Gut. 2020;69:823–829. doi:10.1136/gutjnl-2019-320234
21. Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2018;68(6):394–424. doi:10.3322/caac.21492
22. Choi SJ, Khan MA, Choi HS, et al. Development of artificial intelligence system for quality control of photo documentation in esophagogastroduodenoscopy. Surg Endosc. 2022;36(1):57–65. doi:10.1007/s00464-020-08236-6
23. Nagtegaal ID, Odze RD, Klimstra D, et al. The 2019 WHO classification of tumours of the digestive system. Histopathology. 2020;76(2):182–188. doi:10.1111/his.13975
24. Di L, Fu KI, Xie R, et al. A modified endoscopic submucosal dissection for a superficial hypopharyngeal cancer: a case report and technical discussion. BMC Cancer. 2017;17(1):712. doi:10.1186/s12885-017-3685-7
25. Kuwabara T, Hiyama T, Oka S, et al. Clinical features of pharyngeal intraepithelial neoplasias and outcomes of treatment by endoscopic submucosal dissection. Gastrointest Endosc. 2012;76(6):1095–1103. doi:10.1016/j.gie.2012.07.032
26. Muto M, Satake H, Yano T, et al. Long-term outcome of transoral organ-preserving pharyngeal endoscopic resection for superficial pharyngeal cancer. Gastrointest Endosc. 2011;74(3):477–484. doi:10.1016/j.gie.2011.04.027
27. Moon HS. Improving the endoscopic detection rate in patients with early gastric cancer. Clin Endosc. 2015;48(4):291–296. doi:10.5946/ce.2015.48.4.291
28. Wu L, Zhou W, Wan X, et al. A deep neural network improves endoscopic detection of early gastric cancer without blind spots. Endoscopy. 2019;51(6):522–531. doi:10.1055/a-0855-3532
29. Wu L, Zhang J, Zhou W, et al. Randomised controlled trial of WISENSE, a real-time quality improving system for monitoring blind spots during esophagogastroduodenoscopy. Gut. 2019;68(12):2161–2169. doi:10.1136/gutjnl-2018-317366
30. Wu L, He X, Liu M, et al. Evaluation of the effects of an artificial intelligence system on endoscopy quality and preliminary testing of its performance in detecting early gastric cancer: a randomized controlled trial. Endoscopy. 2021;53(12):1199–1207. doi:10.1055/a-1350-5583
31. Takiyama H, Ozawa T, Ishihara S, et al. Automatic anatomical classification of esophagogastroduodenoscopy images using deep convolutional neural networks. Sci Rep. 2018;8(1):7497. doi:10.1038/s41598-018-25842-6
32. Teh JL, Tan JR, Lau LJ, et al. Longer examination time improves detection of gastric cancer during diagnostic upper gastrointestinal endoscopy. Clin Gastroenterol Hepatol. 2015;13(3):480–487.e482. doi:10.1016/j.cgh.2014.07.059
33. Zhang Q, Wang F, Chen ZY, et al. Comparison of the diagnostic efficacy of white light endoscopy and magnifying endoscopy with narrow band imaging for early gastric cancer: a meta-analysis. Gastric Cancer. 2016;19(2):543–552. doi:10.1007/s10120-015-0500-5
34. Wu Z, Ge R, Wen M, et al. ELNet:Automatic classification and segmentation for esophageal lesions using convolutional neural network. Med Image Anal. 2021;67:101838. doi:10.1016/j.media.2020.101838
35. Yang B, Chen W, Luo H, Tan Y, Liu M, Wang Y. Neuron image segmentation via learning deep features and enhancing weak neuronal structures. IEEE J Biomed Health Inform. 2021;25(5):1634–1645. doi:10.1109/JBHI.2020.3017540
36. Hiroyasu T, Hayashinuma K, Ichikawa H, Yagi N. Preprocessing with image denoising and histogram equalization for endoscopy image analysis using texture analysis. Annu Int Conf IEEE Eng Med Biol Soc. 2015;2015:789–792. doi:10.1109/EMBC.2015.7318480
Peng Yuan,1,* Zhong-Hua Ma,2,* Yan Yan,1,* Shi-Jie Li,2 Jing Wang,2 Qi Wu1
1State Key Laboratory of Holistic Integrative Management of Gastrointestinal Cancers, Beijing Key Laboratory of Carcinogenesis and Translational Research, Department of Endoscopy, Peking University Cancer Hospital & Institute, Beijing, 100142, People’s Republic of China; 2Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Endoscopy, Peking University Cancer Hospital & Institute, Beijing, 100142, People’s Republic of China
*These authors contributed equally to this work
Correspondence: Qi Wu, State Key Laboratory of Holistic Integrative Management of Gastrointestinal Cancers, Beijing Key Laboratory of Carcinogenesis and Translational Research, Department of Endoscopy, Peking University Cancer Hospital & Institute, Beijing, 100142, People’s Republic of China, Email [email protected]
Abstract
Background: A full examination of the gastrointestinal tract is an essential prerequisite for effectively detecting gastrointestinal lesions. However, efficient tools to analyze and recognize gastric anatomical locations are lacking, preventing a complete portrayal of the entire stomach. This study aimed to evaluate the effectiveness of artificial intelligence in identifying gastric anatomical sites by analyzing esophagogastroduodenoscopy images.
Methods: Using endoscopic images, we developed a system called Artificial Intelligence of Medicine (AIMED) based on convolutional neural networks with a MobileNetV3-large backbone. The performance of artificial intelligence in recognizing anatomical sites in esophagogastroduodenoscopy images was evaluated on a large number of cases. Primary outcomes included diagnostic accuracy, sensitivity, and specificity.
Results: A total of 160,308 images from 27 categories of the upper endoscopy anatomy classification were included in this retrospective study. As a test group, 16,031 esophagogastroduodenoscopy images covering the 27 categories were used to evaluate AIMED's performance in identifying gastric anatomical sites. The convolutional neural network's accuracy, sensitivity, and specificity were 99.40%, 91.85%, and 99.69%, respectively.
Conclusion: The AIMED system achieved high accuracy in recognizing gastric anatomical sites and could assist the operator in enhancing the quality control of endoscopy. Moreover, it could contribute to a more standardized endoscopic performance. Overall, our findings indicate that artificial-intelligence-based systems can be indispensable to the endoscopic revolution (clinical trial registration number: NCT04384575 (12/05/2020)).