Abstract
The purpose of this study was to propose a continuity-aware contextual network (Canal-Net) for automatic and robust 3D segmentation of the mandibular canal (MC), with consistently high accuracy throughout the entire MC volume, in cone-beam CT (CBCT) images. The Canal-Net was designed as a 3D U-Net with bidirectional convolutional long short-term memory (ConvLSTM) under a multi-task learning framework. Specifically, the Canal-Net learned the 3D anatomical context of the MC by incorporating spatio-temporal features from the ConvLSTM, and complementarily learned the structural continuity of the overall MC volume through multi-planar projection losses within the multi-task learning framework. The Canal-Net showed higher segmentation accuracy in 2D and 3D performance metrics (p < 0.05) and, in particular, a significant improvement in Dice similarity coefficient scores and mean curve distance (p < 0.05) throughout the entire MC volume, compared with other popular deep learning networks. The Canal-Net thus achieved consistently high accuracy in 3D segmentation of the entire MC despite areas of low visibility caused by an unclear and ambiguous cortical bone layer. In conclusion, the Canal-Net enables automatic and robust 3D segmentation of the entire MC volume by improving the structural continuity and boundary details of the MC in CBCT images.
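To make the two key ideas in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the module structure, channel sizes, the sum-fusion of the forward/backward ConvLSTM passes, and the use of max-intensity projections for the multi-planar losses are all illustrative assumptions inferred from the abstract alone. It shows a small 3D encoder-decoder with a bidirectional ConvLSTM over the slice axis, trained with a voxel-wise loss plus projection losses along the three anatomical planes.

```python
# Minimal sketch (assumptions, not the published Canal-Net): a toy 3D
# encoder-decoder with a bidirectional ConvLSTM bottleneck and a
# multi-planar projection loss built from max-projections.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLSTMCell(nn.Module):
    """Single 2D ConvLSTM cell; one conv computes all four gates."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class BiConvLSTM(nn.Module):
    """Runs a ConvLSTM forward and backward along the slice (depth) axis."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.fwd = ConvLSTMCell(in_ch, hid_ch)
        self.bwd = ConvLSTMCell(in_ch, hid_ch)

    def _run(self, cell, slices):
        b, _, hgt, wid = slices[0].shape
        h = slices[0].new_zeros(b, cell.hid_ch, hgt, wid)
        c = torch.zeros_like(h)
        out = []
        for x in slices:
            h, c = cell(x, (h, c))
            out.append(h)
        return out

    def forward(self, x):                       # x: (B, C, D, H, W)
        slices = list(x.unbind(dim=2))          # D slices of (B, C, H, W)
        f = self._run(self.fwd, slices)
        bk = self._run(self.bwd, slices[::-1])[::-1]
        # Sum fusion of the two directions is an assumption.
        return torch.stack([a + b for a, b in zip(f, bk)], dim=2)

class CanalNetSketch(nn.Module):
    """Toy 3D U-Net-like encoder-decoder with a BiConvLSTM bottleneck."""
    def __init__(self, ch=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.lstm = BiConvLSTM(ch, ch)
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(ch, ch, 2, stride=2), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1))

    def forward(self, x):
        return self.dec(self.lstm(self.enc(x)))

def multi_planar_projection_loss(logits, target):
    """Compare max-projections of prediction and label along each volume
    axis (axial/coronal/sagittal), penalizing gaps in the canal silhouette."""
    prob = torch.sigmoid(logits)
    loss = 0.0
    for axis in (2, 3, 4):                      # D, H, W of (B, 1, D, H, W)
        loss = loss + F.binary_cross_entropy(prob.amax(dim=axis),
                                             target.amax(dim=axis))
    return loss / 3

if __name__ == "__main__":
    net = CanalNetSketch()
    vol = torch.randn(1, 1, 16, 32, 32)         # small CBCT-like volume
    lab = (torch.rand(1, 1, 16, 32, 32) > 0.98).float()
    logits = net(vol)
    # Multi-task objective: voxel-wise loss + multi-planar projection loss.
    loss = F.binary_cross_entropy_with_logits(logits, lab) \
           + multi_planar_projection_loss(logits, lab)
    loss.backward()
    print("total loss:", float(loss))
```

The projection term is one plausible reading of the "multi-planar projection losses": because a break anywhere along the canal changes its projected silhouette, penalizing silhouette mismatches in all three planes pushes the network toward the structural continuity the abstract emphasizes.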
Details
1 Seoul National University, Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul, Korea (GRID:grid.31501.36) (ISNI:0000 0004 0470 5905)
2 Seoul National University, Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul, Korea (GRID:grid.31501.36) (ISNI:0000 0004 0470 5905)
3 Vision AI Business Team, LG CNS, Seoul, Korea (GRID:grid.464630.3) (ISNI:0000 0001 0696 9566)
4 Seoul National University, Department of Periodontology, School of Dentistry and Dental Research Institute, Seoul, Korea (GRID:grid.31501.36) (ISNI:0000 0004 0470 5905)
5 Hansung University, Department of Electronics and Information Engineering, Seoul, Korea (GRID:grid.444079.a) (ISNI:0000 0004 0532 678X)
6 Seoul National University, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul, Korea (GRID:grid.31501.36) (ISNI:0000 0004 0470 5905)
7 Seoul National University, Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul, Korea (GRID:grid.31501.36) (ISNI:0000 0004 0470 5905); Seoul National University, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul, Korea (GRID:grid.31501.36) (ISNI:0000 0004 0470 5905)