Abstract

Successful ultrasound-guided supraclavicular block (SCB) requires an understanding of sonoanatomy and identification of the optimal view. Segmentation using a convolutional neural network (CNN) alone is limited in its ability to clearly determine the optimal view. This study describes the development of a computer-aided diagnosis (CADx) system using a CNN to help non-experts determine the optimal view for complete SCB in real time. Ultrasound videos were retrospectively collected from 881 patients to develop the CADx system (600 for the training and validation set and 281 for the test set). The CADx system included classification and segmentation approaches, with residual neural network (ResNet) and U-Net, respectively, as backbone networks. In the classification approach, an ablation study was performed to determine the optimal architecture and improve model performance. In the segmentation approach, a cascade structure, in which U-Net is connected to ResNet, was implemented. The performance of the two approaches was evaluated based on a confusion matrix. In the classification approach, ResNet34 with gated recurrent units and augmentation achieved the highest performance, with an average accuracy of 0.901, precision of 0.613, recall of 0.757, F1-score of 0.677, and AUROC of 0.936. In the segmentation approach, U-Net combined with ResNet34 and augmentation performed worse than the classification approach. The CADx system described in this study showed high performance in determining the optimal view for SCB. It could be extended to other anatomical regions and may aid clinicians in real-time settings.
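The accuracy, precision, recall, and F1-score reported above are standard quantities derived from a confusion matrix. As a minimal sketch (not the authors' evaluation code), assuming binary labels where 1 marks the optimal view, these metrics can be computed as follows:

```python
import numpy as np

def confusion_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 from binary labels.

    y_true, y_pred: 1-D arrays of 0/1 labels (1 = optimal view).
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)

    # Confusion-matrix cells for the positive class.
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))

    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) > 0 else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical example: 3 optimal-view frames, 5 non-optimal frames.
m = confusion_metrics([1, 1, 1, 0, 0, 0, 0, 0],
                      [1, 1, 0, 1, 0, 0, 0, 0])
```

With these toy labels (tp=2, fp=1, fn=1, tn=4), accuracy is 0.75 and precision, recall, and F1 are all 2/3 ≈ 0.667.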

Trial registration The protocol was registered with the Clinical Trial Registry of Korea (KCT0005822, https://cris.nih.go.kr).

Details

Title
Optimal view detection for ultrasound-guided supraclavicular block using deep learning approaches
Author
Jo, Yumin 1; Lee, Dongheon 2; Baek, Donghyeon 3; Choi, Bo Kyung 4; Aryal, Nisan 4; Jung, Jinsik 1; Shin, Yong Sup 1; Hong, Boohwi 5

1 Chungnam National University and Hospital, Department of Anaesthesiology and Pain Medicine, College of Medicine, Daejeon, Republic of Korea (GRID:grid.254230.2) (ISNI:0000 0001 0722 6377)
2 Chungnam National University and Hospital, Department of Biomedical Engineering, College of Medicine, Daejeon, Republic of Korea (GRID:grid.254230.2) (ISNI:0000 0001 0722 6377); Chungnam National University Hospital, Biomedical Research Institute, Daejeon, Republic of Korea (GRID:grid.411665.1) (ISNI:0000 0004 0647 2279)
3 Chungnam National University College of Medicine, Daejeon, Republic of Korea (GRID:grid.254230.2) (ISNI:0000 0001 0722 6377)
4 MTEG Co., Ltd, Seoul, Republic of Korea (GRID:grid.254230.2)
5 Chungnam National University and Hospital, Department of Anaesthesiology and Pain Medicine, College of Medicine, Daejeon, Republic of Korea (GRID:grid.254230.2) (ISNI:0000 0001 0722 6377); Chungnam National University Hospital, Biomedical Research Institute, Daejeon, Republic of Korea (GRID:grid.411665.1) (ISNI:0000 0004 0647 2279)
Pages
17209
Publication year
2023
Publication date
2023
Publisher
Nature Publishing Group
e-ISSN
2045-2322
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2875659195
Copyright
© The Author(s) 2023. corrected publication 2023. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.