
Abstract

The diameter of the optic nerve sheath is an important indicator for assessing intracranial pressure in critically ill patients. Methods for measuring the optic nerve sheath diameter are generally divided into invasive and non-invasive approaches. Non-invasive methods are safer than invasive ones and have therefore gained popularity. Among the non-invasive methods, using deep learning to process ocular ultrasound images of critically ill patients and promptly output the optic nerve sheath diameter offers significant advantages. This paper proposes CBC-YOLOv5s, an optic nerve sheath ultrasound image segmentation method that integrates local and global features. First, it introduces the CBC-Backbone feature extraction network, which consists of dual-layer C3 Swin-Transformer (C3STR) and dual-layer Bottleneck Transformer (BoT3) modules. The multi-layer convolutions and residual connections of the C3STR modules focus on the local features of the optic nerve sheath, while the Window Transformer Attention (WTA) mechanism in the C3STR module and the Multi-Head Self-Attention (MHSA) in the BoT3 module enhance the model’s understanding of its global features. The extracted local and global features are fully integrated in the Spatial Pyramid Pooling Fusion (SPPF) module. Additionally, the CBC-Neck feature pyramid is proposed, which includes a single-layer C3STR module and a three-layer CReToNeXt (CRTN) module. During upsampling feature fusion, the C3STR module enhances the local and global awareness of the fused features; during downsampling feature fusion, the multi-level residual design of the CRTN module helps the network better capture the global features of the optic nerve sheath within the fused features. Together, these modules achieve a thorough integration of local and global features, enabling the model to identify optic nerve sheath boundaries efficiently and accurately, even when the ocular ultrasound images are blurry or the boundaries are unclear. The Z2HOSPITAL-5000 dataset collected from Zhejiang University Second Hospital was used for the experiments. Compared to the widely used YOLOv5s and U-Net algorithms, the proposed method shows improved performance on the blurry test set: its precision, recall, and Intersection over Union (IoU) are 4.1%, 2.1%, and 4.5% higher than those of YOLOv5s, and 9.2%, 21%, and 19.7% higher than those of U-Net, respectively.
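
To make the described composition more concrete, below is a minimal PyTorch sketch (not the authors' code) of two of the building blocks named in the abstract: a BoT-style bottleneck whose spatial mixing is done with multi-head self-attention, and a YOLOv5-style SPPF block that fuses the resulting features. Module names, channel widths, and head counts are illustrative assumptions; only the overall pattern of convolutional projections, global self-attention with a residual connection, and pyramid pooling follows the description above.

import torch
import torch.nn as nn

class MHSABottleneck(nn.Module):
    # Bottleneck whose spatial mixing uses multi-head self-attention,
    # in the spirit of the MHSA inside the BoT3 module described above.
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        # x: (B, C, H, W) feature map -> (B, H*W, C) token sequence
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)
        out, _ = self.attn(seq, seq, seq)   # global self-attention over all positions
        seq = self.norm(seq + out)          # residual connection + layer norm
        return seq.transpose(1, 2).reshape(b, c, h, w)

class SPPF(nn.Module):
    # YOLOv5-style spatial pyramid pooling: repeated max-pooling + channel concat.
    def __init__(self, channels: int, k: int = 5):
        super().__init__()
        hidden = channels // 2
        self.cv1 = nn.Conv2d(channels, hidden, kernel_size=1)
        self.cv2 = nn.Conv2d(hidden * 4, channels, kernel_size=1)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        p1 = self.pool(x)
        p2 = self.pool(p1)
        p3 = self.pool(p2)
        return self.cv2(torch.cat([x, p1, p2, p3], dim=1))

if __name__ == "__main__":
    feat = torch.randn(1, 64, 20, 20)       # hypothetical backbone feature map
    fused = SPPF(64)(MHSABottleneck(64)(feat))
    print(fused.shape)                      # torch.Size([1, 64, 20, 20])

In the full CBC-YOLOv5s design, blocks of this kind would be stacked inside the backbone and neck described in the abstract; the sketch only illustrates how attention-based mixing and pooling-based fusion can operate on the same feature map.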

Details

Title
Optic Nerve Sheath Ultrasound Image Segmentation Based on CBC-YOLOv5s
Author
Chu, Yonghua 1; Xu, Jinyang 2; Wu, Chunshuang 3; Ye, Jianping 1; Zhang, Jucheng 4; Shen, Lei 2; Wang, Huaxia 5; Yao, Yudong 6

1 Department of Clinical Engineering, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou 310009, China; [email protected] (Y.C.); [email protected] (J.Y.)
2 College of Communication Engineering, Hangzhou Dianzi University, Hangzhou 310018, China; [email protected]
3 Department of Emergency Medicine, School of Medicine, Second Affiliated Hospital, Zhejiang University, Hangzhou 310009, China; [email protected]; Key Laboratory of The Diagnosis and Treatment of Severe Trauma and Burn of Zhejiang Province, Zhejiang Province Clinical Research Center for Emergency and Critical Care Medicine, Hangzhou 310009, China
4 Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Department of Clinical Engineering, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou 310009, China
5 Department of Electrical and Computer Engineering, Rowan University, Glassboro, NJ 08028, USA; [email protected]
6 Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA; [email protected]
First page
3595
Publication year
2024
Publication date
2024
Publisher
MDPI AG
e-ISSN
2079-9292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3110458724
Copyright
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).