Abstract

We aim to develop a deep-learning-based method for automatic proximal femur segmentation in quantitative computed tomography (QCT) images. We propose a spatial transformation V-Net (ST-V-Net), which combines a V-Net with a spatial transform network (STN) to extract the proximal femur from QCT images. The STN incorporates a shape prior into the segmentation network as a constraint and guidance for model training, which improves model performance and accelerates convergence. In addition, a multi-stage training strategy is adopted to fine-tune the weights of the ST-V-Net. We performed experiments on a QCT dataset of 397 subjects. In experiments on the entire cohort, and then on male and female subjects separately, 90% of the subjects were used for training with ten-fold stratified cross-validation, and the remaining subjects were used to evaluate model performance. On the entire cohort, the proposed model achieved a Dice similarity coefficient (DSC) of 0.9888, a sensitivity of 0.9966 and a specificity of 0.9988. Compared with V-Net, the proposed ST-V-Net reduced the Hausdorff distance from 9.144 to 5.917 mm and the average surface distance from 0.012 to 0.009 mm. This quantitative evaluation demonstrates the excellent performance of the proposed ST-V-Net for automatic proximal femur segmentation in QCT images. Moreover, ST-V-Net illustrates how incorporating a shape prior into segmentation can further improve model performance.
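To make the reported figures concrete, below is a minimal sketch of how the overlap and distance metrics quoted above (DSC, sensitivity, specificity, Hausdorff distance) are conventionally computed from a predicted and a ground-truth binary mask. This is not the authors' evaluation code: NumPy/SciPy are assumed, the mask names are illustrative, and distances come out in voxel units (multiply by the voxel spacing to obtain mm; the paper's Hausdorff and average surface distances may additionally be surface-based).

# Illustrative metric sketch (not the authors' code). Assumes boolean
# NumPy masks of identical shape; distances are in voxel units.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    # DSC = 2|A intersect B| / (|A| + |B|)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def sensitivity(pred: np.ndarray, gt: np.ndarray) -> float:
    # TP / (TP + FN): fraction of femur voxels recovered by the prediction
    tp = np.logical_and(pred, gt).sum()
    return tp / gt.sum()

def specificity(pred: np.ndarray, gt: np.ndarray) -> float:
    # TN / (TN + FP): fraction of background voxels left as background
    tn = np.logical_and(~pred, ~gt).sum()
    return tn / (~gt).sum()

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    # Symmetric Hausdorff distance between the two voxel point sets
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

Applied to a held-out segmentation, dice(pred, gt) produces the kind of per-subject DSC that, averaged over the test set, corresponds to values such as the 0.9888 reported above.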

Details

Title
ST-V-Net: incorporating shape prior into convolutional neural networks for proximal femur segmentation
Author
Zhao, Chen 1; Keyak, Joyce H. 2; Tang, Jinshan 3; Kaneko, Tadashi S. 4; Khosla, Sundeep 5; Amin, Shreyasee 6; Atkinson, Elizabeth J. 7; Zhao, Lan-Juan 8; Serou, Michael J. 9; Zhang, Chaoyang 10; Shen, Hui 8; Deng, Hong-Wen 8; Zhou, Weihua 3

1  Michigan Technological University, Department of Applied Computing, Houghton, USA (GRID:grid.259979.9) (ISNI:0000 0001 0663 5937)
2  University of California, Irvine, Department of Radiological Sciences, Department of Mechanical and Aerospace Engineering, Department of Biomedical Engineering, and Chao Family Comprehensive Cancer Center, Irvine, USA (GRID:grid.266093.8) (ISNI:0000 0001 0668 7243)
3  Michigan Technological University, Department of Applied Computing, Houghton, USA (GRID:grid.259979.9) (ISNI:0000 0001 0663 5937); Michigan Technological University, Center of Biocomputing and Digital Health, Institute of Computing and Cybersystems, and Health Research Institute, Houghton, USA (GRID:grid.259979.9) (ISNI:0000 0001 0663 5937)
4  University of California, Irvine, Department of Radiological Sciences, Irvine, USA (GRID:grid.266093.8) (ISNI:0000 0001 0668 7243)
5  Mayo Clinic, Division of Endocrinology, Department of Medicine, Rochester, USA (GRID:grid.66875.3a) (ISNI:0000 0004 0459 167X)
6  Mayo Clinic, Division of Epidemiology, Department of Health Sciences Research, and Division of Rheumatology, Department of Medicine, Rochester, USA (GRID:grid.66875.3a) (ISNI:0000 0004 0459 167X)
7  Mayo Clinic, Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Rochester, USA (GRID:grid.66875.3a) (ISNI:0000 0004 0459 167X)
8  Tulane University, School of Medicine, Division of Biomedical Informatics and Genomics, Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, New Orleans, USA (GRID:grid.265219.b) (ISNI:0000 0001 2217 8588)
9  Tulane University School of Medicine, Department of Radiology, New Orleans, USA (GRID:grid.265219.b) (ISNI:0000 0001 2217 8588)
10  University of Southern Mississippi, School of Computing Sciences and Computer Engineering, Hattiesburg, USA (GRID:grid.267193.8) (ISNI:0000 0001 2295 628X)
Pages
2747-2758
Publication year
2023
Publication date
Jun 2023
Publisher
Springer Nature B.V.
ISSN
2199-4536
e-ISSN
2198-6053
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2825544159
Copyright
© The Author(s) 2021. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.