1. Introduction
The rotator cuff stabilizes the glenohumeral joint during movement by compressing the humeral head against the glenoid [1]. It comprises the supraspinatus, infraspinatus, teres minor, and subscapularis muscles. Rotator cuff tears are the most likely source of shoulder pain in adults [2,3]. Their incidence is increasing with improving life expectancy, and tears may affect up to 20–40% of the population [4]. Although the exact pathogenesis remains controversial, a combination of intrinsic and extrinsic factors is likely responsible for most rotator cuff tears. Arthroscopic rotator cuff repair has become the standard of care for rotator cuff tears [5,6]. At times, distinguishing a rotator cuff tear from other conditions, such as adhesive capsulitis, solely through physical examination can be challenging. Therefore, imaging modalities play a crucial role in diagnosing rotator cuff tears. Magnetic resonance imaging (MRI) and ultrasonography (US) are the best noninvasive modalities for identifying and evaluating rotator cuff lesions [7,8]. MRI allows evaluation of the entire cuff with a sufficient field of view, whereas US provides a limited window on the rotator cuff tendons and is largely dependent on the operator's skill and experience. Because the rotator cuff tendons are curved structures surrounding the humeral head, a single imaging plane has limitations in evaluating the entire cuff: some lesions are well visualized in the coronal plane, while others are better seen in the sagittal or axial plane. Owing to these anatomical and pathological complexities, even experienced musculoskeletal radiologists require attention and time to interpret shoulder MRIs. In addition to the increasing incidence of tears, advances in scanning techniques have reduced scan times, allowing more examinations within a limited timeframe and resulting in a considerable increase in the number of MRIs that need to be read.
Despite the increase in the number of shoulder MRI scans, there is a shortage of experienced musculoskeletal radiologists, both geographically and in terms of availability over time. On a positive note, the growing number of shoulder MRI examinations provides a wealth of data for developing automated deep learning models for MRI interpretation.
With the advent of deep learning techniques, numerous models have been applied to screen and assist in labor-intensive radiological tasks in musculoskeletal imaging, such as bone age assessment in the hand or elbow, fracture detection in the axial or peripheral skeleton, arthritis grading in the knee or sacroiliac joints, muscle quality quantification, muscle and bone segmentation at various sites, and the prediction of clinical outcomes [9,10,11,12,13]. Most of these tasks are time-consuming, and some may even be impossible for a human radiologist to perform. In shoulder MRI, the diagnosis of rotator cuff tears and the quantification of rotator cuff muscle degeneration are common indications for applying deep learning techniques, as is imaging time acceleration [14,15,16,17,18,19,20]. Shoulder MRI typically consists of over a hundred images from various sequences and imaging planes, which takes considerable time to interpret. One of the primary roles of shoulder MRI is to screen for rotator cuff tears, and several previous studies have utilized deep learning techniques for rotator cuff tear detection in shoulder MRI [21,22,23,24,25]. Despite their good performance, these studies have limitations in terms of the input data and labeling methods that can be applied in clinical practice: they used only coronal images or non-fat-suppressed images, classified tears based on operative records, and did not consider subscapularis and infraspinatus tears. Because a rotator cuff tear can be obscured in a single imaging plane depending on its location and size, evaluation in all planes is required.
This study aimed to develop and validate a screening model for detecting a rotator cuff tear in all three planes of routine shoulder MRI using a deep neural network (DNN).
2. Materials and Methods
This study was approved by the Institutional Review Board of Korea University Anam Hospital. Shoulder MRI scans were conducted between January 2010 and September 2019. All shoulder MRIs were performed using 3-Tesla machines, including Magnetom TrioTrim, Skyra, and Prisma (Siemens, Erlangen, Germany), as well as Achieva (Philips, Best, The Netherlands). The shoulder MRIs were conducted with a dedicated shoulder coil, with patients in the supine position and their shoulder joints neutrally positioned, with palms facing upward. These scans included at least one fat-suppressed axial, coronal, and sagittal imaging plane, with the imaging planes set orthogonal to the glenohumeral joint. The exclusion criteria comprised individuals under 20 years of age, contrast-enhanced examinations, arthrographic examinations, postoperative images, and poor image quality due to factors such as motion artifacts and improper shoulder positioning. To ensure the highest standards of image quality, two board-certified musculoskeletal radiologists, each with more than 3 years of experience, assessed the appropriateness of each image. This assessment was based on both the radiologic reports and, on occasion, the images themselves. All images were stored in the Digital Imaging and Communications in Medicine (DICOM) format, a standard format for medical images, and they underwent a thorough anonymization process to protect patient privacy.
2.1. Image Labeling
Three board-certified musculoskeletal radiologists categorized the images as either “tear” or “no tear”: evident full or partial fiber disruption of the tendon was categorized as a “tear”, whereas normal tendon fibers or a simple signal change of the tendon without fiber disruption was regarded as “no tear”. All rotator cuff tears located in the supraspinatus, infraspinatus, teres minor, and subscapularis were meticulously examined in all axial, coronal, and sagittal planes of the shoulder MRI scans. Torn tendon spaces were segmented by trained researchers under the supervision of radiologists using AIX 2.0.2 (JLK Inc., Seoul, Republic of Korea). The flowchart of the methodology is shown in Figure 1.
The segmentation process involved drawing freeform lines outlining all rotator cuff tears, encompassing the supraspinatus, infraspinatus, and subscapularis, in all three imaging planes of fat-suppressed T2-weighted or proton density-weighted images (Figure 2). The cross-link function provided by the software assisted in identifying the point in the coronal image corresponding to the sagittal and axial images. In cases of multiple lesions, each rotator cuff tear was segmented separately. After segmentation, rectangular patches encompassing the irregularly shaped torn segments were automatically generated and used as inputs for the model.
2.2. Model Implementation
The dataset was randomly divided into 70% for training, 10% for tuning, and 20% for the final evaluation. The algorithm was designed to detect and localize rotator cuff tears. We used the original architecture of You Only Look Once (YOLO) v8 [26,27], chosen to cope with the high frequency of occlusion and the small spatial size of lesions in shoulder MRI. The network was deeply fine-tuned and trained with regions of interest (ROIs) of shoulder lesions and normal tendons. After training, we examined the location and classification of lesions in the test sets. YOLO partitions each image into an S × S grid and predicts bounding boxes and class probabilities for each cell [26,27]. Building on preceding iterations, YOLOv8 introduces a novel neural network structure incorporating both the Feature Pyramid Network (FPN) and the Path Aggregation Network (PAN), along with an annotation tool that streamlines the labeling procedure. This annotation tool offers several useful functionalities, including automated labeling, labeling shortcuts, and adaptable hotkeys, which together simplify image annotation for model training. A detection was required to achieve a score of 0.5 or higher, emphasizing the significance of both classification and detection [27]. All images were resized to 512 × 512 pixels for training and inference. To enhance the performance of the model, the training datasets were preprocessed via histogram matching to align the intensity distributions across all images. In addition, all images underwent intensity normalization by subtracting the mean and dividing by the standard deviation. Resizing was performed using third-order spline interpolation. Furthermore, various image augmentation techniques were employed, including adjustments to brightness, contrast, Gaussian noise, blur, inversion, and sharpness, as well as geometric modifications such as shifting, zooming, and rotation.
These augmentations were employed to mitigate scanner-specific biases and to bolster the resilience of the neural network against additional sources of variability unrelated to the radiological categories. Training used the Adam optimizer [28]; the tuning loss plateaued after an epoch, and the model with the lowest tuning loss was selected. The structure of the model is illustrated in Figure 3.
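As a minimal sketch of the preprocessing described above (histogram matching followed by zero-mean, unit-variance intensity normalization), assuming each MRI slice is a plain NumPy array — the function names are illustrative and not the authors' pipeline:

```python
import numpy as np

def normalize_intensity(img: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance normalization of a single slice."""
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-8)

def match_histogram(img: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map the intensity distribution of `img` onto that of `reference`
    using the classical CDF-matching approach."""
    src_vals, src_counts = np.unique(img.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / img.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, find the reference intensity at that quantile.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return np.interp(img.ravel(), src_vals, mapped).reshape(img.shape)
```

In a pipeline like the one described, histogram matching would be applied against a common reference slice before normalization, so that intensity distributions are aligned across scanners and vendors.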
These datasets were loaded onto a graphics processing unit (GPU) devbox server running Ubuntu 20.04 with CUDA 11.2 and cuDNN 11.1 (NVIDIA Corporation, Santa Clara, CA, USA), part of the NVIDIA deep learning software development kit. The GPU server contained four 48 GB NVIDIA A6000 GPUs. We used an initial learning rate of 0.001, decayed by a factor of 10 at each decay step.
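The stated schedule (an initial learning rate of 0.001 decayed by a factor of 10) corresponds to a simple step-decay function; the decay interval (`decay_step`) below is an assumption, since the text specifies only the factor:

```python
def step_decay_lr(initial_lr: float, epoch: int, decay_step: int = 10) -> float:
    # Divide the learning rate by 10 every `decay_step` epochs.
    # The decay interval is assumed; the paper gives only the decay factor.
    return initial_lr * (0.1 ** (epoch // decay_step))
```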
2.3. Statistical Analysis
We calculated the area under the receiver operating characteristic (ROC) curve (AUC) and accuracy using the pROC package (version 1.10) in R (version 1.42; R Foundation for Statistical Computing, Vienna, Austria). DeLong tests were performed to compare the AUC values of the classifier models using the same package. Statistical significance was set at a two-sided p < 0.05.
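The AUC computation performed here with pROC in R can be illustrated in Python via the Mann-Whitney formulation of the AUC — the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. This is a generic illustration, not the study's code:

```python
import numpy as np

def auc_roc(labels: np.ndarray, scores: np.ndarray) -> float:
    """AUC via the Mann-Whitney U statistic.

    `labels` are 0/1 ground-truth classes; `scores` are model confidences.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count wins and ties over every positive/negative pair.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

The DeLong test used for comparing AUCs is a separate procedure (a variance estimate over these pairwise comparisons) and is provided by pROC's `roc.test` in R.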
3. Results
3.1. Subject Demographics
A total of 794 shoulder MRI scans were included (374 men and 420 women; aged 59 ± 11 years). Out of these, 100 subjects had no evidence of rotator cuff tear, while the remaining 694 had a rotator cuff tear. We extracted a total of 8756 image patches from patients with a confirmed rotator cuff tear and 2052 patches from those with no rotator cuff tears. The data distribution is presented in Table 1.
3.2. Performance of the Model
We first evaluated the performance of the model using the intersection over union (IoU) and the confidence score (the classification value of the lesion) to assess the agreement between the predicted bounding box and the ground truth. A predicted lesion in the test dataset was defined as correct if its IoU exceeded 0.5. In addition, we used non-maximum suppression (NMS) to remove duplicate boxes during the inference of tears. To evaluate the detection performance of YOLO v8, the cutoff threshold (0.2) was determined using the sensitivity and the average number of false positives in the first algorithm.
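The IoU criterion and the NMS step described above can be sketched as follows, with boxes given as (x1, y1, x2, y2) corner coordinates — a generic illustration rather than the study's implementation:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    then drop any remaining box overlapping it above `iou_thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

With these definitions, two overlapping detections of the same tear collapse to the single highest-confidence box, while a detection is scored as correct only when its IoU with the ground-truth box exceeds 0.5.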
The highest AUC was achieved when all imaging planes were used (0.94), and this difference was statistically significant compared with each individual imaging plane (p = 0.0002, 0.00006, and 0.00002, respectively). Sensitivity, precision, and accuracy were also highest in the model with all-plane training. Among single imaging planes, the axial plane showed the highest AUC (0.71), followed by the sagittal (0.70) and coronal (0.68) planes. The highest accuracy was achieved when using all imaging planes (96%). Regarding accuracy with a single imaging plane, the sagittal plane showed the highest accuracy (70%), outperforming the axial and coronal planes (58% and 55%, respectively). The performance of the model is summarized in Table 2, and the ROC curves for the model using all imaging planes and each individual imaging plane are shown in Figure 4.
4. Discussion
In this study, we developed a screening algorithm based on YOLO v8 [26,27] to predict rotator cuff tear in shoulder MRI using high-quality datasets confirmed by expert radiologists. We used whole MRI images as the input data and used patch images drawn by musculoskeletal radiologists to train and fine-tune our algorithms. The advantage of this network is that it can simultaneously predict rotator cuff tear at various locations. It is important to determine whether the detection ability of the algorithm is similar to that of the expert radiologists in a computer-aided detection and diagnosis system. To the best of our knowledge, this is the first study to screen rotator cuff tear at all locations in all imaging planes.
The use of AI, especially deep learning techniques, has been introduced in various fields of musculoskeletal imaging, including radiography, computed tomography (CT), MRI, and US. The integration of deep learning techniques into radiography has yielded promising outcomes. Studies have shown its capability for swift and precise bone age assessment in hand or elbow radiographs, fracture detection across diverse anatomical regions, and the grading of osteoarthritis in knee radiographs [9,10,11,29]. In shoulder imaging, Kim et al. suggested using a deep learning model for ruling out rotator cuff tears in shoulder radiographs, which redefined the role of the conventional radiograph [30]. Lee et al. reported a deep-learning-based model for analyzing rotator cuff tears using ultrasound imaging [31]. Studies on quantifying rotator cuff muscle quality using deep learning have primarily relied on CT and MRI and have shown promising results [16,32]. These tasks are recognized as labor-intensive, time-consuming, and, in some cases, even impossible for radiologists to perform. In the context of shoulder MRI, the evaluation of rotator cuff tears presents another promising application for deep learning, especially considering the increasing number of examinations and the lack of experts [21,22,23,24,25].
Shoulder MRI is difficult to interpret, even for clinicians, because of the anatomic complexity of the shoulder joint, with its small curved tendon and ligament structures. All three planes should be examined carefully because a partial volume averaging effect can obscure a lesion when only a single imaging plane is consulted [33,34]. Although several studies have applied deep learning techniques to interpret shoulder MRI for diagnosing rotator cuff tears, there have been limitations owing to the quality of the input data regarding the imaging sequences, imaging planes, and reference standards [21,22,23,24,25]. Kim et al. [21] and Sezer et al. [22] proposed models for classifying rotator cuff tears from MRI, but their models were trained using only coronal images. Shim et al. [23] reported a rotator cuff tear classification model using a 3D convolutional neural network with three-plane images. However, the labeling was based on arthroscopic findings and used the DeOrio and Cofield classification system [35], which usually evaluates supraspinatus tears. Yao et al. [24] proposed a deep learning model for detecting only supraspinatus tears on T2-weighted coronal images. The far anterior portion of the supraspinatus and the far posterior part of the infraspinatus are not perpendicular to the coronal plane, resulting in unclear delineation of rotator cuff tears at these locations in coronal images. This phenomenon applies to the other imaging planes and to other rotator cuff areas as well. Many previous studies focused only on the supraspinatus tendon or did not mention subscapularis tears, which might have been overlooked and are sometimes described as “hidden lesions” [36]. Although the supraspinatus is the most common location of rotator cuff tears, the subscapularis tendon, which is best seen in the sagittal and axial planes, should be included in screening.
Our model detects rotator cuff tears in all imaging planes and assists in the diagnosis of rotator cuff tears within the numerous images found in shoulder MRI. This capability is potentially valuable for both diagnosis and treatment planning.
In our model implementation, we utilized the YOLOv8 model. In a preliminary evaluation, we compared a DenseNet classification model with the YOLOv8 model; the performance of the DenseNet model (AUC: 0.93; accuracy: 0.90) was not superior to that of the YOLOv8 model in the validation set. Despite several limitations, such as lower accuracy in detecting small targets and substantial computational power requirements for feature extraction, YOLO is a powerful object detection algorithm that can be applied in various fields, notably in medical applications encompassing radiology, oncology, and surgery [37]. By rapidly identifying and localizing lesions or anatomical structures, YOLO has significantly improved patient outcomes while reducing diagnosis and treatment times and enhancing the efficiency and accuracy of medical diagnoses and procedures [37]. Recently, a new repository that includes YOLOv8 was introduced for the YOLO model. This repository serves as an integrated framework for training object detection, instance segmentation, and image classification models. YOLOv8 is a recent addition to the YOLO series and stands out as an anchor-free model. Unlike previous versions that rely on anchor box offsets, YOLOv8 directly predicts the centers of objects, resulting in faster NMS. The model provides outputs including box coordinates, confidence scores, and class labels (lesions). Despite the known drawbacks of the YOLO model, YOLOv8 has been used in various medical imaging applications in the field of radiology. In studies involving radiography and MRI, these models have demonstrated high accuracy in detecting conditions such as osteochondritis dissecans in elbow radiographs, identifying foreign objects in chest radiographs, and detecting tumors in brain MRI scans [38,39,40].
In this study, the model trained with all imaging planes exhibited the best performance (AUC: 0.94), while the models trained with a single imaging plane demonstrated relatively lower performance (AUC: 0.68–0.71). Sensitivity, precision, and accuracy were also highest in the model with all-plane training. Although the variation in the number of training images could be a contributing factor, the distinct shapes of tears in different imaging planes might contribute to enhancing the model’s rotator cuff tear detection performance. Furthermore, despite the small difference, the axial plane displayed the highest performance among the single imaging planes. This finding is intriguing, as the coronal or sagittal plane is generally preferred for rotator cuff tear detection, given that supraspinatus tears are the most common and are well visualized in the coronal or sagittal planes [41]. Interpreted differently, axial images may contain more information about rotator cuff tears than conventionally believed. Human readers tend to focus on specific imaging planes when a rotator cuff tear is evident; the deep learning model, however, independently screens all images and assesses the presence of tears. This functionality will assist radiologists in the labor-intensive and time-consuming process of MRI interpretation. In addition, if AI-driven imaging biomarkers for rotator cuff tears can be found in axial planes, this might add value to deep learning research.
Our preliminary study had several limitations. Firstly, we only conducted an internal validation test. Since our dataset comprised routine MRI protocols from various machines and vendors, it exhibited a significant degree of heterogeneity. Nonetheless, external validation using shoulder MRIs from other machines or institutions with concrete reference standards by multiple readers is necessary to validate our results. Additionally, a reader study comparing the model with human experts might also be required. Secondly, while our methods demonstrated good performance in terms of the AUC (0.94), achieving an enhanced specificity score is crucial for clinical applications. These challenges can potentially be addressed through the utilization of larger datasets, diverse augmentations, and algorithm enhancements. Thirdly, we did not specify the anatomical location of the rotator cuff tear, such as whether it affected the supraspinatus, infraspinatus, or subscapularis. Our model was primarily designed to screen for rotator cuff tears in numerous shoulder MRI images, and as such, it did not include the nomination of anatomical locations in its labeling. However, for practical clinical use by general physicians and orthopedic surgeons, specifying the location in addition to detecting the lesion is essential. With the additional detailed labeling or application of an automated anatomic labeling algorithm [42], the next version of the model can provide information about the location and size of the rotator cuff tear. Finally, our model combined both full-thickness and partial-thickness tears under the rotator cuff tear category. Subclassifying tears into full-thickness and partial-thickness categories may be necessary, as clinical decision making can vary based on the tear thickness. To address these issues, further development involving a larger dataset and more detailed labeling that includes the class of tear thickness is warranted. 
Since different grading systems are applied to supraspinatus and subscapularis tears, it might be necessary to take a step-wise approach: initially screening for rotator cuff tears using our preliminary model and subsequently classifying the tear details including the location and thickness using secondary models.
5. Conclusions
Our deep-learning-based automatic rotator cuff tear screening model effectively aided in the detection of rotator cuff tears across all three image planes. With the increasing number of shoulder MRI scans and a growing demand for lesion detection support, a deep learning model can effectively assist in detecting rotator cuff tears.
Conceptualization, K.-S.A., H.-J.P. and Y.C.; data curation, K.-S.A., H.-J.P., Y.-S.K. and S.L.; formal analysis, K.-S.A. and Y.C.; investigation, C.H.K.; methodology, K.-S.A., H.-J.P. and Y.C.; software, H.-J.P., Y.C. and D.K.; supervision, K.-S.A. and C.H.K.; validation, K.-C.L. and Y.C.; visualization, K.-S.A., H.-J.P. and Y.C.; writing—original draft, K.-C.L. and Y.C.; writing—review and editing, K.-S.A. and C.H.K. All authors have read and agreed to the published version of the manuscript.
This study was approved by the Institutional Review Board and Ethics Committee of the Korea University Anam Hospital (IRB number: 2021AN0300).
Informed consent was waived because the data were collected retrospectively and analyzed anonymously.
The raw/processed MR image dataset analyzed in this study is not publicly available.
We thank Hyun Ki Ko for his contributions to this research in terms of data acquisition and preparation.
The authors declare no conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 2. Segmentation of torn rotator cuff tendons on all three imaging planes. The segmentation is performed by drawing freeform lines (red) outlining all rotator cuff tears, including the supraspinatus, infraspinatus, and subscapularis, within all three imaging planes. Multiple areas of rotator cuff tears were segmented separately.
Figure 4. ROC curves for the model using all imaging planes (red) and using only axial (blue), sagittal (green), and coronal (black) images.
Table 1. The number of study participants and image patches.
| Subjects | | Training | Tuning | Testing |
|---|---|---|---|---|
| No RCT | Number of patches | 1511 | 150 | 391 |
| | Axial | 566 | 51 | 152 |
| | Coronal | 362 | 37 | 86 |
| | Sagittal | 583 | 62 | 153 |
| RCT | Number of patches | 6427 | 795 | 1534 |
| | Axial | 753 | 237 | 435 |
| | Coronal | 2415 | 289 | 547 |
| | Sagittal | 2233 | 269 | 552 |
RCT: rotator cuff tear.
Table 2. The performance of the rotator cuff tear detection model for shoulder MRI.
| Model | AUC | Sensitivity | Specificity | Precision | Accuracy | F1 Score |
|---|---|---|---|---|---|---|
| ALL | 0.94 | 98% | 91% | 98% | 96% | 97% |
| Axial | 0.71 | 51% | 100% | 100% | 58% | 68% |
| Sagittal | 0.70 | 72% | 63% | 92% | 70% | 81% |
| Coronal | 0.68 | 48% | 95% | 98% | 55% | 64% |
References
1. Maruvada, S.; Madrazo-Ibarra, A.; Varacallo, M. Anatomy, Rotator Cuff. StatPearls; StatPearls Publishing LLC.: Treasure Island, FL, USA, 2023.
2. Zoga, A.C.; Kamel, S.I.; Hynes, J.P.; Kavanagh, E.C.; O’Connor, P.J.; Forster, B.B. The Evolving Roles of MRI and Ultrasound in First-Line Imaging of Rotator Cuff Injuries. AJR Am. J. Roentgenol.; 2021; 217, pp. 1390-1400. [DOI: https://dx.doi.org/10.2214/AJR.21.25606] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34161130]
3. Yamamoto, A.; Takagishi, K.; Osawa, T.; Yanagawa, T.; Nakajima, D.; Shitara, H.; Kobayashi, T. Prevalence and risk factors of a rotator cuff tear in the general population. J. Shoulder Elbow Surg.; 2010; 19, pp. 116-120. [DOI: https://dx.doi.org/10.1016/j.jse.2009.04.006] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/19540777]
4. Via, A.G.; De Cupis, M.; Spoliti, M.; Oliva, F. Clinical and biological aspects of rotator cuff tears. Muscles Ligaments Tendons J.; 2013; 3, pp. 70-79. [DOI: https://dx.doi.org/10.11138/mltj/2013.3.2.070] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/23888289]
5. Pandey, V.; Jaap Willems, W. Rotator cuff tear: A detailed update. Asia Pac. J. Sports Med. Arthrosc. Rehabil. Technol.; 2015; 2, pp. 1-14. [DOI: https://dx.doi.org/10.1016/j.asmart.2014.11.003] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29264234]
6. Rho, J.Y.; Kwon, Y.S.; Choi, S. Current Concepts and Recent Trends in Arthroscopic Treatment of Large to Massive Rotator Cuff Tears: A Review. Clin. Shoulder Elb.; 2019; 22, pp. 50-57. [DOI: https://dx.doi.org/10.5397/cise.2019.22.1.50]
7. Morag, Y.; Jacobson, J.A.; Miller, B.; De Maeseneer, M.; Girish, G.; Jamadar, D. MR imaging of rotator cuff injury: What the clinician needs to know. RadioGraphics; 2006; 26, pp. 1045-1065. [DOI: https://dx.doi.org/10.1148/rg.264055087]
8. Sharma, G.; Bhandary, S.; Khandige, G.; Kabra, U. MR Imaging of Rotator Cuff Tears: Correlation with Arthroscopy. J. Clin. Diagn. Res.; 2017; 11, pp. TC24-TC27. [DOI: https://dx.doi.org/10.7860/JCDR/2017/27714.9911]
9. Ahn, K.S.; Bae, B.; Jang, W.Y.; Lee, J.H.; Oh, S.; Kim, B.H.; Lee, S.W.; Jung, H.W.; Lee, J.W.; Sung, J. et al. Assessment of rapidly advancing bone age during puberty on elbow radiographs using a deep neural network model. Eur. Radiol.; 2021; 31, pp. 8947-8955. [DOI: https://dx.doi.org/10.1007/s00330-021-08096-1]
10. Lee, K.C.; Choi, I.C.; Kang, C.H.; Ahn, K.S.; Yoon, H.; Lee, J.J.; Kim, B.H.; Shim, E. Clinical Validation of an Artificial Intelligence Model for Detecting Distal Radius, Ulnar Styloid, and Scaphoid Fractures on Conventional Wrist Radiographs. Diagnostics; 2023; 13, 1657. [DOI: https://dx.doi.org/10.3390/diagnostics13091657]
11. Zhang, B.; Jia, C.; Wu, R.; Lv, B.; Li, B.; Li, F.; Du, G.; Sun, Z.; Li, X. Improving rib fracture detection accuracy and reading efficiency with deep learning-based detection software: A clinical evaluation. Br. J. Radiol.; 2021; 94, 20200870. [DOI: https://dx.doi.org/10.1259/bjr.20200870]
12. Saeed, M.U.; Dikaios, N.; Dastgir, A.; Ali, G.; Hamid, M.; Hajjej, F. An automated deep learning approach for spine segmentation and vertebrae recognition using computed tomography images. Diagnostics; 2023; 13, 2658. [DOI: https://dx.doi.org/10.3390/diagnostics13162658] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37627917]
13. Medina, G.; Buckless, C.G.; Thomasson, E.; Oh, L.S.; Torriani, M. Deep learning method for segmentation of rotator cuff muscles on MR images. Skeletal Radiol.; 2021; 50, pp. 683-692. [DOI: https://dx.doi.org/10.1007/s00256-020-03599-2] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32939590]
14. Familiari, F.; Galasso, O.; Massazza, F.; Mercurio, M.; Fox, H.; Srikumaran, U.; Gasparini, G. Artificial intelligence in the management of rotator cuff tears. Int. J. Environ. Res. Public Health; 2022; 19, 16779. [DOI: https://dx.doi.org/10.3390/ijerph192416779]
15. Kim, J.Y.; Ro, K.; You, S.; Nam, B.R.; Yook, S.; Park, H.S.; Yoo, J.C.; Park, E.; Cho, K.; Cho, B.H. et al. Development of an automatic muscle atrophy measuring algorithm to calculate the ratio of supraspinatus in supraspinous fossa using deep learning. Comput. Methods Programs Biomed.; 2019; 182, 105063. [DOI: https://dx.doi.org/10.1016/j.cmpb.2019.105063]
16. Ro, K.; Kim, J.Y.; Park, H.; Cho, B.H.; Kim, I.Y.; Shim, S.B.; Choi, I.Y.; Yoo, J.C. Deep-learning framework and computer assisted fatty infiltration analysis for the supraspinatus muscle in MRI. Sci. Rep.; 2021; 11, 15065. [DOI: https://dx.doi.org/10.1038/s41598-021-93026-w] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34301978]
17. Riem, L.; Feng, X.; Cousins, M.; DuCharme, O.; Leitch, E.B.; Werner, B.C.; Sheean, A.J.; Hart, J.; Antosh, I.J.; Blemker, S.S. A Deep Learning Algorithm for Automatic 3D Segmentation of Rotator Cuff Muscle and Fat from Clinical MRI Scans. Radiol. Artif. Intell.; 2023; 5, e220132. [DOI: https://dx.doi.org/10.1148/ryai.220132] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37035430]
18. Hess, H.; Ruckli, A.C.; Bürki, F.; Gerber, N.; Menzemer, J.; Burger, J.; Schär, M.; Zumstein, M.A.; Gerber, K. Deep-Learning-Based Segmentation of the Shoulder from MRI with Inference Accuracy Prediction. Diagnostics; 2023; 13, 1668. [DOI: https://dx.doi.org/10.3390/diagnostics13101668]
19. Gupta, P.; Haeberle, H.S.; Zimmer, Z.R.; Levine, W.N.; Williams, R.J.; Ramkumar, P.N. Artificial intelligence-based applications in shoulder surgery leaves much to be desired: A systematic review. JSES Rev. Rep. Tech.; 2023; 3, pp. 189-200. [DOI: https://dx.doi.org/10.1016/j.xrrt.2022.12.006]
20. Hahn, S.; Yi, J.; Lee, H.J.; Lee, Y.; Lee, J.; Wang, X.; Fung, M. Comparison of deep learning-based reconstruction of PROPELLER Shoulder MRI with conventional reconstruction. Skeletal Radiol.; 2023; 52, pp. 1545-1555. [DOI: https://dx.doi.org/10.1007/s00256-023-04321-8]
21. Kim, M.; Park, H.M.; Kim, J.Y.; Kim, S.H.; Hoeke, S.; De Neve, W. MRI-based diagnosis of rotator cuff tears using deep learning and weighted linear combinations. Proceedings of the Machine Learning for Healthcare Conference, PMLR 2020; Virtual Event, 7–8 August 2020; pp. 292-308.
22. Sezer, A.; Sezer, H.B. Capsule network-based classification of rotator cuff pathologies from MRI. Comput. Electr. Eng.; 2019; 80, 106480. [DOI: https://dx.doi.org/10.1016/j.compeleceng.2019.106480]
23. Shim, E.; Kim, J.Y.; Yoon, J.P.; Ki, S.Y.; Lho, T.; Kim, Y.; Chung, S.W. Automated rotator cuff tear classification using 3D convolutional neural network. Sci. Rep.; 2020; 10, 15632. [DOI: https://dx.doi.org/10.1038/s41598-020-72357-0] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32973192]
24. Yao, J.; Chepelev, L.; Nisha, Y.; Sathiadoss, P.; Rybicki, F.J.; Sheikh, A.M. Evaluation of a deep learning method for the automated detection of supraspinatus tears on MRI. Skeletal Radiol.; 2022; 51, pp. 1765-1775. [DOI: https://dx.doi.org/10.1007/s00256-022-04008-6] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35190850]
25. Lin, C.C.; Wang, C.N.; Ou, Y.K.; Fu, J. Combined image enhancement, feature extraction, and classification protocol to improve detection and diagnosis of rotator-cuff tears on MR imaging. Magn. Reson. Med. Sci.; 2014; 13, pp. 155-166. [DOI: https://dx.doi.org/10.2463/mrms.2013-0079] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24990467]
26. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI, USA, 21–26 July 2017; pp. 7263-7271.
27. Reis, D.; Kupec, J.; Hong, J.; Daoudi, A. Real-Time Flying Object Detection with YOLOv8. arXiv; 2023; arXiv:2305.09972
28. Zhang, Z. Improved adam optimizer for deep neural networks. Proceedings of the 2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS); Banff, AB, Canada, 4–6 June 2018; pp. 1-2. [DOI: https://dx.doi.org/10.1109/IWQoS.2018.8624183]
29. Gyftopoulos, S.; Lin, D.; Knoll, F.; Doshi, A.M.; Rodrigues, T.C.; Recht, M.P. Artificial Intelligence in Musculoskeletal Imaging: Current Status and Future Directions. AJR Am. J. Roentgenol.; 2019; 213, pp. 506-513. [DOI: https://dx.doi.org/10.2214/AJR.19.21117] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31166761]
30. Kim, Y.; Choi, D.; Lee, K.J.; Kang, Y.; Ahn, J.M.; Lee, E.; Lee, J.W.; Kang, H.S. Ruling out rotator cuff tear in shoulder radiograph series using deep learning: Redefining the role of conventional radiograph. Eur. Radiol.; 2020; 30, pp. 2843-2852. [DOI: https://dx.doi.org/10.1007/s00330-019-06639-1]
31. Lee, K.; Kim, J.Y.; Lee, M.H.; Choi, C.H.; Hwang, J.Y. Imbalanced Loss-Integrated Deep-Learning-Based Ultrasound Image Analysis for Diagnosis of Rotator-Cuff Tear. Sensors; 2021; 21, 2214. [DOI: https://dx.doi.org/10.3390/s21062214]
32. Taghizadeh, E.; Truffer, O.; Becce, F.; Eminian, S.; Gidoin, S.; Terrier, A.; Farron, A.; Büchler, P. Deep learning for the rapid automatic quantification and characterization of rotator cuff muscle degeneration from shoulder CT datasets. Eur. Radiol.; 2021; 31, pp. 181-190. [DOI: https://dx.doi.org/10.1007/s00330-020-07070-7]
33. Goh, C.K.; Peh, W.C. Pictorial essay: Pitfalls in magnetic resonance imaging of the shoulder. Can. Assoc. Radiol. J.; 2012; 63, pp. 247-259. [DOI: https://dx.doi.org/10.1016/j.carj.2011.02.005]
34. Marcon, G.F.; Macedo, T.A. Artifacts and pitfalls in shoulder magnetic resonance imaging. Radiol. Bras.; 2015; 48, pp. 242-248. [DOI: https://dx.doi.org/10.1590/0100-3984.2013.0006]
35. Takeuchi, N.; Kozono, N.; Nishii, A.; Matsuura, K.; Ishitani, E.; Onizuka, T.; Mizuki, Y.; Kimura, T.; Yuge, H.; Uchimura, T. et al. Prevalence and predisposing factors of neuropathic pain in patients with rotator cuff tears. J. Orthop. Sci.; 2023; [DOI: https://dx.doi.org/10.1016/j.jos.2022.10.015] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36609035]
36. Neyton, L.; Daggett, M.; Kruse, K.; Walch, G. The hidden lesion of the subscapularis: Arthroscopically revisited. Arthrosc. Tech.; 2016; 5, pp. e877-e881. [DOI: https://dx.doi.org/10.1016/j.eats.2016.04.010]
37. Qureshi, R.; Ragab, M.G.; Abdulkader, S.J.; Alqushaib, A.; Sumiea, E.H.; Alhussian, H. A Comprehensive Systematic Review of YOLO for Medical Object Detection (2018 to 2023). TechRxiv; 2023; [DOI: https://dx.doi.org/10.36227/techrxiv.23681679.v1]
38. Inui, A.; Mifune, Y.; Nishimoto, H.; Mukohara, S.; Fukuda, S.; Kato, T.; Furukawa, T.; Tanaka, S.; Kusunose, M.; Takigami, S. et al. Detection of elbow OCD in the ultrasound image by artificial intelligence using YOLOv8. Appl. Sci.; 2023; 13, 7623. [DOI: https://dx.doi.org/10.3390/app13137623]
39. Kufel, J.; Bargieł-Łączek, K.; Koźlik, M.; Czogalik, Ł.; Dudek, P.; Magiera, M.; Bartnikowska, W.; Lis, A.; Paszkiewicz, I.; Kocot, S. et al. Chest X-ray Foreign Objects Detection Using Artificial Intelligence. J. Clin. Med.; 2023; 12, 5841. [DOI: https://dx.doi.org/10.3390/jcm12185841] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37762783]
40. Terzi, D.S.; Azginoglu, N. In-Domain Transfer Learning Strategy for Tumor Detection on Brain MRI. Diagnostics; 2023; 13, 2110. [DOI: https://dx.doi.org/10.3390/diagnostics13122110]
41. Longo, U.G.; De Salvatore, S.; Zollo, G.; Calabrese, G.; Piergentili, I.; Loppini, M.; Denaro, V. Magnetic resonance imaging could precisely define the mean value of tendon thickness in partial rotator cuff tears. BMC Musculoskelet. Disord.; 2023; 24, 718. [DOI: https://dx.doi.org/10.1186/s12891-023-06756-5]
42. Kim, H.; Shin, K.; Kim, H.; Lee, E.S.; Chung, S.W.; Koh, K.H.; Kim, N. Can deep learning reduce the time and effort required for manual segmentation in 3D reconstruction of MRI in rotator cuff tears?. PLoS ONE; 2022; 17, e0274075. [DOI: https://dx.doi.org/10.1371/journal.pone.0274075]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
This study aimed to develop a screening model for rotator cuff tear detection in all three planes of routine shoulder MRI using a deep neural network. A total of 794 shoulder MRI scans (374 men and 420 women; aged 59 ± 11 years) were utilized. Three musculoskeletal radiologists labeled the rotator cuff tears. A YOLO v8 rotator cuff tear detection model was then trained, once with all imaging planes simultaneously and once each with axial, coronal, and sagittal images separately. The performances of the models were evaluated and compared using receiver operating characteristic (ROC) curves and the area under the curve (AUC). The AUC was highest when all imaging planes were used (0.94; p < 0.05). Among the single-plane models, the axial plane performed best (AUC: 0.71), followed by the sagittal (AUC: 0.70) and coronal (AUC: 0.68) planes. Sensitivity and accuracy were also highest in the model trained on all planes (0.98 and 0.96, respectively). Thus, deep-learning-based automatic rotator cuff tear detection can be useful for detecting torn areas in various regions of the rotator cuff in all three imaging planes.
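The abstract reports exam-level ROC curves and AUCs derived from a slice-level detector. A minimal sketch of that evaluation step, assuming each exam is scored by the maximum box confidence across all of its slices (the exam IDs, confidence values, and labels below are illustrative placeholders, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical per-slice detection confidences, grouped by exam.
# In the study, slices come from axial, coronal, and sagittal series;
# here each exam's slice scores are simply aggregated with a max.
exam_slice_scores = {
    "exam01": [0.12, 0.08, 0.91],  # tear present
    "exam02": [0.05, 0.11, 0.09],  # no tear
    "exam03": [0.77, 0.65, 0.80],  # tear present
    "exam04": [0.22, 0.85, 0.30],  # no tear (one false-positive slice)
}
labels = {"exam01": 1, "exam02": 0, "exam03": 1, "exam04": 0}

exams = sorted(exam_slice_scores)
y_true = np.array([labels[e] for e in exams])
y_score = np.array([max(exam_slice_scores[e]) for e in exams])

# Exam-level AUC and ROC operating points.
auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUC = {auc:.2f}")
```

The max-over-slices aggregation is one plausible way to turn per-image detections into a per-exam score; a mean or top-k pooling of confidences would work with the same evaluation code.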
Details
1 Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea
2 Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea
3 Institute for Healthcare Service Innovation, College of Medicine, Korea University, Seoul 02841, Republic of Korea;
4 JLK Inc., Seoul 06141, Republic of Korea
5 Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea