1. Introduction
A corneal ulcer is a defect of the cornea that arises from infection or injury and is a leading cause of ocular morbidity [1,2]. Early identification and differentiation of the various ulcer types reduce the likelihood of vision impairment. Slit-lamp imaging, as used in conventional clinical practice, can be tedious, costly, and time-consuming. Accurate segmentation of corneal ulcers is challenging for several reasons: large discrepancies in the pathological morphologies of point-flaky and flaky corneal ulcers, hazy borders, noise interference, and a shortage of reliable ground-truth annotations for slit-lamp images. Recognizing and quantifying corneal ulcers from ocular staining images therefore requires dedicated segmentation procedures. Because point-flaky mixed corneal ulcers and flaky corneal ulcers vary widely in size and shape, they are difficult to segment in a slit-lamp picture. The lack of high-quality datasets containing both corneal ulcer images and their ground-truth segmentations, on which supervised learning-based segmentation algorithms in particular depend, has hampered the development of such systems [3,4]. Corneal segmentation is the first step in diagnosing and assessing ocular surface damage, and extracting this information from fluorescein images remains a major challenge for specialists. An automated method can assist the specialist by localizing and extracting the corneal ulcer region for further assessment. This paper proposes two methods for corneal ulcer segmentation: one based on image processing techniques and one based on semantic segmentation using deep learning. Section 2 reviews the most recent studies on ulcer segmentation approaches.
2. Review of Related Studies
In 2018, Lijie Deng et al. proposed a pipeline for automatically extracting corneal ulcers from fluorescein staining images using machine learning and image processing techniques. Each image was partitioned using simple linear iterative clustering, a support vector machine discriminated between the two classes, and erosion and dilation operations refined the result. The method achieved a mean accuracy of 98.4%, significantly outperforming Otsu thresholding and active contour techniques; however, the model is only semiautomatic because it relies on manually labeled landmarks [5]. In 2019, Zhenrong Liu et al. developed an automatic pipeline for segmenting flaky corneal ulcers from fluorescein staining images. They combined Gaussian mixture models (GMM) with Otsu thresholding, worked in the HSV color space, and selected the number of Gaussian components using an information-theoretic criterion. The model was validated on 150 images and achieved a Dice similarity coefficient of 0.88 [6].
In 2020, Jessica Loo et al. developed SLIT-Net, an automatic algorithm for segmenting microbial keratitis biomarkers under two illumination conditions. SLIT-Net segments and identifies four pathological regions of interest (ROIs) on diffuse white-light images, one pathological ROI on diffuse blue-light images, and two ROIs on all images. The model was evaluated on manually annotated slit-lamp photographs of 133 eyes using seven-fold cross-validation and achieved Dice coefficients ranging from 0.62 to 0.95 across all ROIs [7]. Also in 2020, Pablo Lima et al. suggested a semiautomatic approach combining supervised machine learning and image processing techniques to segment corneal lesions. They evaluated multi-layer perceptron, SVM, K-nearest neighbors, and random forest classifiers; random forest outperformed the others, achieving a Dice similarity of 0.85 and an accuracy of 99.08% [8]. Finally, Junyan Lyu et al. proposed a transfer-learning-based model for corneal segmentation using 712 images from the publicly available SUSTech-SYSU dataset. The model comprised an encoder-decoder with an Xception feature extractor and atrous spatial pyramid pooling, and it achieved a Dice score of 0.9582, 97.63% accuracy, and 95.37% sensitivity [9].
In 2021, Veena Mayya et al. [10] developed a multi-scale convolutional neural network (MS-CNN) for accurate corneal segmentation. The model consisted of a deep neural pipeline that automatically segments the images, followed by a ResNeXt classifier for differentiation; the authors detected fungal keratitis with 88.96% accuracy using the 133 images of the Loo et al. dataset [7]. Also in 2021, Tingting Wang et al. proposed a Corneal Ulcer Segmentation Network (CU-SegNet) to segment corneal ulcers of different shapes and sizes in fluorescein images, using a U-shaped encoder-decoder structure with two novel modules. Evaluated on the SUSTech-SYSU dataset, the network achieved a Dice coefficient of 0.8914 [11]. To further improve segmentation accuracy, in 2022 the same research group developed a semi-supervised multi-scale self-transformer generative adversarial network (Semi-MsST-GAN) for corneal ulcer segmentation in slit-lamp images. Again evaluated on the SUSTech-SYSU dataset, it achieved better segmentation performance than state-of-the-art CNN-based methods. However, the limited number of slit-lamp images available for training and evaluation remains a limitation of both studies [12].
This paper compares the effectiveness of image processing techniques and deep learning approaches for corneal ulcer region segmentation. Section 3 presents the two proposed methods, while Section 4 illustrates the results and discusses the performance of each method in terms of accuracy, sensitivity, and specificity. Finally, Section 5 is devoted to the conclusion and future work.
3. Materials and Methods
This paper proposes two methods for the automatic segmentation of corneal ulcers: the first is based on image processing techniques and the second on semantic segmentation. The dataset used is the publicly available SUSTech-SYSU database [13,14,15], which consists of 712 fluorescein-stained images of the ocular surface acquired from patients with different levels of corneal ulcer severity. Of these, 354 images have labels in which the corneal ulcer region is localized. The labeled images are used to evaluate both methods and, in addition, to train the deep learning model in the semantic segmentation procedure. The following subsections describe the proposed methods.
3.1. Image Processing with Hough Transform
The first method utilizes the benefits of image processing techniques with the Hough transform to segment the corneal ulcer region. The designed method is shown in Figure 1.
The corneal ulcer region segmentation system proposed in this work is fully automated. Segmentation of the corneal ulcer regions from the whole RGB eye image undergoes several stages. First, the image is subjected to a preprocessing stage that excludes most of the unwanted details, particularly the specular reflection region. This is performed by taking the blue channel of the image, squaring its pixel values, and binarizing the output. Next, a morphological closing operation is applied, followed by calculating its complement, as illustrated in Figure 2 for one example image from the corneal ulcer dataset.
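As an illustrative sketch only (the pipeline in this paper was implemented in MATLAB), the following Python/OpenCV snippet reproduces the described specular-reflection masking step under stated assumptions: the function name specular_reflection_mask, the use of Otsu's method for binarization, and the structuring-element radius are hypothetical choices that the text does not specify.

```python
import cv2
import numpy as np

def specular_reflection_mask(rgb, close_radius=7):
    """Build a mask that suppresses specular reflections: square the blue
    channel to emphasize bright highlights, binarize, close, then complement."""
    blue = rgb[:, :, 2].astype(np.float32) / 255.0        # assumes an RGB-ordered array
    squared = (blue ** 2 * 255).astype(np.uint8)          # squaring stretches bright pixels
    # Binarize with Otsu's threshold (assumed; the paper does not give the threshold rule)
    _, binary = cv2.threshold(squared, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(
        cv2.MORPH_ELLIPSE, (2 * close_radius + 1, 2 * close_radius + 1))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # morphological closing
    return cv2.bitwise_not(closed)                        # complement: 255 where NOT reflection
```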
The binary image shown in Figure 2b was then multiplied by the green channel of the original color image after smoothing with a Gaussian filter, which gives the output shown in Figure 3a. The pixel values are then squared and binarized to give the image shown in Figure 3b.
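Continuing the same illustrative Python sketch, this green-channel masking step could look as follows; again, the function name, the smoothing sigma, and the Otsu binarization are assumptions rather than values taken from the paper.

```python
import cv2
import numpy as np

def enhance_stained_region(rgb, reflection_free_mask, sigma=3.0):
    """Smooth the green channel, keep only non-reflection pixels,
    then square and binarize to highlight the fluorescein-stained area."""
    green = rgb[:, :, 1].astype(np.float32) / 255.0            # assumes an RGB-ordered array
    smoothed = cv2.GaussianBlur(green, (0, 0), sigma)          # kernel size derived from sigma
    masked = smoothed * (reflection_free_mask > 0)             # multiply by the binary mask (cf. Figure 2b)
    squared = (masked ** 2 * 255).astype(np.uint8)             # square to stretch the bright stain
    _, binary = cv2.threshold(squared, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return masked, binary                                      # analogous to Figure 3a and Figure 3b
```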
Next, an ellipse mask is designed whose semi-minor and semi-major axes and centroid coordinates are similar to those of the binary image shown in Figure 3b. This mask, shown in Figure 4a, is used to exclude most of the remaining unwanted details by multiplying it with the binary image of Figure 3b, which gives the image shown in Figure 4b. The final step of the preprocessing stage is a thinning operation on the image of Figure 4b, which gives the image shown in Figure 5.
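A possible sketch of the ellipse masking and thinning steps is shown below; in practice the ellipse parameters would be estimated from the binary image, and the helper name ellipse_mask as well as the use of scikit-image thinning are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np
from skimage.morphology import thin

def ellipse_mask(shape, center, semi_major, semi_minor):
    """Filled binary ellipse with the given centroid and semi-axes (in pixels)."""
    mask = np.zeros(shape[:2], dtype=np.uint8)
    cv2.ellipse(mask, (int(center[0]), int(center[1])),
                (int(semi_major), int(semi_minor)), 0, 0, 360, 255, thickness=-1)
    return mask

# Hypothetical usage: restrict the binary stain image to the ellipse region,
# then thin the result to a one-pixel-wide contour for shape recognition.
# restricted = cv2.bitwise_and(stain_binary, stain_binary, mask=ellipse)
# contour = thin(restricted > 0)      # boolean thinning, analogous to Figure 5
```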
In general, the eye contour extracted in Figure 5 is not sufficiently accurate because of the many details in the eye image. To obtain a better delineation of the eye border, we performed the second stage: eye border recognition using a suitable mathematical model of the eye border together with a suitable recognition algorithm. The Hough transform was used as the parametric shape recognition algorithm, and the eye border parametric shape was generated using a closed-form expression introduced by Johan Gielis, namely the Superformula [16]. It models a family of curves called Gielis curves, described in polar coordinates (r, θ) by

$$ r(\theta) = \left( \left| \frac{\cos(m\theta/4)}{a} \right|^{n_2} + \left| \frac{\sin(m\theta/4)}{b} \right|^{n_3} \right)^{-1/n_1} \tag{1} $$
where r is the radial distance to the origin, θ is the polar angle, and the rational number m sets the order of rotational symmetry. The exponents n1, n2, and n3, together with the parameter m, allow a great degree of freedom and enable the Superformula to represent several useful shapes. The parameters chosen to mimic the eye border are 1, 1, 1, and 2 for n1, n2, n3, and m, respectively, which gives the shape shown in Figure 6a. To determine the iris region, where the cornea is positioned directly in front of the iris and pupil, a disk is designed with a diameter and centroid equal to the semi-minor axis and center of the recognized eye shape, respectively, as shown in Figure 6b. Applying this concept to the eye border and cornea region of the adopted corneal ulcer image sample gives the outputs shown in Figure 7 and Figure 8, respectively. Next, the ulcer region of interest is isolated by multiplying the mask shown in Figure 2b with the image shown in Figure 9a to obtain the image shown in Figure 9b. The pixel values of the green channel of the image in Figure 9b are squared and binarized, yielding the image shown in Figure 10a. The resulting mask segments are then tested: if a segment is connected to the recognized eye border and its semi-major to semi-minor axis ratio is greater than a predefined threshold, it is considered an accumulation of fluorescein stain at the eyelids and is excluded from the final ulcer regions, as shown in Figure 10b. Finally, the original image is masked with the remaining mask segments, giving the result shown in Figure 11.
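As a brief illustration, the Gielis template of Equation (1) can be sampled with NumPy as in the sketch below; the function name gielis_curve and the sampling density are hypothetical, and the generalized Hough matching that registers this template to the thinned eye contour is not shown here.

```python
import numpy as np

def gielis_curve(m=2, n1=1.0, n2=1.0, n3=1.0, a=1.0, b=1.0, num=720):
    """Sample the Superformula curve of Equation (1) in Cartesian coordinates.
    The defaults correspond to the eye-border parameters used in this paper
    (n1 = n2 = n3 = 1 and m = 2)."""
    theta = np.linspace(0.0, 2.0 * np.pi, num)
    r = (np.abs(np.cos(m * theta / 4.0) / a) ** n2 +
         np.abs(np.sin(m * theta / 4.0) / b) ** n3) ** (-1.0 / n1)
    # Scaled, rotated, and translated copies of (x, y) can act as templates
    # when searching the thinned contour with a (generalized) Hough transform.
    return r * np.cos(theta), r * np.sin(theta)
```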
3.2. Semantic Segmentation
The second method that is proposed in this paper is semantic segmentation. Figure 12 demonstrates the steps for automated segmentation using a deep learning model.
As shown in Figure 12, the system splits the dataset (images and their labels) into training and test partitions. The pre-trained convolutional network used in this paper is ResNet-18 [15,17]. The pre-trained CNN model was fine-tuned on the training data and evaluated on the test data.
Semantic segmentation assigns image pixels to one or more semantically interpretable classes rather than to individual real-world objects. Region proposal and annotation is the process of categorizing pixel values into distinct groups using a CNN; candidate object patches, small groups of pixels that most likely belong to the same object, serve as the region proposals.
The semantic segmentation procedure starts with an encoder network followed by a decoder network. The encoder is typically a pre-trained network such as a ResNet. The variant used in this paper is ResNet-18, a member of the ResNet family that won the ImageNet (ILSVRC) 2015 competition and is well known for its depth and use of residual blocks [18]. These blocks are essential for overcoming the degradation problem in training deep networks by introducing identity skip connections, which allow layers to pass their inputs directly to later layers [19].
To create a segmentation map, the encoder may be a convolutional neural network and the decoder may be based on deconvolution (transposed convolution) layers [20,21]. Figure 13 describes the semantic segmentation procedure, which is based mainly on a deep learning approach [22]: the input image passes through a trained deep learning model, ending with the localization of the ulcer region.
The pre-trained ResNet-18 was used, and the data were divided into 70% training and 30% testing. The images were resized to 224 × 224 × 3 to match the input size of the first layer of ResNet-18. The model was trained in MATLAB® on a single CPU. The hyper-parameters were the Adam optimizer with an initial learning rate of 0.0001, a mini-batch size of 32, and a maximum of 50 epochs.
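The model itself was built in MATLAB, so the PyTorch sketch below is only an approximate, hedged equivalent: the class UlcerSegNet, its transposed-convolution decoder, and the train helper are hypothetical stand-ins, while the quantities taken from the text are the ResNet-18 encoder, the 224 × 224 × 3 input size, the Adam optimizer, the 0.0001 initial learning rate, the mini-batch size of 32, and the 50 epochs.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class UlcerSegNet(nn.Module):
    """Hypothetical encoder-decoder: ImageNet-pretrained ResNet-18 encoder
    followed by transposed convolutions that upsample back to 224 x 224."""
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = resnet18(weights="IMAGENET1K_V1")
        # keep conv1 ... layer4 and drop the average-pool / fully connected head
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])   # B x 512 x 7 x 7
        layers, chans = [], [512, 256, 128, 64, 32]
        for c_in, c_out in zip(chans[:-1], chans[1:]):                   # 7 -> 14 -> 28 -> 56 -> 112
            layers += [nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
                       nn.BatchNorm2d(c_out), nn.ReLU(inplace=True)]
        layers += [nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1)]  # 112 -> 224
        self.decoder = nn.Sequential(*layers)

    def forward(self, x):
        return self.decoder(self.encoder(x))                 # per-pixel class scores

def train(model, loader, epochs=50, lr=1e-4):
    """Training loop with the reported hyper-parameters; `loader` yields
    (B x 3 x 224 x 224 float images, B x 224 x 224 long class-id masks)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, masks in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()
```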
4. Results and Discussion
Both methods are applied to the whole dataset, trained, validated, and tested to localize ulcer regions in the cornea.
4.1. Image Processing and Hough Transform
The method was applied to the whole dataset. Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18 depict some of the obtained results for different shapes of ulcer regions. Each figure illustrates the original image, the segmentation output, and the corresponding ground truth.
The examples in Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18 illustrate the output of the first proposed method. All figures demonstrate the ability of the proposed method to localize the ulcer region with high similarity to the ground truth. Similarity indices, namely the Jaccard similarity index and the intersection over union (IoU), are calculated for each case. The similarity indices are close to 100% for all presented images except the one in Figure 16. As shown in Figure 16, the method detected an ulcer-like region at the bottom of the eye that is not present in the ground truth; in this case, the Jaccard and IoU indices are very low. However, the proposed method may be capable of distinguishing ulcer regions from other eye regions better than manual segmentation.
4.2. Semantic Segmentation
After training the model on 70% of the whole dataset, accuracy, sensitivity, and specificity were calculated for the training and test stages. The accuracy is the percentage of correctly classified pixels out of all pixels. Table 1 reports the sensitivity, accuracy, and specificity of the semantic deep learning segmentation for both the training and test stages [23,24,25,26,27,28,29,30,31].
The proposed method is applied to the dataset. Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23 illustrate the output of the deep learning model. Each figure shows the original image, the ulcer region localized by the deep learning model, and the corresponding ground truth.
Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23 illustrate how sensitive the model is to the ulcer region. In addition, the time required for each test image is less than 1 s, implying that the second proposed method is accurate, sensitive, and fast after building the AI model.
The two methods are compared in terms of sensitivity, accuracy, specificity, Jaccard index, and Dice similarity. The Jaccard index is the number of correctly classified ulcer pixels (the intersection of the predicted and ground-truth regions) divided by their union; it is also known as the intersection over union (IoU) [31]:

$$ \text{Jaccard} = \text{IoU} = \frac{|A \cap B|}{|A \cup B|} = \frac{TP}{TP + FP + FN} $$

where A is the ground-truth ulcer mask, B is the predicted mask, and TP, FP, and FN are the numbers of true-positive, false-positive, and false-negative pixels.
On the other hand, the Dice similarity is defined as twice the area of intersection divided by the sum of the numbers of predicted and ground-truth pixels; it is equivalent to the F1 score [31]:

$$ \text{Dice} = F_1 = \frac{2\,|A \cap B|}{|A| + |B|} = \frac{2\,TP}{2\,TP + FP + FN} $$
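For clarity, the sketch below computes all of the reported pixel-wise metrics from a predicted and a ground-truth binary mask; it is illustrative Python/NumPy rather than the evaluation code used in the study, and the function name segmentation_metrics is hypothetical.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise metrics for two binary masks (1 = ulcer, 0 = background)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()        # ulcer pixels found
    tn = np.logical_and(~pred, ~truth).sum()      # background pixels found
    fp = np.logical_and(pred, ~truth).sum()       # false alarms
    fn = np.logical_and(~pred, truth).sum()       # missed ulcer pixels
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "jaccard_iou": tp / (tp + fp + fn),
        "dice_f1":     2 * tp / (2 * tp + fp + fn),
    }
```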
All evaluation metrics are computed on the same test data, which comprises 30% of the whole dataset (107 images). Table 2 reports the performance of each method on these images.
Table 2 summarizes the results of both methods and highlights the benefit of deep learning over traditional image processing tools. In terms of accuracy, specificity, Jaccard (IoU) similarity, and Dice similarity, the second approach scores higher than the first, although its margin in Jaccard similarity is very small; however, it is less sensitive than the first method. This is because a deep learning approach needs a large dataset to become robust and highly sensitive by optimizing its training parameters. On the other hand, the second approach is much faster: the first method requires almost 30 s to detect the ulcer region, whereas the second needs about 1 s per test image. Therefore, the second method is a promising approach for ulcer segmentation in the medical field, although building a sensible and reliable model requires training the semantic model on a large dataset.
Figure 24 summarizes the performance of each method. Both methods are effective, as shown in the corresponding figures, and their IoU and Dice similarity values are almost the same. Based on the experiments carried out in this paper, segmenting the ulcer in a single image takes about 1 s with the AI model, whereas the image processing method needs about 30 s.
This study is also compared with the literature that used the same dataset. Table 3 lists the performance of both proposed methods and of previous studies in terms of accuracy, sensitivity, specificity, and Dice index.
As illustrated in Table 3, both proposed methods are effective for ulcer detection.
5. Conclusions
A corneal ulcer is a common corneal disease that causes ocular morbidity due to injury or infection by bacteria, viruses, or parasites. Early diagnosis of an ulcer decreases the chance of vision impairment. Employing slit-lamp imaging techniques in clinics can be tedious, expensive, and time-consuming, and the localization of ulcer regions in slit-lamp images affects the quality of the diagnosis.
Manual detection requires highly experienced physicians and is not always accurate. Automated segmentation of the corneal ulcer region improves the assessment process and helps achieve an accurate diagnosis.
This paper proposed two methods to extract the ulcer region automatically. The first approach uses image processing techniques with the Hough transform to localize the corneal ulcer-affected segment; the second is based on deep learning. The two methods were evaluated using the following performance metrics: accuracy, sensitivity, specificity, Jaccard similarity, Dice similarity, and IoU. The results show that both methods are effective, with the deep learning approach being more accurate than image processing; however, the image processing method is more sensitive to ulcer regions, whereas the deep learning method has higher specificity. This study recommends exploiting the properties of both image processing algorithms and artificial intelligence (AI) to guide residents in extracting the affected ulcer region.
The sensitivity of the AI model can be enhanced by using a larger dataset to obtain a more sensitive, reliable, and robust model. Both approaches facilitate choosing appropriate treatment based on the assessment report, which decreases the probability of progression to visual impairment.
Author Contributions: Conceptualization, I.A.Q., H.A., Y.A.-I. and M.A.; methodology, I.A.Q., H.A., A.Z., Y.A.-I. and M.A.; software, I.A.Q., H.A., M.A. and A.Z.; validation, H.A., Y.A.-I., W.A.M. and M.A.; formal analysis, Y.A.-I., H.A. and M.A.; writing—original draft preparation, I.A.Q., Y.A.-I., H.A. and W.A.M.; writing—review and editing, I.A.Q., H.A., W.A.M., M.A., Y.A.-I. and A.Z.; visualization, H.A. and A.Z.; supervision, I.A.Q., H.A. and W.A.M.; project administration, I.A.Q., H.A., W.A.M. and Y.A.-I. All authors have read and agreed to the published version of the manuscript.
The dataset analyzed in this study was derived from the following public domain resource: the SUSTech-SYSU dataset. Available online:
The authors would like to thank the authors of the dataset for making it available online.
The authors declare that they have no conflict of interest to report regarding the present study.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 3. (a) The green-channel smoothed image after masking; (b) the binarized image of (a).
Figure 4. (a) The ellipse mask; (b) the binarized image of Figure 3b after closing and masking with the ellipse mask.
Figure 6. (a) The eye border model generated with the Superformula using the chosen parameters; (b) the enclosed disk that separates the cornea region.
Figure 7. (a) The eye border recognition using the Superformula shape model and the Hough transform; (b) the recognized eye border overlaid on the original image for illustration.
Figure 8. (a) The filled recognized eye border in the adopted example; (b) the enclosed disk used as a mask to separate the cornea region.
Figure 9. (a) Separation of the cornea region using the recognized disk mask; (b) the separated cornea region after masking with the specular reflection mask.
Figure 10. (a) The mask of two potential corneal ulcer segments; (b) the mask segment that is connected to the recognized eye border and whose semi-major to semi-minor axis ratio exceeds the predefined threshold is excluded.
Figure 11. (a) The original image; (b) the original image after masking with the corneal ulcer mask shown in Figure 10b.
Figure 14. Example 1: Comparison between the segmentation output and the ground truth. (a) Original image; (b) the segmentation of the ulcer region using the first proposed method; and (c) the ground truth of the corresponding input image.
Figure 15. Example 2: Comparison between the segmentation output and the ground truth. (a) Original image; (b) the segmentation of the ulcer region using the first proposed method; and (c) the ground truth of the corresponding input image.
Figure 16. Example 3: Comparison between the segmentation output and the ground truth. (a) Original image; (b) the segmentation of the ulcer region using the first proposed method; and (c) the ground truth of the corresponding input image.
Figure 17. Example 4: Comparison between the segmentation output and the ground truth. (a) Original image; (b) the segmentation of the ulcer region using the first proposed method; and (c) the ground truth of the corresponding input image.
Figure 18. Example 5: Comparison between the segmentation output and the ground truth. (a) Original image; (b) the segmentation of the ulcer region using the first proposed method; and (c) the ground truth of the corresponding input image.
Figure 19. Example 1: Semantic segmentation approach. (a) The original image; (b) the segmentation output; and (c) the ground truth of the corresponding input image.
Figure 20. Example 2: Semantic segmentation approach. (a) The original image; (b) the segmentation output; and (c) the ground truth of the corresponding input image.
Figure 21. Example 3: Semantic segmentation approach. (a) The original image; (b) the segmentation output; and (c) the ground truth of the corresponding input image.
Figure 22. Example 4: Semantic segmentation approach. (a) The original image; (b) the segmentation output; and (c) the ground truth segment.
Figure 23. Example 5: Semantic segmentation approach. (a) The original image; (b) the segmentation output; and (c) the ground truth segment.
Table 1. Performance of semantic deep learning segmentation.

| Phase | Global Accuracy | Specificity | Sensitivity |
|---|---|---|---|
| Training Phase | 99.75% | 99.84% | 96.77% |
| Test Phase | 98.8% | 99.3% | 83.5% |
Table 2. Comparison between the two proposed methods over the test dataset (30% of whole data).

| Method | Global Accuracy | Specificity | Sensitivity | Jaccard Similarity | Dice Similarity |
|---|---|---|---|---|---|
| Image Processing Techniques Method | 98.7% | 63.4% | 99.4% | 98.64% | 98.9% |
| Deep Learning Method | 98.8% | 99.3% | 83.5% | 98.655% | 99.3% |
Table 3. Comparison of the proposed methods with previous studies.

| Study | Accuracy | Sensitivity | Specificity | Dice Index |
|---|---|---|---|---|
| [10] | 88.96% | 90.67% | 87.57% | 88.01% |
| [11] | - | 89.65% | 99.7% | 89.14% |
| [ ] | - | 91.9% | 90.93% | |
| This Study (1st method) | 97.97% | 99.8% | 63.4% | |
| This Study (2nd method) | 98.9% | 83.5% | 99.3% | |
References
1. Alhajraf, K.; Lin, S.R.; Jacobs, D.S. A corneal ring ulcer. Am. J. Ophthalmol. Case Rep.; 2020; 20, 100856. [DOI: https://dx.doi.org/10.1016/j.ajoc.2020.100856] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32875150]
2. Mansoor, H.; Tan, H.C.; Lin, M.T.-Y.; Mehta, J.S.; Liu, Y.-C. Diabetic Corneal Neuropathy. J. Clin. Med.; 2020; 9, 3956. [DOI: https://dx.doi.org/10.3390/jcm9123956] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33291308]
3. Akram, A.; Debnath, R. An Efficient Automated Corneal Ulcer Detection Method using Convolutional Neural Network. Proceedings of the 2019 22nd International Conference on Computer and Information Technology (ICCIT); Dhaka, Bangladesh, 18–20 December 2019; [DOI: https://dx.doi.org/10.1109/ICCIT48885.2019.9038389]
4. Im, J.; Kim, D. Corneal Ulcers Detection Using Random Seed Appointment Algorithm. J. Inst. Electron. Inf. Eng.; 2019; 56, pp. 53-66. [DOI: https://dx.doi.org/10.5573/ieie.2019.56.9.53]
5. Deng, L.; Huang, H.; Yuan, J.; Tang, X. Superpixel-based automatic segmentation of corneal ulcers from ocular staining images. Proceedings of the 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP); Shanghai, China, 19–21 November 2018; pp. 1-5.
6. Liu, Z.; Shi, Y.; Zhan, P.; Zhang, Y.; Gong, Y.; Tang, X. Automatic corneal ulcer segmentation combining Gaussian mixture modeling and Otsu method. Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); Berlin, Germany, 23–27 July 2019; pp. 6298-6301.
7. Loo, J.; Kriegel, M.F.; Tuohy, M.M.; Kim, K.H.; Prajna, V.; Woodward, M.A.; Farsiu, S. Open-source automatic segmentation of ocular structures and biomarkers of microbial keratitis on slit-lamp photography images using deep learning. IEEE J. Biomed. Health Inform.; 2020; 25, pp. 88-99. [DOI: https://dx.doi.org/10.1109/JBHI.2020.2983549]
8. Lima, P.V.; de MSVeras, R.; Vogado, L.H.; Portela, H.M.; de Almeida, J.D.; Aires, K.R.; Leite, D. A semiautomatic segmentation approach to corneal lesions. Comput. Electr. Eng.; 2020; 84, 106625. [DOI: https://dx.doi.org/10.1016/j.compeleceng.2020.106625]
9. Lyu, J.; Qiu, J.; Deng, L.; Zhang, Y.; Ye, T.T.T.; Tang, X. Transfer Learning for Automatic Cornea Segmentation based on Ocular Staining Images. Proceedings of the Fourth International Symposium on Image Computing and Digital Medicine; Shenyang China, 5–7 December 2020; pp. 108-111.
10. Mayya, V.; Kamath Shevgoor, S.; Kulkarni, U.; Hazarika, M.; Barua, P.D.; Acharya, U.R. Multi-scale convolutional neural network for accurate corneal segmentation in early detection of fungal keratitis. J. Fungi; 2021; 7, 850. [DOI: https://dx.doi.org/10.3390/jof7100850]
11. Wang, T.; Zhu, W.; Wang, M.; Chen, Z.; Chen, X. Cu-Segnet: Corneal Ulcer Segmentation Network. Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI); Nice, France, 13–16 April 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1518-1521.
12. Wang, T.; Wang, M.; Zhu, W.; Wang, L.; Chen, Z.; Peng, Y.; Chen, X. Semi-MsST-GAN: A Semi-Supervised Segmentation Method for Corneal Ulcer Segmentation in Slit-Lamp Images. Front. Neurosci.; 2021; 15, 1705. [DOI: https://dx.doi.org/10.3389/fnins.2021.793377]
13. Deng, L.; Lyu, J.; Huang, H.; Deng, Y.; Yuan, J.; Tang, X. The SUSTech-SYSU dataset for automatically segmenting and classifying corneal ulcers. Sci. Data; 2020; 7, 23. [DOI: https://dx.doi.org/10.1038/s41597-020-0360-7]
14. Wang, Z.; Lyu, J.; Luo, W.; Tang, X. Adjacent Scale Fusion and Corneal Position Embedding for Corneal Ulcer Segmentation. Ophthalmic Medical Image Analysis. OMIA 2021. Lecture Notes in Computer Science; Fu, H.; Garvin, M.K.; MacGillivray, T.; Xu, Y.; Zheng, Y. Springer: Cham, Switzerland, 2021; Volume 12970.
15. Alquran, H.; Al-Issa, Y.; Alsalatie, M.; Mustafa, W.A.; Qasmieh, I.A.; Zyout, A. Intelligent Diagnosis and Classification of Keratitis. Diagnostics; 2022; 12, 1344. [DOI: https://dx.doi.org/10.3390/diagnostics12061344]
16. Gielis, J. A generic geometric transformation that unifies a wide range of natural and abstract shapes. Am. J. Bot.; 2003; 90, pp. 333-338. [DOI: https://dx.doi.org/10.3732/ajb.90.3.333]
17. Alquran, H.; Mustafa, W.A.; Qasmieh, I.A.; Yacob, Y.M.; Alsalatie, M.; Al-Issa, Y.; Alqudah, A.M. Cervical Cancer Classification Using Combined Machine Learning and Deep Learning Approach. CMC-Comput. Mater. Contin.; 2022; 72, pp. 5117-5134. [DOI: https://dx.doi.org/10.32604/cmc.2022.025692]
18. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition; Las Vegas, NV, USA, 27–30 June 2016; pp. 770-778.
19. Zhou, Q.; Zhu, W.; Li, F.; Yuan, M.; Zheng, L.; Liu, X. Transfer Learning of the ResNet-18 and DenseNet-121 Model Used to Diagnose Intracranial Hemorrhage in CT Scanning. Curr. Pharm. Des.; 2022; 28, pp. 287-295. [DOI: https://dx.doi.org/10.2174/1381612827666211213143357] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34961458]
20. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. Proceedings of the European Conference on Computer Vision (ECCV); Munich, Germany, 8–14 September 2018; pp. 801-818.
21. Brostow, G.J.; Fauqueur, J.; Cipolla, R. Semantic object classes in video: A high-definition ground truth database. Pattern Recognit. Lett.; 2009; 30, pp. 88-97. [DOI: https://dx.doi.org/10.1016/j.patrec.2008.04.005]
22. Shah, M. Semantic Segmentation Using Fully Convolutional Networks Over the Years. Meet Shah Blog Website. 2017; Available online: https://meetshah1995.github.io/semantic-segmentation/deep-learning/pytorch/visdom/2017/06/01/semantic-segmentation-over-the-years.html (accessed on 15 September 2022).
23. Madani, A.; Namazi, B.; Altieri, M.S.; Hashimoto, D.A.; Rivera, A.M.; Pucher, P.H.; Alseidi, A. Artificial intelligence for intraoperative guidance: Using semantic segmentation to identify surgical anatomy during laparoscopic cholecystectomy. Ann. Surg.; 2022; 276, pp. 363-369. [DOI: https://dx.doi.org/10.1097/SLA.0000000000004594] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33196488]
24. Irfan, R.; Almazroi, A.A.; Rauf, H.T.; Damaševičius, R.; Nasr, E.A.; Abdelgawad, A.E. Dilated semantic segmentation for breast ultrasonic lesion detection using parallel feature fusion. Diagnostics; 2021; 11, 1212. [DOI: https://dx.doi.org/10.3390/diagnostics11071212]
25. Khalifa, N.E.M.; Manogaran, G.; Taha, M.H.N.; Loey, M. A deep learning semantic segmentation architecture for COVID-19 lesions discovery in limited chest CT datasets. Expert Syst.; 2022; 39, e12742. [DOI: https://dx.doi.org/10.1111/exsy.12742]
26. Tiwari, T.; Saraswat, M. A new modified-unet deep learning model for semantic segmentation. Multimed. Tools Appl.; 2022; pp. 1-21. [DOI: https://dx.doi.org/10.1007/s11042-022-13230-2]
27. Ruiz-Santaquiteria, J.; Bueno, G.; Deniz, O.; Vallez, N.; Cristobal, G. Semantic versus instance segmentation in microscopic algae detection. Eng. Appl. Artificial Intell.; 2020; 87, 103271. [DOI: https://dx.doi.org/10.1016/j.engappai.2019.103271]
28. Sambyal, N.; Saini, P.; Syal, R.; Gupta, V. Modified U-Net architecture for semantic segmentation of diabetic retinopathy images. Biocybern. Biomed. Eng.; 2020; 40, pp. 1094-1109. [DOI: https://dx.doi.org/10.1016/j.bbe.2020.05.006]
29. Kar, J.; Cohen, M.V.; McQuiston, S.P.; Malozzi, C.M. A deep-learning semantic segmentation approach to fully automated MRI-based left-ventricular deformation analysis in cardiotoxicity. Magn. Reson. Imaging; 2021; 78, pp. 127-139. [DOI: https://dx.doi.org/10.1016/j.mri.2021.01.005]
30. Nurmaini, S.; Tama, B.A.; Rachmatullah, M.N.; Darmawahyuni, A.; Sapitri, A.I.; Firdaus, F.; Tutuko, B. An improved semantic segmentation with region proposal network for cardiac defect interpretation. Neural Comput. Appl.; 2022; 3, pp. 13937-13950. [DOI: https://dx.doi.org/10.1007/s00521-022-07217-1]
31. Harkat, H.; Nascimento, J.; Bernardino, A. Fire segmentation using a DeepLabv3+ architecture. Image and Signal Processing for Remote Sensing XXVI; SPIE: Bellingham, WA, USA, 2020; Volume 11533, pp. 134-145.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Corneal ulcers are among the most common eye diseases. They result from various infections caused by bacteria, viruses, or parasites and may lead to ocular morbidity and visual disability. Early detection therefore reduces the probability of progression to visual impairment. One of the most common techniques for corneal ulcer screening is slit-lamp imaging. This paper proposes two highly accurate automated systems to localize the corneal ulcer region: an image processing approach based on the Hough transform and a deep learning approach. The two methods are validated and tested on the publicly available SUSTech-SYSU database, and their accuracy is evaluated and compared. Both systems achieve an accuracy of more than 90%; however, the deep learning approach is more accurate than the traditional image processing techniques, reaching 98.9% accuracy and a Dice similarity of 99.3%. On the other hand, the first method requires no explicit training model or parameter optimization. Both approaches can perform well in the medical field, and the first has an advantage over the deep learning model in that the latter needs a large training dataset to build reliable software for clinics. The two proposed methods help physicians assess the corneal ulcer level and improve treatment efficiency.
1 Biomedical Systems and Medical Informatics Engineering, Yarmouk University, Irbid 21163, Jordan
2 Department of Computer Engineering, Yarmouk University, Irbid 21163, Jordan
3 Faculty of Electrical Engineering & Technology, Campus Pauh Putra, Universiti Malaysia Perlis (UniMAP), Arau, Perlis 02600, Malaysia; Advanced Computing (AdvComp), Centre of Excellence (CoE), Campus Pauh Putra, Universiti Malaysia Perlis (UniMAP), Arau, Perlis 02600, Malaysia
4 The Institute of Biomedical Technology, King Hussein Medical Center, Royal Jordanian Medical Service, Amman 11855, Jordan