Abstract
Glaucoma causes irreversible damage to the optic nerve and can lead to permanent loss of vision; it ranks as the second most prevalent cause of permanent blindness. Traditional glaucoma diagnosis requires a highly experienced specialist, costly equipment, and a lengthy wait time. State-of-the-art methods for automatic glaucoma detection include segmentation-based approaches that calculate the cup-to-disc ratio, multi-label segmentation networks, and learning-based methods that rely on hand-crafted features. Localizing the optic disc (OD) is one of the key steps in analyzing retinal images for retinal diseases, especially glaucoma. The approach presented in this study is based on deep classifiers for OD segmentation and glaucoma detection. First, the optic disc is detected by object detection using a Mask Region-Based Convolutional Neural Network (Mask R-CNN); the OD detection task was validated using the Dice score, intersection over union (IoU), and accuracy metrics. The detected OD region is then fed into the second stage for glaucoma detection; restricting the assessment to the OD area reduces the number of classification artifacts. For this task, VGG-16 (Visual Geometry Group), ResNet-18 (Residual Network), and Inception-v3 were pre-trained and fine-tuned. A Support Vector Machine (SVM) classifier was also used; this feature-based method uses region content features obtained with Histograms of Oriented Gradients (HOG) and Gabor filters. The final decision is based on a weighted fusion of all classifier outputs. A comparison of the results obtained from all classification approaches is provided, with classification metrics including accuracy and the ROC curve compared for each method. The novelty of this research project is the integration of automatic OD detection and glaucoma diagnosis into one global method.
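The OD segmentation stage is validated with the Dice score and intersection over union. As a minimal sketch of these two overlap metrics on binary masks (the function names here are illustrative, not taken from the authors' code):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def iou_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union: |A ∩ B| / |A ∪ B| for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

# Illustrative 2x2 masks: one pixel overlaps out of two predicted.
pred = np.array([[1, 1], [0, 0]], dtype=bool)
truth = np.array([[1, 0], [0, 0]], dtype=bool)
print(dice_score(pred, truth))  # 2*1 / (2+1) ≈ 0.667
print(iou_score(pred, truth))   # 1 / 2 = 0.5
```

In practice `pred` would be the Mask R-CNN output mask and `truth` the annotated OD region; the Dice score weights the overlap more generously than IoU, so the two metrics together give a fuller picture of segmentation quality.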
Moreover, the fusion-based decision system combines the glaucoma detection results obtained from the several convolutional deep neural networks and the SVM classifier, which contributes to robust classification results. The method was evaluated on well-known retinal image datasets available for research, including a combined dataset of retinal images with and without pathology; performance was tested on two public datasets and the combined dataset and compared with similar research. The findings show the potential of this methodology for the early detection of glaucoma, which would reduce diagnosis time and increase detection efficiency. The glaucoma assessment achieves about 98% classification accuracy, which is close to, and in some cases higher than, that of state-of-the-art methods. The designed detection model may be used in telemedicine, healthcare, and computer-aided diagnosis systems.
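The weighted-fusion decision described above can be sketched as follows. The weights and per-classifier probabilities below are illustrative placeholders, not the values used in the study:

```python
import numpy as np

def weighted_fusion(probs, weights, threshold=0.5):
    """Fuse per-classifier glaucoma probabilities into one decision.

    probs:   sequence of glaucoma probabilities, one per classifier,
             e.g. from VGG-16, ResNet-18, Inception-v3, and the SVM.
    weights: one non-negative weight per classifier (normalized here).
    Returns (fused_probability, binary_decision).
    """
    p = np.asarray(probs, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                 # normalize so the weights sum to 1
    fused = float(np.dot(w, p))     # weighted average of probabilities
    return fused, fused >= threshold

# Hypothetical outputs from the four classifiers, equally weighted.
fused, is_glaucoma = weighted_fusion(
    probs=[0.9, 0.8, 0.7, 0.6],
    weights=[1.0, 1.0, 1.0, 1.0],
)
print(fused, is_glaucoma)  # 0.75 True
```

Unequal weights would let the fusion favor whichever classifier validates best on held-out data, while the normalization keeps the fused score a valid probability.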