Abstract
Myopic maculopathy (MM), also known as myopic macular degeneration, is the most serious, irreversible, vision-threatening complication of pathologic myopia and a leading cause of visual impairment and blindness. Numerous studies demonstrate that the convolutional neural network (CNN) performs strongly across many applications. Current CNN designs employ a variety of techniques, such as fixed convolutional kernels, the absolute value layer, data augmentation, and domain knowledge, to enhance performance. However, network structure design has received comparatively little attention. The intricacy of the MM categorization and definition system makes it challenging to employ deep learning (DL) technology in the diagnosis of pathologic myopia lesions. To increase the detection precision of MM in the spatial domain, the proposed work first concentrates on designing a novel CNN structure and then improves the convolution kernels in the preprocessing layer. Smaller convolution kernels reduce the number of parameters and model the characteristics of small local regions. Next, the channel correlation of the residuals is exploited through separable convolutions to compress the image features. Then, local features are combined using the spatial pyramid pooling (SPP) technique, whose multi-level pooling improves the representational capacity of the features. The final step in enhancing network performance is data augmentation. The model achieved an accuracy of 95%, an F1-score of 96.5%, and an AUC of 0.92 on the augmented MM-PALM dataset. The paper concludes with a comparative study of various deep learning architectures. The findings highlight that the hybrid CNN with SPP and XgBoost (Depthwise-XgBoost) architecture is the most effective deep learning classification model for automated detection of the four stages of MM.
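The multi-level pooling step described above can be illustrated with a minimal sketch. The function below is not the paper's implementation; it is a generic NumPy version of spatial pyramid pooling, with the pyramid levels (1×1, 2×2, 4×4) chosen as an assumption for illustration. The key property it demonstrates is that feature maps of different spatial sizes are reduced to a fixed-length vector:

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over a pyramid of grids.

    Returns a fixed-length vector of size C * sum(l*l for l in levels),
    independent of the input's spatial dimensions H and W.
    """
    C, H, W = fmap.shape
    pooled = []
    for l in levels:
        # Bin edges covering the full height and width ranges.
        h_edges = np.linspace(0, H, l + 1).astype(int)
        w_edges = np.linspace(0, W, l + 1).astype(int)
        for i in range(l):
            for j in range(l):
                cell = fmap[:, h_edges[i]:h_edges[i + 1],
                            w_edges[j]:w_edges[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))  # per-channel max
    return np.concatenate(pooled)

# Feature maps of different spatial sizes map to the same vector length:
# 8 channels * (1 + 4 + 16) cells = 168 values.
v1 = spatial_pyramid_pool(np.random.rand(8, 13, 17))
v2 = spatial_pyramid_pool(np.random.rand(8, 32, 32))
```

This fixed-length output is what makes SPP features convenient to hand off to a downstream classifier such as XgBoost in a hybrid architecture like the one the paper proposes.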