© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

Visual acuity (VA) is a measure of the ability to distinguish the shapes and details of objects at a given distance, i.e., of the spatial resolution of the visual system. Vision is a basic health indicator closely related to a person’s quality of life, and VA is one of the first tests performed when an eye disease develops. VA is usually measured with a Snellen chart or E-chart from a fixed distance. However, in some cases, such as when patients are unconscious or suffer from conditions such as dementia, it can be impossible to measure VA with such traditional chart-based methods. This paper presents a machine learning-based methodology that determines VA solely from fundus images. In particular, the VA levels, conventionally divided into 11 levels, are grouped into four classes, and three machine learning models, one SVM model and two CNN models, are combined into an ensemble that predicts the VA class of a fundus image. In a performance evaluation on 4000 randomly selected fundus images, we confirm that our ensemble method achieves an average accuracy of 82.4% over the four VA classes, identifying Class 1 to Class 4 with 88.5%, 58.8%, 88.0%, and 94.3% accuracy, respectively. To the best of our knowledge, this is the first paper to measure VA from fundus images using deep learning.
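The abstract states that one SVM and two CNN models are combined into an ensemble but does not specify the combination rule. A minimal sketch of one common choice, majority voting over the three models' predicted VA classes, is shown below; the function name and the assumption that each model emits a single class label per image are illustrative, not taken from the paper.

```python
from collections import Counter

def ensemble_predict(predictions):
    """Combine per-model class predictions for one fundus image
    by majority vote.

    predictions: list of predicted VA class labels, one per model,
    e.g. [svm_pred, cnn1_pred, cnn2_pred].
    Returns the most frequent label; on a three-way tie the first
    model's prediction wins (Counter preserves insertion order).
    """
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

# Two of the three models agree on VA Class 3, so the vote is 3.
print(ensemble_predict([3, 3, 1]))
```

Other combination rules (e.g., averaging class probabilities) are equally plausible given only the abstract; with three heterogeneous models, majority voting has the advantage of needing no calibrated probability outputs.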

Details

Title
A Deep Learning Ensemble Method to Visual Acuity Measurement Using Fundus Images
Author
Kim, Jin Hyun 1; Jo, Eunah 1; Ryu, Seungjae 1; Nam, Sohee 1; Song, Somin 1; Han, Yong Seop 2; Kang, Tae Seen 2; Lee, Woongsup 1; Lee, Seongjin 1; Kim, Kyong Hoon 3; Choi, Hyunju 4; Lee, Seunghwan 4

1 Department of AI Convergence Engineering, Gyeongsang National University, Jinju 52828, Korea; [email protected] (J.H.K.); [email protected] (E.J.); [email protected] (S.R.); [email protected] (S.N.); [email protected] (S.S.); [email protected] (S.L.)
2 Department of Ophthalmology, Institute of Health Sciences, Gyeongsang National University College of Medicine, Gyeongsang National University Changwon Hospital, Jinju 52828, Korea; [email protected]
3 School of Computer Science and Engineering, Kyungpook National University, Daegu 37224, Korea; [email protected]
4 Deepnoid Inc., Seoul 08376, Korea; [email protected] (H.C.); [email protected] (S.L.)
First page
3190
Publication year
2022
Publication date
2022
Publisher
MDPI AG
e-ISSN
2076-3417
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2642352400