Full Text



Abstract

Traditional screening methods for Mild Cognitive Impairment (MCI) face limitations in accessibility and scalability. To address this, we developed and validated a speech-based automatic screening app implementing three speech–language tasks, built with a user-centered design and a server–client architecture. The app integrates automated speech processing and SVM classifiers for MCI detection. Functionality validation included comparison with manual assessment and testing in real-world settings (n = 12), with user engagement evaluated separately (n = 22). The app performed comparably to manual assessment (F1 = 0.93 vs. 0.95) and remained reliable in real-world settings (F1 = 0.86). Task engagement significantly influenced speech patterns: users rating tasks as “most interesting” produced more speech content (p < 0.05), although behavioral observations showed consistent cognitive processing across perception groups. User engagement analysis revealed high technology acceptance (86%) across educational backgrounds, with daily cognitive exercise habits significantly predicting task benefit perception (H = 9.385, p < 0.01). Notably, perceived task difficulty showed no significant correlation with cognitive performance (p = 0.119), suggesting the system is accessible to users of varying abilities. While preliminary, the mobile app demonstrated both robust assessment capabilities and sustained user engagement, suggesting the potential viability of widespread cognitive screening in the geriatric population.
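The abstract describes a pipeline in which automated speech processing feeds SVM classifiers that are evaluated with the F1 score. The sketch below illustrates what such a classification stage could look like; the feature set, kernel choice, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): an SVM over pre-extracted speech/language
# features, evaluated with the F1 metric reported in the abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Placeholder data: one row of acoustic/linguistic features per participant
# (e.g., pause rate, speech rate, lexical richness) and binary labels
# (1 = MCI, 0 = cognitively healthy). Real features would come from the app's
# automated speech processing stage.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = rng.integers(0, 2, size=60)

# Standardize features, then fit an RBF-kernel SVM (a common default; the paper
# does not specify the kernel here).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

# Cross-validated F1, analogous to the F1 = 0.93 / 0.86 figures in the abstract.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
f1_scores = cross_val_score(clf, X, y, cv=cv, scoring="f1")
print(f"Mean F1: {f1_scores.mean():.2f}")
```

For the group comparison reported as H = 9.385 (p < 0.01), a Kruskal–Wallis H test such as `scipy.stats.kruskal` would be the standard tool, applied to benefit-perception ratings grouped by cognitive exercise habit.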

Details

Title
A Speech-Based Mobile Screening Tool for Mild Cognitive Impairment: Technical Performance and User Engagement Evaluation
Author
Ruzi, Rukiye 1; Pan, Yue 2; Ng, Menwa Lawrence 3; Su, Rongfeng 1; Wang, Lan 1; Dang, Jianwu 1; Liu, Liwei 2; Yan, Nan 1

1 Guangdong-Hong Kong-Macao Joint Laboratory of Human–Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; [email protected] (R.R.); [email protected] (R.S.); [email protected] (L.W.); [email protected] (J.D.)
2 Advanced Computing and Storage Laboratory, Central Research Institute, 2012 Laboratories, Huawei Technologies Co., Ltd., Nanjing 210012, China; [email protected]
3 Speech Science Laboratory, Faculty of Education, University of Hong Kong, Hong Kong SAR, China; [email protected]
First page
108
Publication year
2025
Publication date
2025
Publisher
MDPI AG
e-ISSN
2306-5354
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3170946700
Copyright
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.