Introduction
The number of people with dementia is increasing worldwide [1]. Although prevention, treatment, and care through early detection are possible, dementia often remains unrecognized or undetected for a long time. A cost-effective method is therefore required to detect early cognitive decline (CD) and dementia. Recently, disease-modifying therapies (DMTs) for Alzheimer’s disease (AD) have begun to enter practical application. For example, lecanemab has been shown to reduce amyloid-β protein in early AD, resulting in a moderately slower decline in cognition and function compared to placebo [2]. To maximize the benefit of DMT, early medical consultation and diagnosis are essential for patients with CD. Despite the existence of nationwide dementia screening programs aimed at early detection, participation rates in Japan remain low. Accordingly, delays in detecting and diagnosing CD are expected to leave a growing number of patients who could benefit from DMT untreated. While brain scans and body fluid biomarkers can detect the early stages of dementia, they are either invasive or expensive for screening purposes [3]. Therefore, a simple, non-invasive screening test of cognitive function that can be performed outside healthcare facilities is needed to encourage patients to seek medical attention.
Dementia is categorized as a neurocognitive disorder in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). It encompasses a range of disorders characterized by cognitive impairments in areas such as attention, planning, inhibition, learning, memory, language, visual perception, spatial skills, and social skills [4]. In particular, language abilities are often impaired in the early stages of dementia, with symptoms such as aphasia, pauses, reduced vocabulary, and other language-related deficits [5]. AD, dementia with Lewy bodies (DLB), and vascular dementia (VaD) are the most common types of dementia worldwide. Previous studies have reported changes in syntactic complexity, lexical content, speech production, fluency, and semantic content during the early stages of AD, with language ability shown to correlate with overall cognitive function [6,7]. Patients with DLB exhibit reduced speech fluency, characterized by a slower overall speech rate and long pauses between sentences [8]. Language disturbances in VaD are similar to those in AD, with impairments in semantically mediated language tasks [9]. Thus, language is a suitable cognitive function for assessing CD in the early stages of dementia.
Recent advancements in artificial intelligence (AI) have highlighted its potential in revolutionizing dementia diagnosis and care. Agbavor and Liang demonstrated that large language models (LLMs), such as GPT-3, can predict dementia with high accuracy by analyzing spontaneous speech [10]. Their approach outperformed traditional acoustic feature-based methods and showed promise for early diagnosis through simple, non-invasive speech analysis. Similarly, Treder et al. emphasized the transformative potential of LLMs in dementia care and research, including their use in enhancing cognitive assessments and providing personalized interventions via accessible digital platforms [11]. Additionally, Agbavor and Liang proposed an end-to-end AI model utilizing Data2Vec, a self-supervised algorithm, for AD detection and severity prediction based on voice data [12]. This model offers a cost-effective and scalable solution for community-based AD screening. Building on these advancements, AI-based technologies now facilitate the development of efficient and accessible methods for early dementia detection.
AI is expected to improve screening performance by extracting more features from a single test with fewer errors resulting from subjective judgments [13]. In addition, analyzing large datasets allows AI-based digital biomarkers to capture more features, improving accuracy and enabling more objective inferences compared to manual analysis [14]. AI-based cognitive function assessments include computerized cognitive tests [15,16], computer-assisted interpretation of brain scan images [17], observation and evaluation of gait, hand, and eye movements [18–20], as well as speech, conversation, and language tests [21–23]. However, existing AI-based cognitive assessment methods often require specific environments and equipment, and none have yet become routine in clinical practice. Cognitive screening tools used outside medical institutions should be quick, convenient, and usable anywhere. Conversational voice is therefore a simple and useful medium for cognitive screening because it does not rely on specialized environments or equipment. In recent years, the usefulness of high-level phonetic feature models, such as hidden unit bidirectional encoder representations from transformers (HuBERT), has been demonstrated in dementia detection [24,25]. We focused on how phonetic features in daily conversations reflect CD and aimed to develop a machine learning (ML)-based voice AI to detect CD from one-minute conversations.
Methods
Research outline
This study aimed to develop an ML-based voice AI capable of detecting CD from short conversational voice samples. The process involved five key steps: 1) collecting voice samples, 2) labeling the collected voice data, 3) voice feature extraction, 4) applying features to the deep-learning model, and 5) confirming the accuracy of the developed voice AI model using test voice data (Fig 1). The study was approved by the Ethics Committee of Showa University School of Medicine (approval number: 21–018-B) and was conducted in accordance with the principles of the Declaration of Helsinki (as revised in 2013).
[Figure omitted. See PDF.]
The collected voice samples and data labels were used for ML. The main ML procedures included voice feature extraction and deep learning. The accuracy of the model was confirmed using an ML-based voice AI system. ML: machine learning; MMSE: Mini-Mental State Examination; CN: cognitively normal; CD: cognitively declined; HuBERT: Hidden Unit BERT; MFCC: Mel-frequency cepstral coefficients.
Voice sample collection
Since no large-scale Japanese voice dataset for dementia detection exists, we created our own dataset. We enrolled consecutive patients aged 60 and older who consulted the Memory Clinic of the Department of Neurology, Showa University School of Medicine, Japan, for concerns related to memory loss between January 2022 and September 2023. All participants were of Japanese origin, and voice samples were collected in standard Japanese. The voice recordings were gathered during conversations while the participants engaged in original tasks and underwent neuropsychological assessments, including the Mini-Mental State Examination (MMSE), Hasegawa’s Dementia Scale-Revised (HDS-R), and the Montreal Cognitive Assessment (MoCA) [26–28]. The original tasks consisted of the following: 1) conversational speech about “something fun you experienced recently;” 2) responses to three meal-related questions: “What did you eat today?,” “Please describe the contents of your meals yesterday, starting with breakfast,” and “What was the most memorable meal?;” and 3) a picture description task using The Cookie Theft Picture [29]. All psychological tests and tasks were conducted face-to-face between the examiner (a psychologist or neurologist) and the participant in a quiet room with a noise level between 40 and 50 dB. Voice recordings were made using a 6th-generation iPad equipped with a microphone placed on a table between the examiner and the participant. Participants were informed that their conversations would be recorded during the examination. The recorded voice samples were stored on the iPad until the ML phase. Written informed consent was obtained from all participants.
Data labeling
The MMSE is one of the most widely employed tests for cognitive screening [26]. With a cutoff score set at 23/24, the pooled sensitivity and specificity for detecting dementia have been reported as 0.81 and 0.89, respectively [30]. Voice data associated with MMSE scores of 23 or lower were labeled as “1” = CD, whereas those with MMSE scores of 24 or higher were labeled as “0” = cognitively normal (CN).
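As an illustration, this labeling rule can be expressed in a few lines of Python; the file names and MMSE scores below are hypothetical examples, not study data.

```python
# Hypothetical illustration of the MMSE-based labeling rule (cutoff 23/24).
def label_by_mmse(mmse_score: int) -> int:
    """Return 1 (= CD) for MMSE scores of 23 or lower, 0 (= CN) for 24 or higher."""
    return 1 if mmse_score <= 23 else 0

samples = [("patient_001.wav", 21), ("patient_002.wav", 27)]  # made-up examples
labels = {path: label_by_mmse(score) for path, score in samples}
print(labels)  # {'patient_001.wav': 1, 'patient_002.wav': 0}
```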
Voice feature extraction
We took a comprehensive approach, combining multiple voice features to detect potential signs of cognitive impairment. Through preprocessing and feature-extraction models feeding a neural network ML pipeline, we aimed to maximize the accuracy of the dementia detection system.
Preprocessing.
The preprocessing step involved standardizing and preparing the audio data for feature extraction. First, all audio files were converted to a consistent format with a 16-bit, 16,000 Hz mono waveform, ensuring uniformity across the dataset. Noise reduction techniques were applied using the voice enhancement model from ESPnet2 (https://github.com/espnet/espnet) to minimize background noise. Second, we preprocessed the training set of voice data using Pyannote-audio (https://github.com/pyannote/pyannote-audio), an open-source Python toolkit for speaker diarization, to separate the voices of the examinee and examiner. The test set was manually processed in the same format. Finally, the audio signals were normalized to ensure that variations in volume did not affect the feature extraction process.
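The sketch below illustrates this preprocessing in Python under stated assumptions: the file names are hypothetical, the ESPnet2 enhancement step is omitted, and the commented pyannote-audio call shows only a typical diarization pattern rather than the exact pipeline configuration used in the study.

```python
# Minimal preprocessing sketch (assumed file names; enhancement step omitted).
import librosa
import numpy as np
import soundfile as sf

def standardize(in_path: str, out_path: str) -> None:
    # Load as 16,000 Hz mono, matching the target recording format.
    y, _ = librosa.load(in_path, sr=16000, mono=True)
    # Peak-normalize so volume differences do not affect feature extraction.
    y = y / (np.max(np.abs(y)) + 1e-9)
    sf.write(out_path, y, 16000, subtype="PCM_16")  # write 16-bit PCM

standardize("raw_visit_recording.wav", "preprocessed.wav")

# Speaker diarization with pyannote-audio to separate examinee and examiner
# (requires a Hugging Face access token; the pipeline name is an assumption):
# from pyannote.audio import Pipeline
# diarizer = Pipeline.from_pretrained("pyannote/speaker-diarization",
#                                     use_auth_token="YOUR_HF_TOKEN")
# for turn, _, speaker in diarizer("preprocessed.wav").itertracks(yield_label=True):
#     print(f"{speaker}: {turn.start:.1f}-{turn.end:.1f} s")
```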
HuBERT feature.
We used the HuBERT model to extract deep speech representations from the audio data. Unlike approaches such as those based on large language models that rely on linguistic features derived from transcription, our method focuses exclusively on acoustic features. To achieve this, we adopted HuBERT, a self-supervised model pre-trained on large-scale, unlabeled speech data, designed to capture both phonetic and prosodic information [24]. We used a pre-trained Japanese HuBERT model, provided by Rinna Co., Ltd. (https://huggingface.co/rinna/japanese-hubert-base), to extract features, leveraging its ability to encode both short-term and long-term dependencies in speech. The output from the last layers of HuBERT was used as high-level representations, serving as input to the subsequent dementia detection model.
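A minimal sketch of this extraction step, using the Hugging Face transformers library and the rinna/japanese-hubert-base checkpoint named above, is shown below; mean-pooling the last hidden layer over frames into a single 768-dimensional vector per segment, and the segment file name, are illustrative assumptions.

```python
# Hedged sketch: last-layer HuBERT features for one speech segment.
# Mean-pooling over frames to a single 768-dim vector is an assumption.
import torch
import librosa
from transformers import AutoFeatureExtractor, HubertModel

MODEL_ID = "rinna/japanese-hubert-base"
extractor = AutoFeatureExtractor.from_pretrained(MODEL_ID)
hubert = HubertModel.from_pretrained(MODEL_ID).eval()

speech, sr = librosa.load("segment_5s.wav", sr=16000, mono=True)  # assumed file
inputs = extractor(speech, sampling_rate=sr, return_tensors="pt")

with torch.no_grad():
    hidden = hubert(**inputs).last_hidden_state  # shape: (1, n_frames, 768)

segment_feature = hidden.mean(dim=1).squeeze(0)  # one 768-dim vector per segment
print(segment_feature.shape)                     # torch.Size([768])
```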
Traditional acoustic features.
In addition to the HuBERT features, we extracted several traditional acoustic features commonly used in speech processing for dementia detection, using the librosa Python package for music and audio analysis (https://librosa.org/doc/latest/index.html). Silent interval features, which are indicative of CD, were extracted using the voiced_probs output from librosa.pyin. This probability differentiates between voiced and unvoiced segments of the audio signal, and we used these values directly as input for the next step. Fundamental frequency (F0) features were computed using librosa.pyin, which estimates pitch values from the audio signal. These values capture prosodic variations, such as pitch range, average pitch, and pitch stability, all of which are relevant for detecting speech irregularities associated with dementia. Mel-frequency cepstral coefficients (MFCCs) were extracted using librosa.feature.mfcc with an n_mfcc value of 20. For each audio segment, the maximum, mean, and delta values of the MFCCs were calculated and concatenated for the next input stage, allowing the capture of both static and dynamic spectral characteristics. These features offer valuable insights into the speaker’s articulation and vocal quality.
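The sketch below shows how these features could be computed with librosa; the fixed 100-frame track lengths, the pYIN pitch search range, and pooling the MFCC deltas by their mean are assumptions chosen so that the dimensions match those reported in the next section.

```python
# Sketch of the traditional acoustic features, under the stated assumptions.
import numpy as np
import librosa

y, sr = librosa.load("segment_5s.wav", sr=16000, mono=True)  # assumed file

# Fundamental frequency (F0) and voiced-probability tracks via pYIN.
f0, voiced_flag, voiced_probs = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
f0 = np.nan_to_num(f0)  # unvoiced frames yield NaN F0; zero them before pooling

def resample_track(track: np.ndarray, n: int = 100) -> np.ndarray:
    """Interpolate a per-frame track to a fixed length (assumed to be 100)."""
    xp = np.linspace(0.0, 1.0, num=len(track))
    return np.interp(np.linspace(0.0, 1.0, num=n), xp, track)

silent_feat = resample_track(voiced_probs)   # 100-dim silent interval feature
f0_feat = resample_track(f0)                 # 100-dim F0 feature

# 20 MFCCs pooled as max, mean, and mean delta -> 60 dimensions in total.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
mfcc_feat = np.concatenate(
    [mfcc.max(axis=1), mfcc.mean(axis=1), librosa.feature.delta(mfcc).mean(axis=1)])

print(silent_feat.shape, f0_feat.shape, mfcc_feat.shape)  # (100,) (100,) (60,)
```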
Deep learning-based dementia detection
The extracted features (HuBERT features, silent interval features, F0 features, and MFCCs) were used as input for an ML model designed to detect signs of dementia in speech. For each 60-second audio sample, feature extraction was performed every five seconds, with a one-second overlap between consecutive segments. The extracted features included HuBERT features (768 dimensions), silent interval features (100 dimensions), F0 features (100 dimensions), and MFCCs (60 dimensions). These features were then fed into a fully connected (FC) layer. This FC block consisted of two layers with 1128 and 768 neurons, respectively, each followed by a ReLU activation function. Following the FC layers, the time-step ordered features were fed into a bidirectional long short-term memory (Bi-LSTM) network, which was designed to capture temporal dependencies in the data. The Bi-LSTM network consisted of two layers with 512 hidden units each. After processing through the Bi-LSTM, the final hidden state was passed through two FC layers with 1024 and 512 neurons to perform the classification. Details of the hyperparameters and the neural network architecture are provided in the S1 and S2 Tables. The system was trained on a labeled dataset, where speech samples were labeled as “0” or “1” according to MMSE scores, with “0” representing CN individuals and “1” indicating CD. The final system was capable of automatically classifying speech samples as either dementia-positive or dementia-negative based on the extracted features.
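A hedged PyTorch sketch of this architecture is given below; the sigmoid output head, the use of the last time-step output, and the omission of dropout and other training details are simplifying assumptions (the actual hyperparameters are listed in the S1 and S2 Tables).

```python
# Hedged sketch of the per-segment FC + Bi-LSTM classifier described above.
import torch
import torch.nn as nn

class VoiceCDClassifier(nn.Module):
    def __init__(self, in_dim: int = 768 + 100 + 100 + 60):  # 1028-dim segment input
        super().__init__()
        # Segment-wise projection: two FC layers with 1128 and 768 units, ReLU.
        self.proj = nn.Sequential(
            nn.Linear(in_dim, 1128), nn.ReLU(),
            nn.Linear(1128, 768), nn.ReLU(),
        )
        # Two-layer bidirectional LSTM with 512 hidden units per direction.
        self.lstm = nn.LSTM(768, 512, num_layers=2,
                            batch_first=True, bidirectional=True)
        # Classification head: FC layers with 1024 and 512 units, then one logit
        # (the single-logit output is an assumption for illustration).
        self.head = nn.Sequential(
            nn.Linear(2 * 512, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, segments, features), segments ordered by time step.
        h, _ = self.lstm(self.proj(x))
        return torch.sigmoid(self.head(h[:, -1]))  # probability of CD

model = VoiceCDClassifier()
probs = model(torch.randn(2, 15, 1028))  # e.g., 15 five-second segments per sample
print(probs.shape)  # torch.Size([2, 1])
```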
Discrimination accuracy testing
Twenty voice samples were prepared to assess the discrimination accuracy of the ML-based voice AI model. The test data consisted of one-minute conversations about “something fun you experienced recently,” a segment from the original task. We chose this question to encourage patients to recall and describe personal episodic memories, which are more likely to elicit natural and emotionally rich open-ended responses compared to one-answer questions such as “What did you eat for lunch?” or “What is your favorite movie?” Additionally, since the task involves approximately one minute of conversation, we aimed to provide a topic broad enough to allow for elaboration. None of the voice data used for testing were included in the model training. The CN test data encompassed individuals diagnosed with subjective CD (SCD) and mild cognitive impairment (MCI), whereas the CD test data consisted of individuals diagnosed with AD, DLB, and VaD, the three major types of dementia. The ML-based voice AI model outputs a probability (ranging from 0 to 1) indicating the likelihood that the voice belongs to a CD individual. A probability value of 0.5 or higher was set as the threshold for diagnosing CD.
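For clarity, the snippet below shows how the 0.5 threshold converts model outputs into the accuracy, sensitivity, and specificity reported in the Results; the probabilities and labels here are made-up values, not the study’s test data.

```python
# Threshold model outputs at 0.5 and compute the reported metrics (toy values).
import numpy as np

probs = np.array([0.92, 0.81, 0.40, 0.07, 0.63, 0.12])   # hypothetical AI outputs
labels = np.array([1, 1, 1, 0, 1, 0])                    # 1 = CD, 0 = CN

preds = (probs >= 0.5).astype(int)                        # threshold at 0.5
tp = np.sum((preds == 1) & (labels == 1))
tn = np.sum((preds == 0) & (labels == 0))
fp = np.sum((preds == 1) & (labels == 0))
fn = np.sum((preds == 0) & (labels == 1))

sensitivity = tp / (tp + fn)   # probability of correctly identifying CD as CD
specificity = tn / (tn + fp)   # probability of correctly identifying CN as CN
accuracy = (tp + tn) / len(labels)
print(accuracy, sensitivity, specificity)
```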
Clinical diagnosis
All patients underwent a detailed interview, neurological examination by an experienced neurologist, blood tests, clinical dementia rating (CDR), MMSE, and brain MRI. Additional examinations were performed as necessary for clinical diagnosis. SCD was characterized by self-reported memory complaints, a CDR score of 0, MMSE scores within the normal range for cognition (MMSE ≥ 28), and no evidence of impairment in functional activities. The diagnosis of MCI was based on the criteria proposed by Petersen [31], which included a memory complaint corroborated by an informant, a global CDR score of 0.5, and a cognitive decline indicated by an MMSE score between 24 and 27, but with no evidence of functional impairment as revealed by the clinical interview. Diagnoses were made according to established guidelines for each condition. Alzheimer’s disease (AD) diagnoses followed the guidelines of the National Institute on Aging–Alzheimer’s Association workgroups [32]. Vascular cognitive disorders were diagnosed using the criteria established by the International Society for Vascular Behavioral and Cognitive Disorders [33]. The revised criteria for the clinical diagnosis of dementia with Lewy bodies (DLB) were applied [34], while the revised diagnostic criteria for the behavioral variant of frontotemporal dementia (FTD) were used for FTD cases [35]. Corticobasal degeneration (CBD) was diagnosed based on clinical criteria [36], and Parkinson’s disease (PD) was diagnosed using the International Parkinson and Movement Disorder Society criteria [37]. Lastly, idiopathic normal pressure hydrocephalus (iNPH) diagnoses followed the third edition of the Japanese Guidelines for Management of iNPH [38].
Statistics
An unpaired t-test was employed to analyze the differences in mean age, years of education, MMSE scores, and CDR scores between the CD and CN groups for voice samples used in model training and testing to confirm accuracy. A chi-square test was conducted to examine the male-to-female ratio of the samples. All tests were two-tailed and conducted using SPSS version 29.0.1.0 (IBM Corp., Armonk, NY, United States). Statistical significance was defined as an adjusted p-value of less than 0.05. The results are presented as mean and standard deviation (SD).
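For readers reproducing these comparisons outside SPSS, an equivalent unpaired t-test and chi-square test can be run with SciPy as sketched below; the ages and sex counts are illustrative placeholders, not the study data.

```python
# Group comparisons with SciPy (illustrative values only).
import numpy as np
from scipy import stats

age_cd = np.array([82, 74, 79, 88, 71, 76, 80, 70])                 # hypothetical CD ages
age_cn = np.array([68, 75, 72, 81, 77, 70, 74, 69, 73, 78, 66, 80]) # hypothetical CN ages

t_stat, p_val = stats.ttest_ind(age_cd, age_cn)  # unpaired, two-tailed t-test
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# Chi-square test of the male-to-female ratio between groups.
contingency = np.array([[4, 4],    # CD: females, males
                        [7, 5]])   # CN: females, males
chi2, p, dof, _ = stats.chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```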
Results
Voice datasets for machine learning
Voice samples were collected from 285 consecutive patients who visited the Memory Clinic, with their consent to participate in the study. However, two patients withdrew their consent, leaving voice samples from 283 patients to be used in this study. For the accuracy confirmation test, 20 voice samples were selected, leaving 263 voice samples (155 females) for model training (Fig 2). The clinical diagnoses of the 263 patients included AD (n = 85, 32.3%), MCI (n = 78, 29.7%), SCD (n = 34, 12.9%), VaD (n = 17, 6.5%), DLB (n = 12, 4.6%), PD (n = 9, 3.4%), iNPH (n = 7, 2.7%), brain tumor (n = 4, 1.5%), multiple system atrophy (n = 3, 1.1%), depression (n = 3, 1.1%), CBD (n = 2, 0.8%), FTD (n = 2, 0.8%), and neuronal intranuclear inclusion disease (n = 1, 0.4%). Clinical diagnosis was not feasible for six patients (2.3%) owing to inadequate testing. Among the 263 samples, 113 samples (74 females) were categorized as CD (MMSE scores of 23 or lower). The remaining 150 voice samples were categorized as CN (MMSE scores of 24 or higher). A summary of the voice samples used for ML is given in Table 1.
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
MMSE: Mini-Mental State Examination.
Discrimination accuracy of ML-based voice AI model
The discrimination test used 20 voice datasets, comprising eight CD samples (four females, mean age 77.5 ± 9.8 years, mean education years 13.0 ± 2.7, mean MMSE score 18.4 ± 4.0, mean CDR 1.5 ± 0.5) and 12 CN samples (seven females, mean age 75.0 ± 9.5 years, mean education years 14.2 ± 2.0, mean MMSE score 26.8 ± 2.1, mean CDR 0.3 ± 0.3). No significant differences were observed in the percentage of females (p = 0.71), age (p = 0.56), or years of education (p = 0.16) between the CD and CN groups. However, patients with CD exhibited significantly lower MMSE scores (p = 0.0003) and higher CDR scores (p = 0.0002). The clinical diagnoses for the CN group included five cases of SCD and seven cases of MCI, while the CD group comprised five cases of AD, two cases of DLB, and one case of VaD (Table 2). The distribution of AD, DLB, and VaD cases in the CD group was determined to reflect the ratio of real-world prevalence. Following ML, the voice AI model was able to discriminate between CD and CN with an accuracy of 0.950, a sensitivity of 0.875 (probability of correctly identifying a CD as a CD), and a specificity of 1.000 (probability of correctly identifying a CN as a CN). The average area under the curve was 0.990 (Fig 3).
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
Discussion
This voice AI model is novel in its ability to discriminate between CD and CN individuals with high accuracy by analyzing one-minute conversation samples. The model’s high discrimination accuracy of 0.950, attained through a simple method, demonstrates the feasibility of using short conversational voice samples as a practical screening tool for analyzing cognitive function and alerting individuals to possible CD.
Extensive research on AI-based dementia assessment, particularly for AD, has been conducted worldwide. Practical digital biomarkers for diagnosing dementia can reduce the burden on clinical practice. Conversation and language abilities are impaired in the early stages of most types of dementia [8]. Recent studies have focused on AI-based assessments that employ speech and language. Typical testing procedures involve extracting relevant features and inputting them into machine- or deep-learning classifiers to identify patterns consistent with dementia. Two primary types of features, acoustic and linguistic, can be extracted and analyzed from human conversational voice [13]. Acoustic features describe how individuals articulate speech, while linguistic features pertain to the content, such as vocabulary, grammar, and syntax. According to a recent review, analysis of linguistic features achieved better accuracy (0.925) than acoustic features alone (0.786), and combining linguistic and acoustic features outperformed both, with an accuracy of 0.939 [13]. In contrast, the voice analysis AI developed in this study predominantly analyzed acoustic features and achieved a higher accuracy rate (0.950) than previous studies that used acoustic features alone [13]. The use of acoustic features alone may offer certain advantages over linguistic features: conversion errors are avoided because transcribing conversational speech into text is not required, and only a short conversation sample is needed for analysis.
Recently, two main types of tests have been developed to analyze conversational voices: the picture description test (where participants describe a picture while their voice is recorded) and interview-based conversations. In our study, although the voice data used for ML were obtained from both picture descriptions and interviews, the discrimination test was based only on one-minute conversations. This conversation was not provoked by a specific task, as in the picture description test, but was rather a spontaneous conversation based on an individual’s episodic memory, resembling an interview task. In interview-based diagnoses, subjects respond to multiple questions posed by humans or avatars, and their acoustic and linguistic features are analyzed to identify patients with CD or dementia [23,39,40]. However, these tests require subjects to answer multiple questions, making them time-consuming and potentially giving the impression that the subject is being tested for cognitive function. In contrast, our method of discriminating between CD and CN individuals using a short conversational voice sample obtained from just one question offers a simpler approach that could be widely used in clinical settings. Another advantage is that, unlike tasks with fixed correct answers, freeform conversations allow for open-ended answers, reducing the learning effect and making repeated administration easier. The proposed voice AI model identified CN individuals with 100% accuracy. The absence of false positives (that is, diagnosing a CN individual as CD) indicates the usefulness of this method as a screening tool in real clinical situations, helping to prevent unnecessary worry or anxiety in healthy individuals and avoiding the additional medical burden and costs associated with further testing.
This approach has several limitations. The voice AI model developed in this study is capable of detecting CD. However, as dementia is defined as “a state of cognitive decline that interferes with daily and social life,” our voice AI model, which does not account for patients’ living contexts, is currently unable to determine whether an individual meets the clinical criteria for “dementia.” This study primarily focused on assessing the simple cognitive evaluation capabilities of the voice AI model. We plan to extend our work by incorporating machine learning approaches that analyze not only MMSE scores but also data from other neuropsychological assessments, patients’ daily and social life contexts, and biomarkers associated with dementia. In addition, we believe that regression analysis using numerical indicators such as the MMSE may be a particularly valuable tool in the clinical setting for assessing the severity of cognitive impairment in patients. We intend to consider this approach as a potential area for future research. It is essential to recognize that AI is not a substitute for human expertise but rather a powerful tool that enhances decision-making and supports accurate diagnosis. Determining whether the voice AI model can surpass clinicians’ diagnostic capabilities for dementia remains a critical area for future research. Additionally, our voice AI model cannot detect mild CD, as our MMSE cutoff score was set at 23/24. Identifying patients with cognitive impairment before progression to dementia, specifically at the MCI level, is crucial, as early intervention provides more opportunities for prevention, care, and effective treatment, such as DMT for AD. To identify cognitive impairment at the MCI level, it is necessary to adjust the MMSE cutoff score for data labeling and incorporate CDR results of 0.5, along with more appropriate neuropsychological tests, into the ML process. Although the MMSE is a widely used and simple neuropsychological test employed globally, and it demonstrates high sensitivity for moderate to severe cognitive impairment, it has the drawback of low sensitivity for detecting MCI [41]. Additionally, the MMSE includes relatively simple items for assessing language, which limits its utility in identifying mild language impairments. One psychological test useful for detecting MCI is the MoCA. Using a cutoff score of 26, the MMSE showed a sensitivity of only 18% for detecting MCI, whereas the MoCA identified 90% of individuals with MCI [28]. The MoCA is also a simple cognitive screening tool with high sensitivity and specificity for detecting MCI, even in patients who fall within the normal range on the MMSE. Future research should include additional ML using voice samples from individuals with milder CD to explore the potential for detecting such conditions with higher accuracy.
Another limitation is that voice samples were collected only once per individual, making it challenging to evaluate the consistency of the AI model’s decisions for the same individual over time. This raises the possibility that discrimination results could vary depending on daily conditions, such as lack of sleep, alcohol consumption, or temporary forgetfulness, which could lead to incorrect decisions. Additionally, as this study was conducted on patients attending the Memory Clinic at our hospital, recruiting cognitively impaired individuals under the age of 60 was challenging. As voice samples were exclusively collected from individuals aged 60 years and older, our voice AI model currently lacks the capability to detect early-onset dementia in younger populations. Obtaining voice samples from younger individuals would indeed be feasible, and we recognize the value of extending our research to include this population. We plan to explore the inclusion of younger age groups in future studies to address this limitation and enhance the model’s generalizability. Moreover, we used only standard Japanese voice samples for ML, and while Japanese dialects exist, the model’s effectiveness with dialects remains unknown. To solve these problems, a more robust model could be developed by adding voice samples from younger individuals or those speaking in dialects, potentially sourced from TV or web content, for pre-training the HuBERT model.
In recent years, significant progress has been made in developing AI systems that use observational methods, integrating multiple assessments to evaluate the impact of CD on daily life in the elderly. While it remains challenging to accurately distinguish CD individuals using only simple evaluation methods, our voice AI model enables the development of AI-based medical software for detecting CD using minute-long conversations, accessible via mobile devices such as smartphones. Digital biomarkers based on language can also be used for detecting mental disorders such as depression [42]. We believe that an AI-assisted, simple cognitive function screening tool, using short conversational voice samples, can be valuable in detecting CD and encouraging people to seek medical attention. In this study, as an initial experiment, we validated the model’s accuracy using the holdout method with fixed samples. To enhance its applicability for real-world implementation, we plan to incorporate cross-validation in future evaluations.
Supporting information
S1 Table. Neural Network Architecture.
https://doi.org/10.1371/journal.pone.0325177.s001
(DOCX)
S2 Table. Hyperparameters.
https://doi.org/10.1371/journal.pone.0325177.s002
(DOCX)
S3 Data. Datasets Used for Machine Learning and Discrimination Test.
https://doi.org/10.1371/journal.pone.0325177.s003
(XLSX)
Acknowledgments
We thank M. Miyanohara, a psychologist, for her contribution to collecting voice samples and conducting neuropsychological tests. We also thank the staff of the Department of Neurology, Showa University School of Medicine, for their cooperation during the study.
References
1. GBD 2019 Dementia Forecasting Collaborators. Estimation of the global prevalence of dementia in 2019 and forecasted prevalence in 2050: an analysis for the Global Burden of Disease Study 2019. Lancet Public Health. 2022;7(2):e105–25. pmid:34998485
2. van Dyck CH, Swanson CJ, Aisen P, Bateman RJ, Chen C, Gee M, et al. Lecanemab in early Alzheimer’s disease. N Engl J Med. 2023;388(1):9–21. pmid:36449413
3. Blennow K, Hampel H, Weiner M, Zetterberg H. Cerebrospinal fluid and plasma biomarkers in Alzheimer disease. Nat Rev Neurol. 2010;6(3):131–44. pmid:20157306
4. American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 5th ed. Arlington (VA): American Psychiatric Publishing; 2013.
5. Kempler D, Goral M. Language and dementia: neuropsychological aspects. Annu Rev Appl Linguist. 2008;28:73–90. pmid:21072322
6. Ahmed S, Haigh A-MF, de Jager CA, Garrard P. Connected speech as a marker of disease progression in autopsy-proven Alzheimer’s disease. Brain. 2013;136(Pt 12):3727–37. pmid:24142144
7. Weiner MF, Neubecker KE, Bret ME, Hynan LS. Language in Alzheimer’s disease. J Clin Psychiatry. 2008;69(8):1223–7. pmid:18505305
8. Ash S, McMillan C, Gross RG, Cook P, Gunawardena D, Morgan B, et al. Impairments of speech fluency in Lewy body spectrum disorder. Brain Lang. 2012;120(3):290–302. pmid:22099969
9. Vuorinen E, Laine M, Rinne J. Common pattern of language impairment in vascular dementia and in Alzheimer disease. Alzheimer Dis Assoc Disord. 2000;14(2):81–6. pmid:10850746
10. Agbavor F, Liang H. Predicting dementia from spontaneous speech using large language models. PLOS Digit Health. 2022;1(12):e0000168. pmid:36812634
11. Treder MS, Lee S, Tsvetanov KA. Introduction to Large Language Models (LLMs) for dementia care and research. Front Dement. 2024;3:1385303. pmid:39081594
12. Agbavor F, Liang H. Artificial intelligence-enabled end-to-end detection and assessment of Alzheimer’s disease using voice. Brain Sci. 2022;13(1):28. pmid:36672010
13. Li R, Wang X, Lawler K, Garg S, Bai Q, Alty J. Applications of artificial intelligence to aid early detection of dementia: a scoping review on current capabilities and future directions. J Biomed Inform. 2022;127:104030. pmid:35183766
14. Danso SO, Muniz-Terrera G, Luz S, Ritchie C, Global Dementia Prevention Program (GloDePP). Application of Big Data and Artificial Intelligence technologies to dementia prevention research: an opportunity for low-and-middle-income countries. J Glob Health. 2019;9(2):020322. pmid:32257177
15. Angelillo M, Balducci F, Impedovo D, Pirlo G, Vessio G. Attentional pattern classification for automatic dementia detection. IEEE Access. 2019;7:57706–16.
16. Thabtah F, Mampusti E, Peebles D, Herradura R, Varghese J. A mobile-based screening system for data analyses of early dementia traits detection. J Med Syst. 2019;44(1):24. pmid:31828523
17. Pellegrini E, Ballerini L, Hernandez MDCV, Chappell FM, González-Castro V, Anblagan D, et al. Machine learning of neuroimaging for assisted diagnosis of cognitive impairment and dementia: a systematic review. Alzheimers Dement (Amst). 2018;10:519–35. pmid:30364671
18. Mc Ardle R, Del Din S, Galna B, Thomas A, Rochester L. Differentiating dementia disease subtypes with gait analysis: feasibility of wearable sensors? Gait Posture. 2020;76:372–6. pmid:31901765
19. Sano Y, Kandori A, Shima K, Yamaguchi Y, Tsuji T, Noda M, et al. Detection of abnormal segments in finger tapping waveform using one-class SVM. In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2019. p. 1378–81.
20. Tadokoro K, Yamashita T, Kimura Y, Nomura E, Ohta Y, Omote Y. Early detection of cognitive decline in mild cognitive impairment and Alzheimer’s disease with a novel eye tracking test. J Neurol Sci. 2021;427:117529.
21. Haider F, de la Fuente S, Luz S. An assessment of paralinguistic acoustic features for detection of Alzheimer’s dementia in spontaneous speech. IEEE J Sel Top Signal Process. 2019;14(2):272–81.
22. Liu Z, Guo Z, Jiang Z, Tang H, Chen X, Zhang K. Dementia detection by analyzing spontaneous Mandarin speech. In: 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE; 2019. p. 289–96.
23. Ujiro T, Tanaka H, Adachi H, Kazui H, Ikeda M, Kudo T. Detection of dementia from responses to atypical questions asked by embodied conversational agents. In: Interspeech. ISCA; 2018. p. 1691–5.
24. Hsu W, Bolte B, Tsai Y, Lakhotia K, Salakhutdinov R, Mohamed A. HuBERT: self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Trans Audio Speech Lang Process. 2021;29:3451–60.
25. Liu J, Fu F, Li L, Yu J, Zhong D, Zhu S, et al. Efficient pause extraction and encode strategy for Alzheimer’s disease detection using only acoustic features from spontaneous speech. Brain Sci. 2023;13(3):477. pmid:36979287
26. Folstein MF, Folstein SE, McHugh PR. “Mini-mental state”. A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12(3):189–98. pmid:1202204
27. Imai Y, Hasegawa K. The Revised Hasegawa’s Dementia Scale (HDS-R): evaluation of its usefulness as a screening test for dementia. J Hong Kong Coll Psychiatr. 1994;4:20–4.
28. Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005;53(4):695–9. pmid:15817019
29. Goodglass H, Kaplan E. The assessment of aphasia and related disorders. Philadelphia: Lea & Febiger; 1972.
30. Tsoi KKF, Chan JYC, Hirai HW, Wong SYS, Kwok TCY. Cognitive tests to detect dementia: a systematic review and meta-analysis. JAMA Intern Med. 2015;175(9):1450–8. pmid:26052687
31. Petersen RC. Mild cognitive impairment as a diagnostic entity. J Intern Med. 2004;256(3):183–94. pmid:15324362
32. McKhann GM, Knopman DS, Chertkow H, Hyman BT, Jack CR Jr, Kawas CH, et al. The diagnosis of dementia due to Alzheimer’s disease: recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimers Dement. 2011;7(3):263–9. pmid:21514250
33. Sachdev P, Kalaria R, O’Brien J, Skoog I, Alladi S, Black S. Diagnostic criteria for vascular cognitive disorders: a VASCOG statement. Alzheimer Dis Assoc Disord. 2014;28(3):206–18.
34. McKeith IG, Boeve BF, Dickson DW, Halliday G, Taylor J-P, Weintraub D, et al. Diagnosis and management of dementia with Lewy bodies: Fourth consensus report of the DLB Consortium. Neurology. 2017;89(1):88–100. pmid:28592453
35. Rascovsky K, Hodges J, Knopman D, Mendez M, Kramer J, Neuhaus J. Sensitivity of revised diagnostic criteria for the behavioural variant of frontotemporal dementia. Brain. 2011;134(9):2456–77.
36. Armstrong MJ, Litvan I, Lang AE, Bak TH, Bhatia KP, Borroni B, et al. Criteria for the diagnosis of corticobasal degeneration. Neurology. 2013;80(5):496–503. pmid:23359374
37. Postuma RB, Berg D, Stern M, Poewe W, Olanow CW, Oertel W, et al. MDS clinical diagnostic criteria for Parkinson’s disease. Mov Disord. 2015;30(12):1591–601. pmid:26474316
38. Nakajima M, Miyajima M, Ogino I, Akiba C, Sugano H, Hara T. Guidelines for management of idiopathic normal pressure hydrocephalus (third edition): endorsed by the Japanese Society of Normal Pressure Hydrocephalus. Neurol Med Chir (Tokyo). 2021;61(2):63–97.
39. Mirheidari B, Blackburn D, Walker T, Reuber M, Christensen H. Dementia detection using automatic analysis of conversations. Comput Speech Lang. 2019;53:65–79.
40. Tanaka H, Adachi H, Ukita N, Ikeda M, Kazui H, Kudo T, et al. Detecting dementia through interactive computer avatars. IEEE J Transl Eng Health Med. 2017;5:2200111. pmid:29018636
41. Tombaugh TN, McIntyre NJ. The mini-mental state examination: a comprehensive review. J Am Geriatr Soc. 1992;40(9):922–35. pmid:1512391
42. Reilly J, Rodriguez AD, Lamy M, Neils-Strunjas J. Cognition, language, and clinical pathological features of non-Alzheimer’s dementias: an overview. J Commun Disord. 2010;43(5):438–52. pmid:20493496
Citation: Kuroda T, Ono K, Onishi M, Murakami K, Shoji D, Kosuge S, et al. (2025) Utility of artificial intelligence-based conversation voice analysis for detecting cognitive decline. PLoS One 20(6): e0325177. https://doi.org/10.1371/journal.pone.0325177
About the Authors:
Takeshi Kuroda
Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Writing – original draft, Writing – review & editing
E-mail: [email protected]
Affiliation: Department of Neurology, Showa University School of Medicine, Tokyo, Japan
ORCID: https://orcid.org/0000-0001-5730-7415
Kenjiro Ono
Roles: Conceptualization, Supervision, Writing – review & editing
Affiliation: Department of Neurology, Kanazawa University Graduate School of Medical Sciences, Kanazawa, Japan
Masaki Onishi
Roles: Data curation, Formal analysis, Writing – original draft
Affiliation: ExaWizards Inc., Tokyo, Japan
Kouzou Murakami
Roles: Conceptualization, Methodology, Project administration
Affiliation: Department of Radiology, Showa University School of Medicine, Tokyo, Japan
Daiki Shoji
Roles: Data curation, Formal analysis, Investigation
Affiliation: Department of Neurology, Showa University School of Medicine, Tokyo, Japan
Shota Kosuge
Roles: Data curation, Formal analysis, Investigation
Affiliation: Department of Neurology, Showa University School of Medicine, Tokyo, Japan
Atsushi Ishida
Roles: Data curation, Formal analysis, Investigation
Affiliation: Department of Neurology, Showa University School of Medicine, Tokyo, Japan
Sotaro Hieda
Roles: Data curation, Formal analysis, Investigation, Project administration
Affiliations: Department of Neurology, Showa University School of Medicine, Tokyo, Japan, Department of Neurology, Kawasaki Memorial Hospital, Kanagawa, Japan
Masato Takahashi
Roles: Project administration
Affiliation: ExaWizards Inc., Tokyo, Japan
Hisashi Nakashima
Roles: Project administration
Affiliation: ExaWizards Inc., Tokyo, Japan
Yoshinori Ito
Roles: Project administration, Supervision
Affiliation: Department of Radiology, Showa University School of Medicine, Tokyo, Japan
Hidetomo Murakami
Roles: Project administration, Supervision
Affiliation: Department of Neurology, Showa University School of Medicine, Tokyo, Japan
© 2025 Kuroda et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract
Recent developments in artificial intelligence (AI) have introduced new technologies that can aid in detecting cognitive decline. This study developed a voice-based AI model that screens for cognitive decline using only a short conversational voice sample. The process involved collecting voice samples, applying machine learning (ML), and confirming accuracy through test data. The AI model extracts multiple voice features from the collected voice data to detect potential signs of cognitive impairment. Data labeling for ML was based on Mini-Mental State Examination scores: scores of 23 or lower were labeled as “cognitively declined (CD),” while scores of 24 or higher were labeled as “cognitively normal (CN).” A fully connected neural network architecture was employed for deep learning, using voice samples from 263 patients. Twenty voice samples, each comprising a one-minute conversation, were used for accuracy evaluation. The developed AI model achieved an accuracy of 0.950 in discriminating between CD and CN individuals, with a sensitivity of 0.875, specificity of 1.000, and an average area under the curve of 0.990. This voice AI model shows promise as a cognitive screening tool accessible via mobile devices, requiring no specialized environments or equipment, and can help detect CD, offering individuals the opportunity to seek medical attention.