summary: AI can detect signs of mild cognitive decline and Alzheimer’s disease, even when no symptoms are present, by analyzing a person’s speech. The technology can be used as a simple screening method to identify early signs of cognitive impairment.
source: UT Southwestern
New technologies that can pick up subtle changes in a patient’s voice may help doctors diagnose cognitive impairment and Alzheimer’s disease before symptoms begin to appear, according to a University of Texas Southwestern Medical Center researcher who led a study published in the Alzheimer’s Association journal Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring.
“Our focus was to identify subtle language and vocal changes that are present in the very early stages of Alzheimer’s disease but are not readily recognizable by family members or an individual’s primary care physician,” said Ihab Hajjar, MD, Professor of Neurology at UT Southwestern’s Peter O’Donnell Jr. Brain Institute.
The researchers used advanced machine learning and natural language processing (NLP) tools to assess the speech patterns of 206 people – 114 who met the criteria for mild cognitive impairment and 92 who were unaffected. The team then compared those findings with commonly used biomarkers to gauge their effectiveness in detecting impairment.
The study participants, who were enrolled in a research program at Emory University in Atlanta, took several standard cognitive assessments before they were asked to record a one- to two-minute spontaneous description of a piece of artwork.
“The recorded descriptions of the artwork provided us with an approximation of conversational abilities that we can study through AI to determine speech control, idea density, syntactic complexity, and other speech features,” said Dr. Hajjar.
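To make the kind of features Dr. Hajjar describes concrete, here is a minimal, stdlib-only sketch of lexical measures that can be computed from a speech transcript. It is not the study’s pipeline (which used advanced ML/NLP on audio); the function name, the measures chosen (type-token ratio, mean sentence length), and the sample transcript are all illustrative assumptions.

```python
import re

def lexical_features(transcript: str) -> dict:
    """Toy lexical measures of the kind speech-based screening studies derive.

    Real pipelines work from audio with ASR and syntactic parsers; this
    sketch only approximates vocabulary diversity and utterance length
    from a plain-text transcript.
    """
    # Split into rough sentences on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    # Tokenize into lowercase word forms.
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    types = set(words)  # distinct word forms
    return {
        "word_count": len(words),
        # Vocabulary diversity: distinct words / total words.
        "type_token_ratio": len(types) / len(words) if words else 0.0,
        # Average utterance length in words.
        "mean_sentence_length": len(words) / len(sentences) if sentences else 0.0,
    }

# Hypothetical one- to two-minute picture-description excerpt.
sample = "The boy reaches for the cookie jar. The stool is tipping. Water overflows the sink."
print(lexical_features(sample))
```

Measures like these are then fed, alongside acoustic features, into a classifier; the point of the sketch is only that each feature is a single number per recording.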
The research team compared the participants’ speech analyses with cerebrospinal fluid samples and MRI scans to determine how accurately digital voice biomarkers detected both mild cognitive impairment and Alzheimer’s disease status and progression.
“Prior to the development of machine learning and natural language processing, the detailed study of patients’ speech patterns was very labor intensive and often unsuccessful because early-stage changes were often undetectable to the human ear,” said Dr. Hajjar.
“This new method of testing performed well in detecting those with mild cognitive impairment and, more specifically, in identifying patients with evidence of Alzheimer’s disease – even when the disease could not be easily detected using standard cognitive assessments.”
During the study, the researchers spent less than 10 minutes capturing each patient’s voice recording; traditional neuropsychological tests typically take several hours to administer.
“If confirmed by larger studies, using artificial intelligence and machine learning to study audio recordings could provide primary care providers with an easy-to-perform screening tool for high-risk individuals,” said Dr. Hajjar. “Earlier diagnoses will give patients and their families more time to plan ahead and give physicians greater flexibility in recommending promising lifestyle interventions.”
Dr. Hajjar collaborated on this study with a team of researchers at Emory, where he served as Director of the Clinical Trials Unit at the Goizueta Alzheimer’s Disease Research Center prior to joining UTSW in 2022. He continues to collect audio recordings in Dallas as part of a follow-up study at UTSW funded by a grant from the National Institutes of Health.
Funding: Research for this study was supported by grants from the National Institutes of Health/National Institute on Aging (AG051633, AG057470-01, AG042127) and the Alzheimer’s Drug Discovery Foundation (20150603).
Dr. Hajjar holds the Pogue University Distinguished Chair in Alzheimer’s Research and Clinical Care, in memory of Maureen and David Wiggers McMullan.
About this artificial intelligence and Alzheimer’s disease research news
author: Press Office
source: UT Southwestern
contact: Press Office – UT Southwestern
image: The image is in the public domain
Original research: Closed access.
“Development of digital voice biomarkers and associations with cognition, cerebrospinal fluid biomarkers, and neural metabolism in early Alzheimer’s disease” by Ihab Hajjar et al. Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring
Development of digital voice biomarkers and associations with cognition, cerebrospinal fluid biomarkers, and neural metabolism in early Alzheimer’s disease.
Advances in natural language processing (NLP), speech recognition, and machine learning (ML) allow exploration of linguistic and acoustic changes that were previously difficult to measure. We developed processes for deriving acoustic and lexical-semantic measures as digital voice biomarkers of Alzheimer’s disease (AD).
We collected speech, neuropsychological, neuroimaging, and cerebrospinal fluid (CSF) biomarker data from 92 cognitively unimpaired (40 Aβ+) and 114 impaired (63 Aβ+) participants. Acoustic and lexical-semantic features were derived from the audio recordings using ML approaches.
Lexical-semantic (area under the curve [AUC] = 0.80) and acoustic (AUC = 0.77) scores showed higher diagnostic performance for detecting MCI than the Boston Naming Test (AUC = 0.66). Lexical-semantic scores also detected amyloid-β positivity (p = 0.0003). Acoustic scores were associated with hippocampal volume (p = 0.017), while lexical-semantic scores were associated with CSF amyloid-β (p = 0.007). Both scores were significantly associated with 2-year disease progression.
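An AUC can be read as the probability that a randomly chosen impaired participant receives a higher score than a randomly chosen unimpaired one, which is what makes 0.80 vs. 0.66 directly comparable. A minimal sketch of that computation via the Mann-Whitney U statistic, using made-up scores rather than study data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative pairs where the positive case
    scores higher (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for impaired (pos) and unimpaired (neg) cases.
pos = [0.9, 0.8, 0.6, 0.4]
neg = [0.7, 0.5, 0.3, 0.2]
print(auc(pos, neg))  # 0.8125: most impaired cases outscore unimpaired ones
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, so the reported gap between the derived voice scores and the Boston Naming Test reflects a substantially better-ordered ranking of participants.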
These preliminary results indicate that derived digital biomarkers may identify cognitive impairment in preclinical and prodromal Alzheimer’s disease, and may predict disease progression.