Researchers are striving to make earlier diagnosis of Alzheimer’s dementia possible with a machine-learning (ML) model that could one day be turned into a simple screening tool anyone with a smartphone could use. The model was able to distinguish Alzheimer’s patients from healthy controls with 70 to 75 percent accuracy, a promising result for the more than 747,000 Canadians who have Alzheimer’s or another form of dementia.
Alzheimer’s dementia can be challenging to detect at early stages because the symptoms often start out subtly and can be mistaken for the memory lapses typical of advanced age. But as the researchers note, the earlier potential issues are detected, the sooner patients can begin to take action.
“Before, you’d need lab work and medical imaging to detect brain changes; this takes time, it’s expensive, and nobody gets tested this early on,” said Eleni Stroulia, a professor in the Department of Computing Science who was involved in creating the model. “If you could use mobile phones to get an early indicator, that would inform the relationship of the patient with their physician. It would potentially start the treatment earlier, and we could even start with simple interventions at home, also with mobile devices, to slow the progression down.”
A screening tool would not take the place of healthcare professionals. However, in addition to aiding in earlier detection, it would create a convenient way to identify potential concerns via telehealth for patients who may face geographic or linguistic barriers to accessing services in their area, explained Zehra Shah, a master’s student in the Department of Computing Science and first author of the paper.
“We can think about triaging patients using this sort of technology based entirely on speech alone,” said Shah.
While the research group had previously looked at the language used by Alzheimer’s dementia patients, for this project the researchers examined language-agnostic acoustic and linguistic speech features rather than specific words.
“The original work involved listening to what the person says, understanding what they say, the meaning. That’s an easier computational problem to solve,” said Stroulia. “Now we’re saying, listen to the voice. There are some properties in the way people speak that transcend language. It’s much more powerful than the version of the problem we were solving before,” she added.
The researchers started with speech characteristics that doctors noted were common in patients with Alzheimer’s dementia. These patients tended to speak more slowly, with more pauses or disruptions in their speech. They typically used shorter words, and their speech was often less intelligible. The researchers found ways to translate these characteristics into speech features the model could screen for.
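As a rough illustration of how clinically observed characteristics like these might be turned into numbers a model can screen, the sketch below derives a speaking ratio, a pause count and averaged MFCCs from a single recording and feeds them to a generic classifier. The libraries (librosa, scikit-learn), the feature choices and the threshold values are assumptions made for illustration; they are not the features or model used in the study.

```python
# A minimal sketch of screening on language-agnostic speech features.
# Assumptions (not from the study): librosa for audio handling,
# scikit-learn's LogisticRegression as a stand-in classifier, and
# illustrative features (speaking ratio, pause count, MFCC averages).

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def extract_features(path, top_db=30):
    """Turn one recording into a fixed-length feature vector."""
    y, sr = librosa.load(path, sr=16000)
    total = len(y) / sr                       # recording length in seconds

    # Non-silent intervals stand in for stretches of speech;
    # the gaps between them approximate pauses and disruptions.
    intervals = librosa.effects.split(y, top_db=top_db)
    voiced = sum(end - start for start, end in intervals) / sr
    speaking_ratio = voiced / total           # lower may suggest slower speech
    pause_count = max(len(intervals) - 1, 0)  # breaks between voiced segments

    # Averaged MFCCs as a crude, language-agnostic acoustic summary.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

    return np.concatenate([[speaking_ratio, pause_count / total], mfcc])

# Training would use recordings labelled by clinicians
# (1 = Alzheimer's dementia, 0 = healthy control); the paths and
# labels below are placeholders, so the calls are left commented out.
# X = np.stack([extract_features(p) for p in labelled_paths])
# clf = LogisticRegression(max_iter=1000).fit(X, labels)
# clf.predict([extract_features("new_recording.wav")])  # yes/no screening output
```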
Though the researchers focused on English and Greek speakers, “this technology has the potential to be used across different languages,” said Shah. And though the model itself is complex, the eventual user experience for a tool that incorporates it couldn’t be simpler.
“A person talks into the tool, it does an analysis and makes a prediction: either yes, the person has Alzheimer’s, or no they don’t,” said Russ Greiner, a contributor to the paper, professor in the Department of Computing Science, and member of the Neuroscience and Mental Health Institute. That information can then be brought to a healthcare professional to determine the best course of action for the person.
Greiner and Stroulia both lead the computational psychiatry research group at the U of A, whose members have crafted similar AI models and tools to detect psychiatric disorders such as PTSD, schizophrenia, depression and bipolar disorder.
“Anything we can do to amplify the clinical processes, inform treatments and manage diseases sooner with less cost is great,” said Stroulia.