New research presents the first empirical evidence that tablet-based automatic assessment of patients using speech analysis can detect mild cognitive impairment.
There’s no cure for Alzheimer’s disease. But the earlier it’s diagnosed, the better the chances of delaying its progression.
Our joint team of researchers from IBM and the University of Tsukuba has developed an AI model that could help detect the onset of mild cognitive impairment (MCI) — the transitional stage between normal aging and dementia — by asking older people typical daily questions. In a new paper published in the journal Frontiers in Digital Health,1 we present the first empirical evidence of tablet-based automatic assessment of patients using speech analysis to successfully detect MCI.
Unlike previous studies, our AI-based model analyzes speech responses to daily life questions collected through a smartphone or tablet app. Such questions could be as simple as asking someone about their mood, plans for the day, physical condition, or yesterday’s dinner. Earlier studies mostly focused on analyzing speech responses during cognitive tests, such as asking a patient to “count down from 925 by threes” or “describe this picture in as much detail as possible.”
We found that the detection accuracy of our tests based on answers to simple daily life questions was comparable to the results of cognitive tests — detecting signs of MCI with an accuracy of nearly 90 percent. This means that such an AI could be embedded into smart speakers or similar commercially available smart home technology for health monitoring, to help detect early changes in cognitive health through daily usage.
Our results are particularly promising because cognitive tests are much more burdensome for participants: they must follow complicated instructions, often under a heavy cognitive load, which prevents the frequent assessments needed for timely, early detection of Alzheimer’s. Relying on more casual speech data, though, could allow much more frequent assessments at lower operational and cognitive cost.
For our analysis, we first collected speech responses from 76 Japanese seniors — including people with MCI. We then analyzed multiple types of speech features, such as pitch and how often people would pause when they talked.
We knew that capturing subtle cognitive differences from casual, low-cognitive-load conversation would be tricky: for each speech feature, the differences between people with MCI and healthy people tend to be smaller than in responses to cognitive tests.
We overcame this challenge by combining responses to multiple questions designed to capture the changes in memory and executive function — in addition to language function — associated with MCI and dementia. For example, the AI-based app would ask: “What did you eat for dinner yesterday?” A senior with MCI could respond: “I had Japanese noodles with tempura — tempura of shrimps, radish, and mushroom.”
This response may seem perfectly fine. But the AI could capture differences in paralinguistic features such as pitch, pauses, and other acoustic characteristics of the voice. We discovered that, compared with cognitive tests, daily life questions elicit weaker but statistically discernible differences in speech features associated with MCI. Our AI detected MCI with an accuracy of 86.4 percent, statistically comparable to the model using responses to cognitive tests.
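The idea of combining weak per-question signals into one usable decision can be sketched with a toy logistic regression on synthetic feature vectors. Nothing here reflects the study’s actual classifier, cohort, or features — the feature names, shapes, and numbers are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 paralinguistic features (e.g. pitch variability,
# pause ratio, response delay) per question, across 4 daily life questions.
# Each row of X concatenates the features from all questions into one vector.
FEATURES_PER_QUESTION = 3
N_QUESTIONS = 4
DIM = FEATURES_PER_QUESTION * N_QUESTIONS

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Tiny logistic regression via gradient descent, standing in for
    whatever classifier the study actually used."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Synthetic cohort: the MCI group (y=1) shifts slightly on every feature.
# Each shift is weak on its own but informative when combined.
n = 60
X = np.vstack([rng.normal(0.0, 1.0, (n, DIM)),   # healthy
               rng.normal(0.4, 1.0, (n, DIM))])  # MCI (illustrative shift)
y = np.concatenate([np.zeros(n), np.ones(n)])

w, b = fit_logistic(X, y)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

The design point mirrors the paper’s argument: no single feature separates the groups well, but concatenating features from several questions gives the classifier enough weak evidence to make a reliable call.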
1. Yamada, Y. et al. Tablet-Based Automatic Assessment for Early Detection of Alzheimer’s Disease Using Speech Responses to Daily Life Questions. Front. Digit. Health 3 (2021).
2. Eyigoz, E., Mathur, S., Santamaria, M., Cecchi, G. & Naylor, M. Linguistic markers predict onset of Alzheimer’s disease. EClinicalMedicine 28, 100583 (2020).