Publication: Journal of Alzheimer's Disease

Combining Multimodal Behavioral Data of Gait, Speech, and Drawing for Classification of Alzheimer's Disease and Mild Cognitive Impairment


Abstract

Background: Gait, speech, and drawing behaviors have been shown to be sensitive to the diagnosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI). However, previous studies analyzed only individual behavioral modalities, although they suggested that each modality may capture different profiles of the cognitive impairments associated with AD. Objective: We aimed to investigate whether combining behavioral data of gait, speech, and drawing improves classification performance compared with using each modality individually, and whether each modality is associated with different cognitive and clinical measures for the diagnosis of AD and MCI. Methods: Behavioral data of gait, speech, and drawing were acquired from 118 AD, MCI, and cognitively normal (CN) participants. Results: Combining all three behavioral modalities achieved 93.0% accuracy for classifying AD, MCI, and CN, compared with 81.9% for the best individual behavioral modality. Each behavioral modality was statistically significantly associated with different cognitive and clinical measures for diagnosing AD and MCI. Conclusion: Our findings indicate that these behaviors provide different and complementary information about cognitive impairments, such that classification of AD and MCI with all three is superior to using any modality in isolation.

Authors’ notes

As the world's older adult population grows, the number of people living with dementia is increasing rapidly, making it a serious health and social problem. AD is the most common form of dementia, accounting for an estimated 60-80% of cases. Although no cure for AD is available, early identification, especially at its early stages such as MCI, is urgently needed because modifiable risk factors and interventions that could prevent or delay progression have been suggested. In this context, there is growing interest in developing accurate and easy-to-perform screening tools for the early detection of AD and MCI.

In a new paper published in the Journal of Alzheimer's Disease, we present a new screening model for detecting AD and MCI that combines daily behavioral data on gait, speech, and drawing. Unlike previous models that use each behavior in isolation, our model exploits the different and complementary information that multimodal behavioral data carry about the cognitive impairments associated with AD. On our new dataset collected from 118 older adults, the model achieved 93% accuracy for classifying AD, MCI, and CN (100% accuracy for AD vs. CN; 89.5% accuracy for MCI vs. CN).

The IBM Research approach and results

Combining multimodal behavioral data is the focus of this work. Gait, speech, and drawing analyses have garnered increasing attention as easy-to-perform screening tools for AD and MCI. The characteristics of each of these behaviors have been shown to change in AD and MCI patients and to be associated with cognitive impairments in specific domains. Although most existing research has investigated each behavior in isolation, the heterogeneity of the brain regions involved in their execution, as well as of their behavior-cognition relationships, suggests that each behavior may capture different and complementary profiles of the cognitive impairments associated with AD. We therefore hypothesized that gait, speech, and drawing provide complementary information about cognitive impairments and that combining them yields higher accuracy for AD and MCI detection.

Figure: Multimodal analysis for detecting AD. Overview of our screening model for classifying AD, MCI, and CN. (A) Behavioral modalities and examples of feature categories. (B) Examples of changes in gait features. (C) Three-class classification performance resulting from 10-fold cross-validation. (D) Associations of behavioral features with cognitive and clinical measures for diagnosing AD and MCI. Edges represent statistically significant associations based on multiple linear regression controlling for age and sex.
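
As a minimal sketch of how one association edge in panel (D) can be estimated, here is a regression of a cognitive measure on a behavioral feature, controlling for age and sex. This assumes statsmodels; the data and column names are illustrative, not the paper's, and the paper's exact model specification may differ:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant table: one behavioral feature (gait speed),
# one cognitive/clinical measure, plus age and sex as covariates.
df = pd.DataFrame({
    "cognitive_score": [29, 24, 27, 22, 30, 25, 28, 21],
    "gait_speed":      [1.21, 0.95, 1.10, 0.88, 1.30, 0.97, 1.15, 0.85],
    "age":             [72, 80, 75, 83, 70, 78, 74, 82],
    "sex":             ["F", "M", "F", "M", "F", "M", "F", "M"],
})

# Multiple linear regression controlling for age and sex (C() marks sex as
# categorical). A significant gait_speed coefficient would correspond to
# an edge in panel (D).
model = smf.ols("cognitive_score ~ gait_speed + age + C(sex)", data=df).fit()
print(model.summary())
```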

Our model first extracts behavioral features from the gait, speech, and drawing data in a comprehensive fashion (Fig. A). For example, from gait data recorded during normal walking, it extracts gait features in five categories: pace (e.g., gait speed), rhythm (e.g., step time), variability (e.g., step length variability), left-right asymmetry (e.g., the difference between left and right step times), and postural control (e.g., mediolateral fluctuation). Similarly, speech data are characterized by acoustic, prosodic, and linguistic features, and drawing data by kinematic, writing-pressure, and time-related features. In our study, all three behavioral modalities had features that significantly differentiated AD or MCI patients from CN participants. For example, AD and MCI patients walked with lower gait speed, shorter step length, longer step time, greater variability, greater left-right asymmetry, and greater mediolateral fluctuation (Fig. B).
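
To make the feature-extraction step concrete, here is a minimal Python sketch computing simplified stand-ins for four of the five gait feature categories from per-step measurements. The function name and exact feature definitions are ours, not the paper's, and postural-control features would additionally require trunk accelerometry:

```python
import numpy as np

def gait_features(step_times, step_lengths):
    """Simplified gait features from per-step measurements.

    step_times: step durations in seconds, alternating left/right steps.
    step_lengths: step lengths in meters, in the same order.
    """
    step_times = np.asarray(step_times, dtype=float)
    step_lengths = np.asarray(step_lengths, dtype=float)

    # Pace: average gait speed in m/s.
    speed = step_lengths.sum() / step_times.sum()
    # Rhythm: mean step time.
    mean_step_time = step_times.mean()
    # Variability: coefficient of variation of step length.
    step_length_cv = step_lengths.std() / step_lengths.mean()
    # Left-right asymmetry: mean absolute difference between the step
    # times of alternating (left/right) steps.
    left, right = step_times[0::2], step_times[1::2]
    k = min(len(left), len(right))
    asymmetry = np.abs(left[:k] - right[:k]).mean()

    return {
        "pace_gait_speed_mps": speed,
        "rhythm_step_time_s": mean_step_time,
        "variability_step_length_cv": step_length_cv,
        "asymmetry_step_time_s": asymmetry,
    }

print(gait_features([0.55, 0.52, 0.56, 0.53], [0.62, 0.60, 0.63, 0.61]))
```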

By effectively combining these behavioral features, our model classifies individuals into the three diagnostic categories of AD, MCI, and CN. Models using two modalities achieved better classification performance than those using a single modality, and combining all three behavioral modalities consistently achieved more accurate classifications than combining any two (Fig. C). Our model improved accuracy by 11.1 percentage points compared with a model based on previous studies using single behavioral modalities, achieving 93% accuracy for classifying AD, MCI, and CN (AuROC of 0.98). We also showed that the gain in classification performance from combining multimodal behaviors results from their complementary nature with respect to the cognitive and clinical measures used to diagnose AD and MCI: regression analyses showed that the gait, speech, and drawing data have different and complementary associations with cognitive impairments (Fig. D).
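
One simple way to realize this combination is early fusion: concatenate the per-modality feature vectors into a single vector per participant and train one classifier, evaluated with 10-fold cross-validation as in Fig. C. The sketch below assumes scikit-learn and synthetic data; the paper's actual classifier, feature selection, and evaluation pipeline may differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 118  # number of participants in the study

# Synthetic per-participant feature matrices, one per modality
# (rows: participants; columns: extracted behavioral features).
X_gait = rng.normal(size=(n, 50))
X_speech = rng.normal(size=(n, 40))
X_drawing = rng.normal(size=(n, 30))
y = rng.integers(0, 3, size=n)  # synthetic labels: 0 = CN, 1 = MCI, 2 = AD

# Early fusion: one concatenated feature vector per participant.
X_multi = np.hstack([X_gait, X_speech, X_drawing])

clf = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X_multi, y, cv=cv, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.3f}")
```

With synthetic labels the accuracy hovers around chance; the point is the fusion and evaluation structure, not the numbers.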

Implications for early detection of AD

Our results have important implications for developing screening tools in both clinical and free-living settings. First, in terms of clinical implications, these behavioral data can be easily acquired in routine clinical practice: speech and drawing data can be collected during standard cognitive tests, and a gait test can be easily incorporated into a clinical workflow. Our results therefore suggest that combining such multimodal behavioral data can help clinicians accurately detect patients with AD and MCI. It would also be useful as an easy-to-perform screening tool for selecting individuals who should be further examined with biomarkers in clinical practice and clinical treatment trials.

Second, our study illustrates the potential value of daily behavioral data for screening for AD and MCI. Compared with other approaches using computerized assessment tools, including digitized cognitive tests, our focus on behavioral data may help promote future efforts toward continuous and passive monitoring tools for the early detection of AD from data collected in everyday life. Further, we showed that our model's performance was significantly higher than the baseline AuROC of 0.86 obtained using scores on one of the most standard cognitive tests, the Mini-Mental State Examination (MMSE). Our approach of combining multimodal daily behavioral data may thus serve as a reliable, easy-to-perform screening tool for identifying AD and MCI.
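
As a minimal sketch of how such a baseline comparison can be computed (this assumes scikit-learn; the labels and scores below are synthetic):

```python
from sklearn.metrics import roc_auc_score

# Synthetic binary labels (1 = impaired, 0 = CN), model probabilities,
# and raw MMSE scores for the same participants.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
model_prob = [0.92, 0.81, 0.12, 0.30, 0.77, 0.25, 0.88, 0.15]
mmse = [21, 24, 29, 28, 23, 30, 22, 27]

auc_model = roc_auc_score(y_true, model_prob)
# Lower MMSE means more impairment, so negate it to use it as a risk score.
auc_mmse = roc_auc_score(y_true, [-s for s in mmse])
print(f"model AuROC = {auc_model:.2f}, MMSE baseline AuROC = {auc_mmse:.2f}")
```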

Future work

There are two main exciting directions for extending the scope of our approach. The first is to investigate whether combining multiple behavioral modalities can improve the estimation of neuropathological changes. Because validated diagnostic biomarkers, such as those based on cerebrospinal fluid or PET imaging, are either invasive or expensive, there is an urgent need for non-invasive and inexpensive screening tools that allow early detection of neuropathological changes. The second is to investigate whether our multimodal behavioral approach can reliably identify AD and MCI from behavioral data collected in free-living situations. This would allow for regular and timely screening and help lower the barrier to early detection of AD.

Details

For more details, please see the original paper.[1] This work is part of our long-term joint effort with universities toward the early detection of AD and other cognitive, physical, and mental health problems through the analysis of daily behaviors. Other results so far include detecting AD by analyzing conversational content during phone calls,[2] detecting mental fatigue by analyzing eye movements while watching videos,[3] and analyzing conversational speech with AI chatbots for detecting MCI,[4] assessing loneliness,[5] and predicting future driving accident risks.[6]

Bibliography

[1]: Yamada Y, Shinkawa K, Kobayashi M, Caggiano V, Nemoto M, Nemoto K, et al. Combining Multimodal Behavioral Data of Gait, Speech, and Drawing for Classification of Alzheimer's Disease and Mild Cognitive Impairment. J Alzheimers Dis. 2021 Jan 1;84(1):315–327.
[2]: Yamada Y, Shinkawa K, Shimmei K. Atypical Repetition in Daily Conversation on Different Days for Detecting Alzheimer Disease: Evaluation of Phone-Call Data From Regular Monitoring Service. JMIR Ment Health. 2020 Jan 12;7(1).
[3]: Yamada Y, Kobayashi M. Detecting mental fatigue from eye-tracking data gathered while watching video: Evaluation in younger and older adults. Artif Intell Med. 2018 Sep 18;91:39–48.
[4]: Yamada Y, Shinkawa K, Kobayashi M, Nishimura M, Nemoto M, Tsukada E, et al. Tablet-Based Automatic Assessment for Early Detection of Alzheimer’s Disease Using Speech Responses to Daily Life Questions. Front Digit Health. 2021 Mar 17;3:653904.
[5]: Yamada Y, Shinkawa K, Nemoto M, Arai T. Automatic Assessment of Loneliness in Older Adults Using Speech Analysis on Responses to Daily Life Questions. Front Psychiatry. 2021 Dec;12:712251.
[6]: Yamada Y, Shinkawa K, Kobayashi M, Takagi H, Nemoto M, Nemoto K, et al. Using Speech Data From Interactions With a Voice Assistant to Predict the Risk of Future Accidents for Older Drivers: Prospective Cohort Study. J Med Internet Res. 2021 Apr 8;23(4).