Learning from imaging data to model brain activity
In this article, we introduce VanDEEPol, a hybrid AI/mechanistic model to predict brain activity and structure from imaging data. The model significantly boosts predictive accuracy compared to previous methods. By predicting brain activity from relatively sparse imaging data, VanDEEPol may eventually help to detect medical disorders or design brain-computer interfaces.
Intricate interactions among billions of neurons occur constantly in our brains, underlying our thoughts, functions, and behaviors. With sophisticated techniques such as functional magnetic resonance imaging (fMRI) and calcium imaging (CaI), we can map these elaborate connections with unprecedented detail.
Algorithmic models that describe whole-brain activity based on these maps could yield clues into how our brains work, both normally and in disease states.
Such models could also inform the design of neurotechnological devices such as brain-computer interfaces. But brain activity is extraordinarily complex and challenging to model.
Imaging data include multiple variables, and neural activity follows nonlinear dynamics, which may not be captured by autoregressive models that predict future states by assuming a linear relationship to past states. Generic nonlinear models—such as deep recurrent neural networks (RNNs), which recognize the sequential nature of data and use patterns to predict future states—require large volumes of training data, which aren’t often available for brain imaging, and their results can be difficult to interpret.
As a result, building accurate, predictive models of brain activity from imaging data remains a practical problem. Motivated by this challenge, we developed a hybrid approach, called VanDEEPol,^{1} that combines nonlinear van der Pol (VDP) oscillators and an RNN to accurately predict brain activity based on imaging data. VDP accurately fits imaging data of different types and from different species and can generalize to unseen data. It identifies anatomically relevant interactions between brain areas, providing insights into their functional connectivity, and can supply an unlimited amount of simulated data to augment real imaging data for training an RNN.
We found that our hybrid model—VanDEEPol, combining VDP with deep learning—has a much better predictive performance than either component alone. Our model could potentially facilitate accurate predictions of brain activity that have practical applications in medical diagnosis and neurotechnology.
We start with a set of CaI data from zebrafish. To obtain functionally relevant information from these data, we perform singular value decomposition (SVD) analysis to identify the top six spatial and temporal components. Spatial components represent brain subsystems, while temporal components reflect changes in brain activity within those subsystems.
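The decomposition step can be sketched with NumPy's SVD. The matrix dimensions and the random stand-in data below are illustrative only, not the actual imaging dimensions:

```python
import numpy as np

# Hypothetical imaging matrix: rows are pixels/voxels, columns are time points.
rng = np.random.default_rng(0)
data = rng.standard_normal((500, 200))

# Truncated SVD: keep only the top six components.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
k = 6
spatial = U[:, :k]                # spatial components: brain subsystems (500 x 6)
temporal = s[:k, None] * Vt[:k]   # temporal components: activity traces (6 x 200)

# The rank-6 product approximates the original data matrix.
approx = spatial @ temporal
print(approx.shape)  # (500, 200)
```

The six temporal traces, rather than the raw voxel-level data, are what the dynamical model is fitted to.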
Brain activity is generated by coupled dynamics of neurons, which can be captured by VDP-like equations, and calcium dynamics in the brain are largely driven by transmembrane voltage and voltage-dependent calcium channels. We therefore model calcium dynamics in each SVD spatial component as a differential equation with a voltage-like variable (activity) and a recovery-like variable (excitability). Because imaging data only include information about activity, we must estimate the excitability variable.
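A minimal sketch of such a system, assuming simple linear coupling between components and illustrative parameter values (the article's actual equations and fitted parameters may differ):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Coupled van der Pol oscillators, one per SVD component.
# mu and the coupling matrix W are illustrative, not fitted values.
k = 6
mu = np.full(k, 0.5)
rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((k, k))

def vdp(t, state):
    x, y = state[:k], state[k:]           # x: activity, y: excitability
    dx = y
    dy = mu * (1 - x**2) * y - x + W @ x  # linear coupling between components
    return np.concatenate([dx, dy])

state0 = np.concatenate([rng.uniform(-1, 1, k), np.zeros(k)])
sol = solve_ivp(vdp, (0, 50), state0, t_eval=np.linspace(0, 50, 500))
activity = sol.y[:k]  # only this half is observed in imaging data
print(activity.shape)  # (6, 500)
```

Only `activity` corresponds to what the imaging data provide; the excitability half of the state is hidden and must be inferred.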
Parameter estimation for systems of differential equations is a nontrivial problem of significant interest across the fields of physics, biology, and computational research. The simple random sampling approach we tried first did not fit the training data well, so we developed a more advanced procedure. It combines stochastic search with deterministic optimization in an alternating fashion to find the hidden variable (excitability) from the observed one (activity).
Our stochastic search method is essentially a random walk in both the parameter and the hidden state space: a candidate random step is accepted if it improves a fitness function or discarded otherwise. In the deterministic optimization portion, we use nonlinear Kalman smoothing to refine the estimate of the previous state in light of later observations. This procedure greatly improves the fit of the model to our training data.
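The stochastic half of this procedure can be illustrated with a toy accept/reject random walk. The quadratic loss below stands in for the real fitness function, and the Kalman-smoothing half is omitted:

```python
import numpy as np

# Random walk over parameters: a candidate step is kept only if it
# improves the loss, and discarded otherwise.
rng = np.random.default_rng(2)

def loss(theta):
    # Toy stand-in for the model's fitness function.
    return float(np.sum((theta - 3.0) ** 2))

theta = np.zeros(4)
best = loss(theta)
for _ in range(2000):
    candidate = theta + 0.1 * rng.standard_normal(theta.shape)
    c = loss(candidate)
    if c < best:              # accept improving steps only
        theta, best = candidate, c

print(best)  # should approach 0
```

In the full procedure, steps of this kind alternate with nonlinear Kalman smoothing, which refines the hidden-state estimates deterministically.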
Applying VDP with this estimation procedure to the zebrafish CaI data set, as well as rat CaI and human fMRI data, the best-performing runs achieve correlations from 0.82 to 0.94 for the top six SVD components in each data set.
Accurately fitting training data isn’t enough to prove a model’s validity. The ultimate validation of a model’s ability to accurately capture the dynamics producing the data is generalization, or how well it predicts future brain states based on unseen data drawn from the same data distribution. In this case, we assess generalization ability by evaluating short-term and long-term predictive accuracy of future brain activity.
We compare the performance of VDP with that of a vector autoregressive (VAR) model and an RNN, two standard approaches to time-series forecasting. Because we lacked the large amount of training data often required for RNNs, we explored two different techniques for data augmentation as a means to improve RNN performance.
One simple and frequently used method is to employ noisy versions of the training data. As an alternative, we use VDP to simulate (unlimited) additional training data for the RNN. Combining VDP-based data augmentation with RNN is the hybrid model we call VanDEEPol (van der Pol equations combined with deep learning).
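The two augmentation strategies can be contrasted in a short sketch. The generator below is an invented stand-in that merely produces sequences of the right shape; in the real pipeline it would be the fitted VDP model:

```python
import numpy as np

rng = np.random.default_rng(3)
train = rng.standard_normal((6, 100))  # toy stand-in: 6 components x 100 time points

# Strategy 1: jitter the real training data with additive noise.
noisy_copies = [train + 0.05 * rng.standard_normal(train.shape) for _ in range(10)]

# Strategy 2 (the VanDEEPol idea): draw extra sequences from a generative
# model fitted to the data.
def simulate_vdp_like(n_steps, n_components, rng):
    # Hypothetical generator; the real one integrates the fitted VDP system.
    t = np.arange(n_steps)
    phases = rng.uniform(0, 2 * np.pi, n_components)
    return np.sin(0.2 * t[None, :] + phases[:, None])

simulated = [simulate_vdp_like(100, 6, rng) for _ in range(10)]
augmented = noisy_copies + simulated  # pooled training set for the RNN
print(len(augmented))  # 20
```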
We train a model on 100 consecutive points of each data segment, using existing data sets with several segments per species (5 for zebrafish, 8 for rat, and 10 for human), and then use the next 20 points (zebrafish), 176 points (rat), or 60 points (human) for testing/prediction. The short-term prediction task is to predict sequentially the first nine points of a test segment (adjacent to the training segment) in a recursive manner.
Given the last six points of the training data, we predict the first point in the test segment, shift the six-point input window forward by one point to incorporate the recently predicted point, use the new window to predict the next point, and so on, never using the observed points from the test data.
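This recursive windowing can be sketched as follows. Here `model_predict` is a hypothetical one-step forecaster (a simple linear extrapolation for illustration), not the trained RNN:

```python
import numpy as np

def model_predict(window):
    # Hypothetical one-step predictor: linear extrapolation from the
    # last two points of the window.
    return 2 * window[-1] - window[-2]

train_tail = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])  # last six training points
window = list(train_tail)
predictions = []
for _ in range(9):                  # first nine test points
    nxt = model_predict(np.array(window[-6:]))
    predictions.append(nxt)
    window.append(nxt)              # slide the window over the new prediction

print(predictions)
```

Note that observed test points never enter the window; each prediction is fed back as input for the next.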
We use a similar task to evaluate long-term predictive accuracy. This task involves all future test points and starts with a window of nine test points comprising the last six points of the training data plus short-term predictions of the first three test points, again never using the observed points from the test data. We do not include VDP in this task. Although a VDP model can make short-term predictions for time points that immediately follow the training interval (since it has an estimate of the hidden state variable at the last training point), it cannot be straightforwardly applied to long-term prediction for arbitrary intervals in the remote future, where the hidden state is unknown.
We compute the median Pearson correlation—the correlation coefficient between two sets of data—between the true and predicted time series and the median root-mean-square error (RMSE), over all data segments, all considered SVD components, and all testing intervals (for the short-term prediction task, there is only one).
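Both metrics are straightforward to compute; the values below are invented for illustration:

```python
import numpy as np

def pearson(a, b):
    # Pearson correlation between two series.
    return float(np.corrcoef(a, b)[0, 1])

def rmse(a, b):
    # Root-mean-square error between two series.
    return float(np.sqrt(np.mean((a - b) ** 2)))

true = np.array([1.0, 2.0, 3.0, 4.0])
pred = np.array([1.1, 1.9, 3.2, 3.8])

print(pearson(true, pred))  # close to 1 for a good prediction
print(rmse(true, pred))

# Median over several segments, as in the evaluation described above:
scores = [0.82, 0.90, 0.94]
print(np.median(scores))  # 0.9
```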
For the zebrafish and rat data sets, VanDEEPol often significantly outperforms the other methods; in the worst cases, it performs equally well. VDP performs well in terms of correlation but is outperformed by the RNN in the second half of the short-term prediction window for the zebrafish data set. However, VDP has a relatively high RMSE, which could be due to the considerable initial distances between modeled and true time series: despite the model’s overall good fit, the error for the last point of the training set (the initial condition for VDP predictions) might be significant.
The difference in performance between the RNN pretrained with noisy data and VanDEEPol suggests that VDP successfully learns some meaningful information about the underlying dynamics (reflected in its high correlation predictive performance), which is transferred to the RNN and improves its performance.
For short-term predictions on the human data set, all methods perform similarly in terms of correlation, and VAR outperforms the others in terms of RMSE up to three steps ahead, after which point the performance of all methods degrades severely. This is likely attributable to the nature of the fMRI signal. For long-term predictions, however, VanDEEPol outperforms VAR by a large margin, which again suggests that VDP captures additional features of the underlying dynamics that, combined with the RNN, improve even long-term predictions.
The results demonstrate that a VDP model can be fit to multivariate brain imaging data with a high degree of accuracy and can achieve a reasonable predictive performance in time-series forecasting. Furthermore, VDP can serve as a generative model for data augmentation to boost the predictive performance of deep RNNs.
VanDEEPol significantly outperforms other methods in predictive accuracy.
Still, we envision several promising refinements to further improve its performance or extend its applicability. These could include incorporating better parameter estimation techniques, richer and alternative models of brain dynamics, and broader spectra and longer timescales of neural activity.
In addition to predicting brain activity, we can use the VDP model to discover new information about its underlying dynamics. To do this, we developed a method to infer directional connectivity from the spatial SVD components of the imaging data. Applied to the zebrafish data set, this connectivity analysis identifies both excitatory and inhibitory connections between brain areas that are consistent with known anatomical demarcations and functional relationships.
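One simple way directional connectivity of this kind can be read off is from the sign and magnitude of a fitted coupling matrix: positive entries suggest excitatory influences, negative entries inhibitory ones. The matrix and threshold below are invented for illustration, not results from the zebrafish analysis:

```python
import numpy as np

# Hypothetical fitted coupling matrix between three components;
# entry (i, j) is the influence of component j on component i.
W = np.array([[0.0, 0.8, -0.3],
              [0.2, 0.0, 0.5],
              [-0.6, 0.1, 0.0]])

threshold = 0.25  # illustrative cutoff for "strong" connections
excitatory = np.argwhere(W > threshold)
inhibitory = np.argwhere(W < -threshold)
print(excitatory.tolist())  # [[0, 1], [1, 2]]
print(inhibitory.tolist())  # [[0, 2], [2, 0]]
```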
This degree of interpretability is not seen with other models. For example, connectivity analysis of data derived from VAR yields much less informative results, failing to distinguish between excitatory and inhibitory networks, which have fundamentally different structure and function. As such, VDP seems to offer a novel approach to discovering functional connectivity among brain circuits. It would be useful to benchmark this method against other recent data-driven approaches to further define its potential to deliver mechanistic insights.
VDP can be an effective, interpretable tool for brain-imaging data analysis. Its ability to accurately predict future brain activity based on imaging data could help physicians identify medical disorders or anticipate disease progression. It could also inform the development of technologies to infer people’s intentions from their brain signals, translate them into commands, and relay the instructions to output devices that accomplish those intentions. Such brain-computer interfaces could be valuable in restoring function compromised by injury, disability, or age.
In addition, the generative ability of VDP can significantly boost the predictive accuracy of deep learning methods on relatively small brain-imaging data sets, opening new avenues for data augmentation in spatiotemporal medical imaging. And because the connections among brain areas that VDP captures are anatomically and functionally relevant, it could be used to discover or characterize new interactions that expand our understanding of how the brain functions.
This work contributes to both basic and applied dimensions of neuroscience, deriving fundamental insights into how our brains work and utilizing that understanding to improve our quality of life.
References

1. Abrevaya, G., Dumas, G., Aravkin, A., et al. Learning Brain Dynamics With Coupled Low-Dimensional Nonlinear Oscillators and Deep Recurrent Networks. Neural Computation 2021; 33(8): 2087–2127.