FIG. 1: (a) In biological synapses, during synaptic integration, dendritic spikes can enhance the impact of synchronous inputs from dendrites belonging to the same dendritic tree. Excitatory postsynaptic potentials (EPSPs) with the same amplitude but different locations in the dendritic tree may lead to different responses. For example, dendrites i, iv and viii send similar signals, but only i and iv contribute to driving an action potential (AP), since their respective dendritic trees receive sufficient additional excitation from other connected dendrites. In the top image, the postsynaptic neuron (dark blue) receives inputs mostly from dendrites generating strong EPSPs (orange) and only a few from dendrites generating weak EPSPs (yellow). The bottom postsynaptic neuron (light blue) receives most of its inputs from weak-EPSP dendrites. Owing to this dendritic distribution, the dark blue neuron exhibits a higher firing probability, and its importance is therefore higher than that of the light blue neuron. (b) The structure of a fully connected neural network (FCNN) is much simpler than that of biological neurons, whose presynaptic connections are arranged in dendritic trees. However, analogously to panel (a), the importance of each node arises from the distribution of weight strengths within each layer. The dark blue node has a high node importance, since most of its incoming synapses are strong. Conversely, the light blue node's importance is lower, since its presynaptic population exhibits a weaker mean strength.
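The layer-wise notion of node importance in panel (b) can be illustrated with a minimal sketch. Purely for illustration, we assume here that a node's importance is proxied by the mean absolute strength of its incoming weights, normalized over the layer; the exact definition used by GRAPES is given in the main text and may differ.

```python
import numpy as np

# Hypothetical fully connected layer: 4 postsynaptic nodes, each
# receiving 6 incoming synapses (rows index postsynaptic nodes).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))

# Assumed importance proxy: mean absolute incoming synaptic strength
# of each node, normalized across the layer so importances sum to 1.
importance = np.abs(W).mean(axis=1)
importance /= importance.sum()

for i, imp in enumerate(importance):
    print(f"node {i}: importance {imp:.3f}")
```

A node whose incoming weights are mostly strong (the dark blue node in the figure) receives a larger share of the normalized importance than a node fed by a weaker presynaptic population (the light blue node).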
Additionally, biologically plausible training schemes such as feedback alignment benefit greatly from our optimization method; hardware devices that support feedback alignment but not backpropagation are therefore a promising area of application for our work.
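In feedback alignment, the backward pass propagates the error through fixed random feedback matrices instead of the transposed forward weights, which removes the weight-transport requirement that makes backpropagation hard to realize in hardware. The following toy sketch (with made-up dimensions and a single regression target, not taken from our experiments) shows where the random feedback matrix replaces the transpose:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-sample regression task and two-layer network.
x = rng.normal(size=20)                     # input
y = rng.normal(size=5)                      # target
W1 = rng.normal(scale=0.1, size=(20, 10))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(10, 5))    # hidden -> output weights
B = rng.normal(scale=0.1, size=(5, 10))     # fixed random feedback matrix

lr = 0.05
for step in range(200):
    h = np.maximum(x @ W1, 0.0)             # forward pass, ReLU hidden layer
    e = h @ W2 - y                          # output error
    # Backward pass: where backpropagation would use W2.T, feedback
    # alignment sends the error through the fixed random matrix B.
    dh = (e @ B) * (h > 0)
    W2 -= lr * np.outer(h, e)
    W1 -= lr * np.outer(x, dh)

print("final squared error:", np.sum(e**2))
```

Because B never changes, the circuit needs no symmetric backward pathway, which is what makes the scheme attractive for neuromorphic and analog hardware.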
Furthermore, GRAPES improves the performance of spiking neural networks (SNNs). SNNs offer an energy-efficient alternative for implementing deep learning applications; however, they still lag behind artificial neural networks (ANNs) in terms of accuracy. Our work paves the way for biologically inspired algorithms to narrow the gap between the performance of SNNs and ANNs, enabling applications in the rapidly growing field of neuromorphic chips.
Conclusion
Our results demonstrate that GRAPES not only provides substantial improvements in the performance of deep artificial and spiking neural networks, but also mitigates the accuracy degradation caused by catastrophic forgetting.