The bionic DBMS is coming, but what will it look like?
Ryan Johnson, Ippokratis Pandis
CIDR 2013
One possible method of obtaining a neural network of an appropriate size for a particular problem is to start with a larger net and then prune it to the desired size. Training and retraining the net under all possible subsets of the set of synapses would result in a prohibitively long learning process; hence methods that avoid this exhaustive search have been proposed. Here we estimate the sensitivity of the global error (cost) function to the inclusion or exclusion of each synapse in the artificial neural network. We do so by introducing “shadow arrays” that keep track of the incremental changes to the synaptic weights during (a single pass of) backpropagation learning. The synapses are then ordered by decreasing sensitivity, so that the network can be efficiently pruned by discarding the last items of the sorted list. Unlike previous approaches, this simple procedure does not require a modification of the cost function, does not interfere with the learning process, and demands negligible computational overhead. © 1990 IEEE
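The abstract outlines the procedure but not its exact formulas, so the following is only a minimal Python/NumPy sketch under stated assumptions: a toy two-layer sigmoid network trained by plain backpropagation, shadow arrays that accumulate squared weight updates, and an assumed sensitivity estimate of the form S ≈ Σ(Δw)² · w_f / (η (w_f − w_i)). The network, data, variable names, and the precise sensitivity formula are illustrative assumptions, not taken from the paper.

```python
# Sketch of sensitivity-based pruning with "shadow arrays" (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D inputs with an XOR-like binary target (illustrative only).
X = rng.uniform(-1, 1, size=(256, 2))
y = (np.sign(X[:, 0] * X[:, 1]) > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, sigmoid activations, mean-squared error.
n_in, n_hid, n_out = 2, 8, 1
W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_out))
W1_init, W2_init = W1.copy(), W2.copy()

# Shadow arrays: accumulate squared incremental weight changes during training.
S1, S2 = np.zeros_like(W1), np.zeros_like(W2)

eta, epochs = 0.5, 200
for _ in range(epochs):
    # Forward pass.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass for the MSE loss.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    dW2 = -eta * h.T @ d_out / len(X)
    dW1 = -eta * X.T @ d_h / len(X)
    # Apply the updates and record them in the shadow arrays.
    W2 += dW2
    W1 += dW1
    S2 += dW2 ** 2
    S1 += dW1 ** 2

# Assumed per-synapse sensitivity estimate: S * w_f / (eta * (w_f - w_i)).
def sensitivity(shadow, w_final, w_init, eta, eps=1e-12):
    return shadow * w_final / (eta * (w_final - w_init) + eps)

sens = np.concatenate([sensitivity(S1, W1, W1_init, eta).ravel(),
                       sensitivity(S2, W2, W2_init, eta).ravel()])

# Order synapses by decreasing sensitivity and prune (zero out) the tail,
# as the abstract describes.
prune_frac = 0.5
order = np.argsort(-np.abs(sens))                     # most sensitive first
keep = np.zeros(sens.size, dtype=bool)
keep[order[: int((1 - prune_frac) * sens.size)]] = True
W1 *= keep[: W1.size].reshape(W1.shape)
W2 *= keep[W1.size:].reshape(W2.shape)
print("kept synapses:", keep.sum(), "of", keep.size)
```

In this sketch pruning is modeled by zeroing the low-sensitivity weights; the key point is that the shadow arrays are filled as a by-product of the ordinary weight updates, so no extra training passes or cost-function changes are needed.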
Saurabh Paul, Christos Boutsidis, et al.
JMLR
Elron Bandel, Yotam Perlitz, et al.
NAACL 2024
Kazuaki Ishizaki, Takeshi Ogasawara, et al.
VEE 2012