Chen-Yong Cher, Michael Gschwind
VEE 2008
One possible method of obtaining a neural network of an appropriate size for a particular problem is to start with a larger net, then prune it to the desired size. Training and retraining the net under all possible subsets of the set of synapses would result in a prohibitively long learning process; hence methods that avoid this exhaustive search have been proposed. Here we estimate the sensitivity of the global error (cost) function to the inclusion/exclusion of each synapse in the artificial neural network. We do so by introducing "shadow arrays" that keep track of the incremental changes to the synaptic weights during (a single pass of) backpropagation learning. The synapses are then ordered by decreasing sensitivity numbers so that the network can be efficiently pruned by discarding the last items of the sorted list. Unlike previous approaches, this simple procedure does not require a modification of the cost function, does not interfere with the learning process, and demands a negligible computational overhead. © 1990 IEEE
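The abstract above describes a sensitivity-based pruning procedure: shadow arrays accumulate the incremental weight changes during ordinary backpropagation training, a sensitivity number is then computed per synapse, and the least sensitive synapses are discarded. Below is a minimal NumPy sketch of that idea, assuming a single linear layer trained by batch gradient descent on toy data; the variable names and the exact form of the sensitivity estimate are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy regression problem: only some input weights actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
true_w = np.array([3.0, -2.0, 0.0, 0.0, 1.5, 0.0, 0.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

eta = 0.01                        # learning rate (illustrative value)
w = rng.normal(scale=0.1, size=8)
w_init = w.copy()
shadow = np.zeros_like(w)         # "shadow array": accumulates squared weight increments

for _ in range(500):              # plain gradient descent; the learning rule is untouched
    grad = 2.0 / len(X) * X.T @ (X @ w - y)
    dw = -eta * grad              # incremental change applied this step
    shadow += dw ** 2             # bookkeeping only, negligible overhead
    w += dw

# One common form of the per-weight sensitivity estimate (assumed here, not quoted
# from the paper): accumulated movement scaled by final weight over total change,
# with a guard against near-zero denominators.
denom = np.where(np.abs(w - w_init) > 1e-12, w - w_init, 1e-12)
sensitivity = shadow / eta * w / denom

# Prune: keep the k most sensitive synapses, zero out the rest.
k = 4
keep = np.argsort(-np.abs(sensitivity))[:k]
pruned_w = np.zeros_like(w)
pruned_w[keep] = w[keep]
print("kept indices:", sorted(keep.tolist()))
print("pruned weights:", np.round(pruned_w, 2))
```

Because the shadow array is updated alongside the normal weight update, the sensitivity ranking falls out of a single training pass with no change to the cost function, which is the property the abstract emphasizes.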
Gaku Yamamoto, Hideki Tai, et al.
AAMAS 2008
Robert Farrell, Rajarshi Das, et al.
AAAI-SS 2010
Alain Vaucher, Philippe Schwaller, et al.
AMLD EPFL 2022