Paper published in Microcomputer Applications
Learning algorithms for feedforward neural networks based on classical and initial-scaling quasi-Newton methods
Abstract
This paper describes a set of feedforward neural network learning algorithms based on classical quasi-Newton optimization techniques, which are shown to be up to two orders of magnitude faster than backpropagation. Learning performance is then improved further through initial scaling of the inverse Hessian approximation, which makes the quasi-Newton algorithms invariant to scaling of the objective function. Simulations show that initial scaling improves the learning rate of the quasi-Newton-based algorithms by up to 50%; overall, an improvement of two to three orders of magnitude over backpropagation is achieved. Finally, the best of these learning methods is used to develop a small writer-dependent online handwriting recognizer for the digits 0 through 9. The recognizer correctly labels its training data with an accuracy of 96.66%.
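The abstract does not include the authors' implementation. As a rough illustration of the technique it describes, the sketch below trains a tiny 2-2-1 network with BFGS, applying a Shanno-Phua-style initial scaling of the inverse Hessian approximation just before the first update. The network shape, the XOR task, the sum-of-squares loss, the backtracking line search, and the use of a numerical gradient in place of backpropagation are all assumptions made to keep the example short; they are not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: XOR, standing in for the paper's training tasks (assumption).
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    T = np.array([[0.], [1.], [1.], [0.]])

    # A 2-2-1 feedforward network whose 9 parameters live in one flat vector.
    shapes = [(2, 2), (2,), (1, 2), (1,)]  # W1, b1, W2, b2

    def unpack(w):
        parts, i = [], 0
        for sh in shapes:
            n = int(np.prod(sh))
            parts.append(w[i:i + n].reshape(sh))
            i += n
        return parts

    def loss(w):
        W1, b1, W2, b2 = unpack(w)
        h = np.tanh(X @ W1.T + b1)                    # hidden layer
        out = 1.0 / (1.0 + np.exp(-(h @ W2.T + b2)))  # sigmoid output
        return 0.5 * np.sum((out - T) ** 2)           # sum-of-squares error

    def grad(w, eps=1e-6):
        # Central-difference gradient; stands in for backpropagated
        # derivatives only to keep the sketch short.
        g = np.empty_like(w)
        for i in range(w.size):
            e = np.zeros_like(w)
            e[i] = eps
            g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
        return g

    def bfgs_train(w, n_iters=200):
        n = w.size
        H = np.eye(n)          # inverse-Hessian approximation, H0 = I
        g = grad(w)
        scaled = False
        for _ in range(n_iters):
            p = -H @ g         # quasi-Newton search direction
            # Backtracking (Armijo) line search on the error function.
            alpha, E0, slope = 1.0, loss(w), g @ p
            while loss(w + alpha * p) > E0 + 1e-4 * alpha * slope and alpha > 1e-10:
                alpha *= 0.5
            s = alpha * p
            w_new = w + s
            g_new = grad(w_new)
            y = g_new - g
            sy = s @ y
            if sy > 1e-12:     # curvature condition; skip the update otherwise
                if not scaled:
                    # Shanno-Phua-style initial scaling: H0 <- (s'y / y'y) I.
                    # This is the step that makes the method invariant to
                    # scaling of the objective function.
                    H = (sy / (y @ y)) * np.eye(n)
                    scaled = True
                rho = 1.0 / sy
                V = np.eye(n) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)  # BFGS inverse update
            w, g = w_new, g_new
        return w

    w = bfgs_train(rng.normal(scale=0.5, size=9))
    print(f"final sum-of-squares error: {loss(w):.6f}")

Skipping the inverse-Hessian update whenever the curvature term s'y is not positive keeps H positive definite, so every search direction remains a descent direction even on a nonconvex error surface.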