Publication
International Journal of Systems Science
Paper
Stochastic sensitivity analysis method for neural network learning
Abstract
A new, efficient algorithm is developed for the sensitivity analysis of a class of continuous-time recurrent neural networks with additive noise signals. The algorithm is based on the stochastic sensitivity analysis method using the variational approach, and formal expressions are obtained for the functional derivative sensitivity coefficients. The algorithm uses only the internal states and noise signals to compute the gradient information needed in the gradient descent method, so the evaluation of derivatives is not necessary; in particular, it does not require the solution of adjoint equations of the back-propagation type. The algorithm therefore has the potential to learn the network weights efficiently with significantly fewer computations. The effectiveness of the algorithm is shown in a statistical sense, and the method is applied to the familiar layered network. © 1995 Taylor & Francis Group, LLC.
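The abstract gives no equations for the learning rule, so the following is only a minimal, hypothetical Python sketch of a noise-correlation ("node perturbation") gradient estimate for a small discrete-time recurrent network. It illustrates the general idea stated above, namely computing gradient information from internal states and injected additive noise without evaluating derivatives or solving adjoint (back-propagation-type) equations, but it is not the paper's algorithm: the network sizes, the discrete-time dynamics, the learning rule, and all names (run, noise_gradient_step, W_rec, and so on) are assumptions made for illustration.

```python
# Hypothetical sketch of noise-correlation ("node perturbation") gradient
# estimation for a small recurrent network. NOT the paper's algorithm; it
# only illustrates estimating gradients from internal states and injected
# additive noise, with no derivative evaluation and no adjoint equations.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT, T = 2, 8, 1, 20          # assumed toy sizes
W_in  = rng.normal(0, 0.5, (N_HID, N_IN))
W_rec = rng.normal(0, 0.5, (N_HID, N_HID))
W_out = rng.normal(0, 0.5, (N_OUT, N_HID))

def run(x_seq, target, sigma=0.0):
    """Run the recurrent net; optionally inject additive noise into the
    hidden pre-activations. Returns the loss and recorded states/noise."""
    h = np.zeros(N_HID)
    states, noises, loss = [], [], 0.0
    for x, y in zip(x_seq, target):
        xi = sigma * rng.normal(size=N_HID)      # additive noise signal
        u = W_in @ x + W_rec @ h + xi            # noisy internal state
        h = np.tanh(u)
        out = W_out @ h
        loss += 0.5 * np.sum((out - y) ** 2)
        states.append(h.copy())
        noises.append(xi.copy())
    return loss, np.array(states), np.array(noises)

def noise_gradient_step(x_seq, target, sigma=0.05, lr=0.002):
    """One update: correlate the injected noise with the induced loss
    change to get a stochastic gradient estimate for W_rec."""
    global W_rec
    loss0, states, _ = run(x_seq, target, sigma=0.0)    # noise-free baseline
    loss1, _, noises = run(x_seq, target, sigma=sigma)  # perturbed run
    # Estimate: (delta-loss / sigma^2) * sum_t xi_t h_{t-1}^T; uses only
    # recorded states and noise, no derivatives of the network dynamics.
    prev = np.vstack([np.zeros((1, N_HID)), states[:-1]])
    g_rec = (loss1 - loss0) / sigma**2 * (noises.T @ prev)
    W_rec -= lr * g_rec
    return loss0

# Toy usage: drive the net toward a constant target output.
x_seq  = rng.normal(size=(T, N_IN))
target = np.full((T, N_OUT), 0.5)
for epoch in range(500):
    loss = noise_gradient_step(x_seq, target)
print("final noise-free loss:", loss)
```

In practice an estimate of this kind is averaged over many noisy runs to reduce its variance; the paper's continuous-time formulation and its functional derivative sensitivity coefficients are not reproduced in this sketch.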