Applied Mathematics Letters
Accelerating neural net dynamics by boundary layer methods
Abstract
A boundary layer method is presented for accelerating the solution of the differential equations governing the dynamics of an analog relaxation neural net in the high-gain limit. The inverse of the gain parameter in an analog neuron's transfer function serves as a small parameter, in terms of which the net dynamics separate into two time scales. This separation yields economies in the numerical treatment of the associated differential equations; these economies constitute the acceleration in question. Illustrative computations are presented. © 1993.
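The timescale separation described in the abstract can be illustrated with a sketch. This is not the paper's method: the model equations, the Hopfield-style relaxation form `du/dt = -u + W g(u) + b` with sigmoid `g(u) = tanh(u/eps)`, and all parameter values below are assumptions for illustration. With the small parameter `eps` (the inverse gain), the sigmoid saturates after a short initial transient of width O(`eps`), so the outer (slow) dynamics are well approximated by replacing `g(u)` with `sign(u)`. The reduced system is no longer stiff and tolerates a far larger step size, which is the kind of economy the abstract refers to; a full matched inner/outer treatment would also resolve the boundary layer itself.

```python
import numpy as np

# Assumed Hopfield-style relaxation dynamics (not taken from the paper):
#   du/dt = -u + W @ g(u) + b,  with  g(u) = tanh(u / eps),
# where eps = 1/gain is the small parameter.

def simulate_full(W, b, u0, eps, dt, steps):
    """Explicit Euler on the stiff full system; needs a small dt
    because the Jacobian of tanh(u/eps) scales like 1/eps."""
    u = u0.copy()
    for _ in range(steps):
        u += dt * (-u + W @ np.tanh(u / eps) + b)
    return u

def simulate_outer(W, b, u0, dt, steps):
    """Outer (reduced) dynamics in the high-gain limit: g(u) -> sign(u).
    Valid after the O(eps) boundary layer; permits much larger steps."""
    u = u0.copy()
    for _ in range(steps):
        u += dt * (-u + W @ np.sign(u) + b)
    return u

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
W = 0.5 * (A + A.T)            # symmetric weights (relaxation setting)
np.fill_diagonal(W, 0.0)
b = 0.1 * rng.standard_normal(n)
u0 = rng.standard_normal(n)

eps = 1e-3
# Same time horizon (t = 20), but the reduced system uses 100x fewer steps.
u_full = simulate_full(W, b, u0, eps, dt=1e-4, steps=200_000)
u_outer = simulate_outer(W, b, u0, dt=1e-2, steps=2_000)

print("full :", u_full)
print("outer:", u_outer)
```

Both integrations settle into saturated states whose sign patterns agree away from the boundary layer; the step-count ratio, not the trajectories themselves, is the point of the sketch.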