ICNN 1997
Conference paper
A note on the effective number of parameters in nonlinear learning systems
Abstract
Moody's notion of the effective number of parameters in a nonlinear learning system has been used to study the generalization ability of feedforward neural networks. It is more meaningful than the raw count of free parameters because it explains explicitly how the generalization error relates to the expected training set error. In this paper, we extend Moody's model to a more general noise model. We show that adding noise to both the sampling points of the test data and the observations increases the deviation of the expected test set mean-squared error (MSE) from the expected training set MSE, and also increases the effective number of parameters. Our extension makes less restrictive assumptions about the data-generation process than Moody's original formulation. Monte Carlo experiments have been conducted to verify our extension and to demonstrate the role of weight-decay regularization in improving the generalization ability of feedforward networks. © 1997 IEEE.
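For context, Moody's relation (which this paper generalizes) ties the expected test error to the expected training error through the effective number of parameters. With noise variance \(\sigma^2\), regularization strength \(\lambda\), and \(n\) training examples, it takes the form

\[
\langle \xi_{\mathrm{test}} \rangle \;\approx\; \langle \xi_{\mathrm{train}} \rangle \;+\; 2\,\sigma^{2}\,\frac{p_{\mathrm{eff}}(\lambda)}{n},
\]

where \(p_{\mathrm{eff}}(\lambda)\) shrinks as the weight-decay strength \(\lambda\) grows. The sketch below is a minimal illustration of this relation in the fixed-design ridge-regression special case, where \(p_{\mathrm{eff}} = \operatorname{tr}\, X(X^\top X + \lambda I)^{-1}X^\top\); it is not the paper's experimental setup, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_params(X, lam):
    """p_eff = trace of the ridge hat matrix X (X^T X + lam I)^{-1} X^T."""
    n, p = X.shape
    G = X.T @ X + lam * np.eye(p)
    return np.trace(X @ np.linalg.solve(G, X.T))

def mc_gap(n=50, p=20, lam=1.0, sigma=0.5, trials=2000):
    """Monte Carlo estimate of E[test MSE] - E[train MSE] for ridge
    regression, compared with Moody's prediction 2*sigma^2*p_eff/n."""
    X = rng.standard_normal((n, p))          # fixed design
    w_true = rng.standard_normal(p)
    gaps = []
    for _ in range(trials):
        y = X @ w_true + sigma * rng.standard_normal(n)      # noisy training targets
        y_new = X @ w_true + sigma * rng.standard_normal(n)  # fresh noise, same inputs
        w_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
        gaps.append(np.mean((y_new - X @ w_hat) ** 2)
                    - np.mean((y - X @ w_hat) ** 2))
    return float(np.mean(gaps)), 2 * sigma**2 * effective_params(X, lam) / n

gap_mc, gap_moody = mc_gap()
print(f"Monte Carlo gap: {gap_mc:.4f}  Moody prediction: {gap_moody:.4f}")
```

Increasing `lam` shrinks `p_eff` and hence the predicted gap between test and training MSE, which is the sense in which weight decay improves generalization in this simple setting.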