Conference paper
Theoretical analysis of a class of randomized regularization methods
Abstract
The convergence behavior of traditional learning algorithms can be analyzed in the VC theoretical framework. Recently, many researchers have been interested in a class of randomized learning algorithms that includes the Gibbs algorithm from statistical mechanics. However, no successful theory concerning the generalization behavior of these randomized learning algorithms has been established previously. In order to fully understand the behavior of these randomized estimators, we compare them with regularization schemes for deterministic estimators. Furthermore, we present a theoretical analysis of such algorithms that leads to rigorous convergence bounds.
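The formal analysis belongs to the paper itself, but the distinction the abstract draws can be illustrated concretely. The following sketch (all names hypothetical, not taken from the paper) contrasts a deterministic regularized estimator, which returns the single minimizer of a penalized empirical risk, with a Gibbs-style randomized estimator, which samples a hypothesis with probability proportional to exp(−β · empirical risk), where the inverse temperature β plays a role analogous to the regularization strength:

```python
import math
import random

def empirical_risk(h, data):
    # Mean squared error of a constant-predictor hypothesis h on (x, y) pairs.
    return sum((h - y) ** 2 for _, y in data) / len(data)

def deterministic_regularized(hypotheses, data, lam):
    # Deterministic regularization scheme: return the single hypothesis
    # minimizing empirical risk plus an L2 penalty lam * h^2.
    return min(hypotheses, key=lambda h: empirical_risk(h, data) + lam * h ** 2)

def gibbs_estimator(hypotheses, data, beta, rng=random):
    # Randomized (Gibbs) scheme: sample a hypothesis with probability
    # proportional to exp(-beta * empirical_risk(h)).  Large beta
    # concentrates the distribution on low-risk hypotheses; small beta
    # spreads mass more uniformly, much as a strong regularizer would.
    weights = [math.exp(-beta * empirical_risk(h, data)) for h in hypotheses]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for h, w in zip(hypotheses, weights):
        acc += w
        if r <= acc:
            return h
    return hypotheses[-1]

# Toy usage: labels cluster around 1.0, finite hypothesis class {0, 0.5, 1}.
data = [(0, 1.0), (0, 0.9), (0, 1.1)]
hypotheses = [0.0, 0.5, 1.0]
h_det = deterministic_regularized(hypotheses, data, lam=0.01)
h_gibbs = gibbs_estimator(hypotheses, data, beta=50.0)
```

With a small penalty the deterministic scheme picks the risk minimizer `1.0` every time, while repeated calls to the Gibbs estimator yield a distribution over hypotheses, which is exactly why its generalization behavior calls for a different style of analysis than the classical VC bounds for deterministic rules.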