One way to speed up convergence in a large optimization problem is to introduce a smaller, approximate version of the problem at a coarser scale and to alternate between relaxation steps for the fine-scale and coarse-scale problems. We exhibit such an optimization method for neural networks governed by quite general objective functions. At the coarse scale there is a smaller approximating neural net which, like the original net, is nonlinear and has a nonquadratic objective function. The transitions and information flow from fine to coarse scale and back do not disrupt the optimization, and the user need only specify a partition of the original fine-scale variables. Thus the method can be applied easily to many problems and networks. We show positive experimental results including cost comparisons. © 1991 IEEE
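The abstract describes alternating relaxation between the original fine-scale problem and a smaller coarse-scale approximation built from a user-supplied partition of the fine variables. The sketch below is only a minimal illustration of that general idea, not the paper's method: the quartic-plus-coupling objective, the finite-difference gradient, the step sizes, and the choice of a coarse problem of the form F(x + Pc) with one correction variable per partition block are all assumptions made for the example.

```python
# Minimal sketch of alternating fine-scale / coarse-scale relaxation on a
# partition of the fine variables. Everything here (objective, step sizes,
# coarse correction F(x + P c)) is illustrative, not the 1991 paper's setup.

import numpy as np

def objective(x):
    # Example nonquadratic objective with a nearest-neighbour coupling term
    # (assumed purely for illustration).
    return np.sum((x - 1.0) ** 4) + 0.5 * np.sum((x[:-1] - x[1:]) ** 2)

def gradient(z, f, eps=1e-6):
    # Central finite differences; an analytic gradient would be used in practice.
    g = np.zeros_like(z)
    for i in range(z.size):
        e = np.zeros_like(z)
        e[i] = eps
        g[i] = (f(z + e) - f(z - e)) / (2 * eps)
    return g

def relax(z, f, steps, lr):
    # Plain gradient-descent "relaxation sweeps" on an objective f.
    for _ in range(steps):
        z = z - lr * gradient(z, f)
    return z

def multiscale_optimize(x, f, partition, cycles=10):
    # partition: list of index arrays covering the fine variables (the only
    # thing the user must specify, as in the abstract). P maps one coarse
    # correction variable per block to all of that block's fine variables.
    n, k = x.size, len(partition)
    P = np.zeros((n, k))
    for j, block in enumerate(partition):
        P[block, j] = 1.0
    for _ in range(cycles):
        x = relax(x, f, steps=5, lr=1e-2)                    # fine-scale relaxation
        coarse_f = lambda c: f(x + P @ c)                    # smaller coarse problem
        c = relax(np.zeros(k), coarse_f, steps=5, lr=1e-2)   # coarse-scale relaxation
        x = x + P @ c                                        # prolongate correction back
    return x

if __name__ == "__main__":
    x0 = np.random.randn(16)
    blocks = [np.arange(i, i + 4) for i in range(0, 16, 4)]  # 4 coarse variables
    x_star = multiscale_optimize(x0, objective, blocks)
    print("final objective:", objective(x_star))
```

In this toy version the coarse step optimizes a 4-variable correction instead of the full 16-variable problem; the fine and coarse sweeps hand their results to each other without restarting the optimization, which mirrors the alternation the abstract describes.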