On-line variance minimization in O(n²) per trial?
Elad Hazan, Satyen Kale
COLT 2010
Satyen Kale of IBM T.J. Watson Research Center shares his views on the paper entitled 'Online Optimization with Gradual Variations'. The paper is the result of merging two submissions, both of which explore whether it is possible to obtain regret bounds in various online learning settings that depend on some notion of variation in the costs rather than on the number of rounds. The two papers give similar algorithms for this problem and obtain very similar results, despite their analyses being different. One group of researchers gives a unified framework for obtaining such regret bounds in three specific cases of online convex optimization (OCO): online linear optimization, online learning with experts, and online exp-concave optimization. The other group gives two algorithms that obtain such regret bounds for general OCO.
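The core idea behind variation-dependent regret bounds can be illustrated with a small sketch. The snippet below is a hypothetical simplification in the spirit of the gradual-variation algorithms discussed above, not the authors' actual method: an optimistic gradient-style learner on the unit ball that uses the previously observed linear cost vector as a "hint" for the next one, so that when consecutive costs vary little, the played point is already near-optimal and regret stays small.

```python
import numpy as np

def project(v):
    """Euclidean projection onto the unit ball (the assumed decision set)."""
    n = np.linalg.norm(v)
    return v if n <= 1.0 else v / n

def optimistic_gd(costs, eta):
    """Run an optimistic gradient-style update over a sequence of linear
    cost vectors and return the cumulative loss incurred.

    `costs` is a (T, d) array; each row c_t defines the loss <c_t, y_t>.
    The previous cost vector serves as the hint for the current round,
    mimicking how gradual-variation algorithms exploit slowly changing costs.
    """
    d = costs.shape[1]
    x = np.zeros(d)        # secondary iterate updated with true costs
    prev = np.zeros(d)     # hint: the last observed cost vector
    total_loss = 0.0
    for c in costs:
        # Play a point shifted by the hint, then observe the true cost.
        y = project(x - eta * prev)
        total_loss += float(c @ y)
        # Standard gradient step with the revealed cost vector.
        x = project(x - eta * c)
        prev = c
    return total_loss
```

With a constant cost sequence the variation is zero, and after a single round the learner plays the best fixed point on the ball, so its regret stays bounded by a constant independent of the horizon.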