Publication
NeurIPS 2020
Conference paper

Hybrid Variance-Reduced SGD Algorithms For Minimax Problems with Nonconvex-Linear Function


Abstract

We develop a novel, single-loop, variance-reduced algorithm to solve a class of stochastic nonconvex-convex minimax problems involving a nonconvex-linear objective function, which has various applications in different fields such as machine learning and robust optimization. This problem class poses several computational challenges due to the nonsmoothness, nonconvexity, nonlinearity, and non-separability of its objective functions. Our approach relies on a new combination of recent ideas, including smoothing and hybrid biased variance-reduced techniques. Our algorithm and its variants achieve an O(T^{-2/3}) convergence rate and the best-known oracle complexity under standard assumptions, where T is the iteration counter. They have several computational advantages over existing methods, such as being simple to implement and requiring less parameter tuning. They also work with either single-sample or mini-batch derivative estimators, and with constant or diminishing step-sizes. We demonstrate the benefits of our algorithms over existing methods through two numerical examples, including a nonsmooth and nonconvex-non-strongly concave minimax model.
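To make the "hybrid biased variance-reduced" idea mentioned above concrete, the sketch below illustrates a hybrid gradient estimator of the kind used in hybrid SGD methods: a SARAH-style recursive correction mixed with a plain stochastic gradient. This is only an illustration on a toy least-squares objective, not the paper's minimax algorithm; the mixing weight beta, the step size, and the problem data are all assumptions chosen for the example.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): a hybrid
# variance-reduced gradient estimator of the form
#   v_t = beta * (v_{t-1} + g(x_t; i_t) - g(x_{t-1}; i_t))
#         + (1 - beta) * g(x_t; j_t),
# where g(x; i) is the stochastic gradient on sample i, applied here to
#   f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2.
# All constants (beta, eta, T) and the data are assumptions for illustration.

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad(x, idx):
    """Stochastic gradient of 0.5 * (a_idx^T x - b_idx)^2."""
    return A[idx] * (A[idx] @ x - b[idx])

beta, eta, T = 0.9, 0.05, 500
x_prev = np.zeros(d)
v = grad(x_prev, rng.integers(n))   # initialize estimator from one sample
x = x_prev - eta * v

for t in range(T):
    i, j = rng.integers(n), rng.integers(n)
    # Hybrid estimator: SARAH-style difference term plus an unbiased SGD term.
    v = beta * (v + grad(x, i) - grad(x_prev, i)) + (1 - beta) * grad(x, j)
    x_prev, x = x, x - eta * v

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```

The single-loop structure above (one estimator update and one iterate update per iteration, no inner epochs) reflects the kind of simplicity the abstract highlights; the paper's actual algorithm additionally handles the minimax structure via smoothing.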

Date

06 Dec 2020

