Publication
ICLR 2020
Conference paper

Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets

Abstract

Adaptive gradient algorithms perform gradient-based updates using the history of gradients and are ubiquitous in training deep neural networks. While the theory of adaptive gradient methods is well understood for minimization problems, the underlying factors driving their empirical success in min-max problems such as GANs remain unclear. In this paper, we aim to bridge this gap from both theoretical and empirical perspectives. First, we analyze a variant of Optimistic Stochastic Gradient (OSG) proposed in (Daskalakis et al., 2017) for solving a class of non-convex non-concave min-max problems and establish an O(ε^{-4}) complexity for finding an ε-first-order stationary point, in which the algorithm only requires invoking one stochastic first-order oracle per iteration while enjoying the state-of-the-art iteration complexity achieved by the stochastic extragradient method of (Iusem et al., 2017). Then we propose an adaptive variant of OSG named Optimistic Adagrad (OAdagrad) and reveal an improved adaptive complexity of O(ε^{-2/(1−α)}), where α characterizes the growth rate of the cumulative stochastic gradient and 0 ≤ α ≤ 1/2. To the best of our knowledge, this is the first work establishing adaptive complexity in non-convex non-concave min-max optimization. Empirically, our experiments show that adaptive gradient algorithms indeed outperform their non-adaptive counterparts in GAN training. Moreover, this observation can be explained by the slow growth rate of the cumulative stochastic gradient, as observed empirically.
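For intuition, the sketch below illustrates the kind of optimistic update OSG builds on, alongside an Adagrad-scaled variant in the spirit of OAdagrad, on a toy stochastic bilinear min-max problem. This is not the paper's exact pseudocode: the toy objective f(x, y) = x·y, the step size, the noise level, and the placement of the per-coordinate scaling are illustrative assumptions; the precise algorithms and their complexity analysis are given in the paper.

```python
# Illustrative sketch (assumed setup, not the paper's exact algorithms):
# an optimistic stochastic gradient step and an Adagrad-scaled variant,
# run on the toy bilinear min-max problem f(x, y) = x * y.
import numpy as np

def toy_grads(x, y, rng, noise=0.1):
    """Noisy gradients of f(x, y) = x * y: descent direction for the
    min player x, ascent (negated) direction for the max player y."""
    gx = y + noise * rng.standard_normal()   # d f / d x
    gy = -x + noise * rng.standard_normal()  # -(d f / d y), so subtracting it ascends on y
    return np.array([gx, gy])

def run(adaptive=False, steps=2000, eta=0.1, eps=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    z = np.array([1.0, 1.0])   # current iterate [x, y]
    g_prev = np.zeros(2)       # gradient from the previous iteration
    cum_sq = np.zeros(2)       # cumulative squared gradients (Adagrad statistic)
    for _ in range(steps):
        g = toy_grads(z[0], z[1], rng)
        # Optimistic step: extrapolate with the previous gradient,
        # moving along 2*g_t - g_{t-1} instead of g_t alone.
        direction = 2.0 * g - g_prev
        if adaptive:
            # Adagrad-style per-coordinate scaling (OAdagrad-like variant).
            cum_sq += g ** 2
            direction = direction / (np.sqrt(cum_sq) + eps)
        z = z - eta * direction
        g_prev = g
    return z

print("optimistic SG      :", run(adaptive=False))
print("adaptive optimistic:", run(adaptive=True))
```

The key difference from plain stochastic gradient descent-ascent is the extrapolation term 2·g_t − g_{t−1}, which uses only one fresh oracle call per iteration; the adaptive variant additionally rescales each coordinate by the inverse square root of the cumulative squared gradients, which is why its complexity depends on the growth rate α of that cumulative quantity.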

Date

26 Apr 2020
