MobiHoc 2023
Conference paper
PRECISION: Decentralized Constrained Min-Max Learning with Low Communication and Sample Complexities
Abstract
Recently, min-max optimization problems have received increasing attention due to their wide range of applications in machine learning (ML). However, most existing min-max solution techniques are either single-machine or distributed algorithms coordinated by a central server. In this paper, we focus on decentralized min-max optimization for learning with domain constraints, where multiple agents collectively solve a nonconvex-strongly-concave min-max saddle point problem without coordination from any server. Decentralized min-max optimization problems with domain constraints underpin many important ML applications, including multi-agent ML fairness assurance and policy evaluation in multi-agent reinforcement learning. We propose an algorithm called PRECISION (proximal gradient-tracking and stochastic recursive variance reduction) that enjoys a convergence rate of O(1/T), where T is the maximum number of iterations. To further reduce sample complexity, we propose PRECISION+ with an adaptive batch size technique. We show that the fast O(1/T) convergence of PRECISION and PRECISION+ to an ϵ-stationary point implies O(ϵ⁻²) communication complexity and O(√(mn)ϵ⁻²) sample complexity, where m is the number of agents and n is the size of the dataset at each agent. To our knowledge, this is the first work that achieves O(ϵ⁻²) in both sample and communication complexities in decentralized min-max learning with domain constraints. Our experiments also corroborate the theoretical results.
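To make the three ingredients named in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' released code or exact algorithm): SARAH-style stochastic recursive variance reduction with periodic full-gradient restarts, gradient tracking over a doubly stochastic mixing matrix, and proximal/projection steps that enforce the domain constraints. It runs on a toy bilinear-plus-regularizer saddle problem rather than a nonconvex-strongly-concave one, and every constant, function name, and update ordering is an assumption made for illustration only.

```python
# Hypothetical sketch of a decentralized proximal gradient-tracking update with
# SARAH-style recursive variance reduction. Not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 8, 64, 5                      # agents, samples per agent, dimension (assumed)
alpha, beta, q, b = 0.05, 0.05, 16, 8   # step sizes, full-grad period, minibatch (assumed)
mu, lam, radius = 0.1, 1.0, 1.0         # regularizers and y-ball radius (assumed)

# Doubly stochastic mixing matrix for a ring graph (each agent talks to 2 neighbors).
W = np.zeros((m, m))
for i in range(m):
    W[i, i], W[i, (i - 1) % m], W[i, (i + 1) % m] = 0.5, 0.25, 0.25

A = rng.normal(size=(m, n, d))          # agent i holds n local samples A[i]

def local_grads(i, idx, xi, yi):
    """Minibatch gradients of f_i(x,y) = mean_j (a_ij^T x)(a_ij^T y) + mu/2||x||^2 - lam/2||y||^2."""
    Ai = A[i, idx]
    gx = Ai.T @ (Ai @ yi) / len(idx) + mu * xi
    gy = Ai.T @ (Ai @ xi) / len(idx) - lam * yi
    return gx, gy

x = 0.1 * rng.normal(size=(m, d))       # local copies of the min variable
y = np.zeros((m, d))                    # local copies of the max variable
vx, vy = np.zeros((m, d)), np.zeros((m, d))         # SARAH gradient estimators
gx_tr, gy_tr = np.zeros((m, d)), np.zeros((m, d))   # gradient trackers

for t in range(200):
    new_vx, new_vy = np.empty_like(vx), np.empty_like(vy)
    for i in range(m):
        if t % q == 0:
            # Periodic full local gradient restarts the recursive estimator.
            new_vx[i], new_vy[i] = local_grads(i, np.arange(n), x[i], y[i])
        else:
            # SARAH recursion: v_t = grad(z_t; xi) - grad(z_{t-1}; xi) + v_{t-1}.
            idx = rng.choice(n, size=b, replace=False)
            gxn, gyn = local_grads(i, idx, x[i], y[i])
            gxo, gyo = local_grads(i, idx, x_prev[i], y_prev[i])
            new_vx[i] = gxn - gxo + vx[i]
            new_vy[i] = gyn - gyo + vy[i]

    # Gradient tracking: mix neighbors' trackers, then add the local innovation.
    gx_tr = W @ gx_tr + new_vx - vx
    gy_tr = W @ gy_tr + new_vy - vy
    x_prev, y_prev, vx, vy = x.copy(), y.copy(), new_vx, new_vy

    # Consensus step, then proximal descent in x (box constraint) and
    # projected ascent in y (Euclidean ball constraint).
    x = np.clip(W @ x - alpha * gx_tr, -1.0, 1.0)
    y_half = W @ y + beta * gy_tr
    norms = np.maximum(np.linalg.norm(y_half, axis=1, keepdims=True), 1e-12)
    y = y_half * np.minimum(1.0, radius / norms)

    if t % 50 == 0:
        print(f"t={t:3d}  tracker norm={np.linalg.norm(gx_tr):.4f}  "
              f"consensus gap={np.linalg.norm(x - x.mean(0)):.4f}")
```

The adaptive batch size used by PRECISION+ is not shown; in this sketch it would replace the fixed minibatch size b with a schedule that grows over iterations, trading fewer full-gradient restarts for larger minibatches.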