Publication
STOC 2008
Conference paper

Stateless distributed gradient descent for positive linear programs


Abstract

We develop a framework of distributed and stateless solutions for packing and covering linear programs, which are solved by multiple agents operating in a cooperative but uncoordinated manner. Our model has a separate "agent" controlling each variable, and an agent is allowed to read off the current values of only those constraints in which it has non-zero coefficients. This is a natural model for many distributed applications like flow control, maximum bipartite matching, and dominating sets. The most appealing feature of our algorithms is their simplicity and polylogarithmic convergence. For the packing LP max{c · x | Ax ≤ b, x ≥ 0}, the algorithm associates a dual variable yi = exp[1/ε (Aix/bi − 1)] with each constraint i, and each agent j iteratively increases (resp. decreases) xj multiplicatively if Aj⊤y is too small (resp. large) as compared to cj. Our algorithm, starting from a feasible solution, always maintains feasibility and computes a (1 + ε) approximation in poly(ln(mnAmax/ε)) rounds. Here m and n are the numbers of rows and columns of A, and Amax, also known as the "width" of the LP, is the ratio of the maximum and minimum non-zero entries Aij/(bicj). A similar algorithm works for the covering LP min{b · y | A⊤y ≥ c, y ≥ 0} as well. While exponential dual variables have been used in several packing/covering LP algorithms before [25, 9, 13, 12, 26, 16], this is the first algorithm which is both stateless and has polylogarithmic convergence. Our algorithms can be thought of as applying distributed gradient descent/ascent on a carefully chosen potential. Our analysis differs from those of previous multiplicative-update based algorithms and argues that while the current solution is far away from optimality, the potential function decreases/increases by a significant factor. © Copyright 2008 ACM.
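
A rough sketch in Python may help make the update dynamics concrete. It only illustrates the exponential-dual / multiplicative-update idea described in the abstract; the function name, step sizes, round count, and starting point are assumptions for illustration, and this toy loop does not reproduce the paper's feasibility or convergence guarantees.

```python
import numpy as np

def packing_lp_sketch(A, b, c, eps=0.1, rounds=2000):
    """Illustrative (not the authors' exact) stateless multiplicative-update
    loop for the packing LP max{c.x | Ax <= b, x >= 0}."""
    m, n = A.shape
    # Start from a small point (feasible for the toy instance below).
    x = np.full(n, 1e-6)
    for _ in range(rounds):
        # Each constraint i exposes an exponential dual variable
        # y_i = exp[(A_i x / b_i - 1) / eps]; an agent only needs the
        # duals of constraints in which its variable appears.
        y = np.exp((A @ x / b - 1.0) / eps)
        # Agent j compares A_j^T y with c_j and nudges x_j multiplicatively.
        load = A.T @ y
        x = np.where(load < (1 - eps) * c, x * (1 + eps), x)
        x = np.where(load > (1 + eps) * c, x / (1 + eps), x)
    return x

if __name__ == "__main__":
    # Toy usage: two agents (variables), two packing constraints.
    A = np.array([[1.0, 2.0], [2.0, 1.0]])
    b = np.array([3.0, 3.0])
    c = np.array([1.0, 1.0])
    print(packing_lp_sketch(A, b, c))
```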
