Publication: COAP
Stochastic inexact augmented Lagrangian method for nonconvex expectation constrained optimization
Abstract
Many real-world problems not only have complicated nonconvex functional constraints but also involve a large number of data points. This motivates the design of efficient stochastic methods for finite-sum or expectation-constrained problems. In this paper, we design and analyze stochastic inexact augmented Lagrangian methods (Stoc-iALM) to solve problems involving a nonconvex composite (i.e., smooth + nonsmooth) objective and nonconvex smooth functional constraints. We adopt the standard iALM framework and design a subroutine by using the momentum-based variance-reduced proximal stochastic gradient method (PStorm) together with a postprocessing step. Under certain regularity conditions (also assumed in existing works), to reach an ε-KKT point in expectation, we establish an oracle complexity result of O(ε⁻⁵), which improves on the best-known O(ε⁻⁶) result. Numerical experiments on the fairness-constrained problem and the Neyman–Pearson classification problem with real data demonstrate that our proposed method outperforms an existing method with the previously best-known complexity result.
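To make the overall structure of such methods concrete, the sketch below shows a generic inexact augmented Lagrangian loop with a plain mini-batch stochastic gradient inner solver, written in Python. It is an illustration only, not the paper's algorithm: the toy problem, the inner solver (standing in for PStorm, with the nonsmooth term omitted), the penalty and multiplier updates, and all parameter choices are assumptions made for exposition.

```python
# Minimal sketch of a stochastic inexact augmented Lagrangian (iALM) loop for
#     min_x  (1/N) * sum_i f_i(x)   s.t.  c(x) = 0,
# under assumed problem data and parameters; the paper's PStorm subroutine,
# nonsmooth term, and postprocessing step are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
N, d = 200, 5
A = rng.standard_normal((N, d))   # data defining the finite-sum objective
b = rng.standard_normal(N)

def f_grad(x, batch):
    """Mini-batch gradient of (1/N) * sum_i 0.5 * (a_i^T x - b_i)^2."""
    r = A[batch] @ x - b[batch]
    return A[batch].T @ r / len(batch)

def c(x):
    """Smooth scalar constraint used for illustration: c(x) = ||x||^2 - 1."""
    return x @ x - 1.0

def c_grad(x):
    return 2.0 * x

def aug_lagrangian_grad(x, lam, beta, batch):
    """Stochastic gradient of L_beta(x, lam) = f(x) + lam*c(x) + 0.5*beta*c(x)^2."""
    return f_grad(x, batch) + (lam + beta * c(x)) * c_grad(x)

x = np.zeros(d)
lam, beta = 0.0, 1.0
for outer in range(20):
    # Inner loop: approximately minimize the augmented Lagrangian in x.
    step = 1e-2 / beta
    for _ in range(500):
        batch = rng.choice(N, size=16, replace=False)
        x -= step * aug_lagrangian_grad(x, lam, beta, batch)
    lam += beta * c(x)   # multiplier (dual) update
    beta *= 2.0          # increase the penalty parameter
    if abs(c(x)) < 1e-4:
        break

print("constraint violation:", abs(c(x)))
```

The key design choice this sketch highlights is the two-level structure analyzed in the paper: an outer loop that updates multipliers and penalty parameters exactly, and an inner subproblem that is only solved inexactly with stochastic gradients, which is where the choice of variance-reduced subroutine drives the overall oracle complexity.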