Shuffling Gradient-Based Methods

Abstract

We combine two advanced ideas widely used in optimization for machine learning, the shuffling strategy and the momentum technique, to develop a novel momentum-based method for finite-sum minimization problems. We establish that our algorithm achieves a state-of-the-art convergence rate for any shuffling strategy under standard assumptions. In particular, if a random shuffling strategy is used, the convergence rate can be further improved by a fraction of the data size. When the shuffling strategy is fixed, we develop another new algorithm that is similar to existing momentum methods, and we prove the same convergence rate for this algorithm under the L-smoothness and bounded-gradient assumptions. We demonstrate our algorithms via numerical simulations on standard datasets and compare them with existing shuffling methods.
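
The abstract describes the method only at a high level; the sketch below illustrates the general template of a shuffling gradient method with a momentum buffer for finite-sum minimization. The function name shuffle_momentum_sgd, the 1/n step-size scaling, the momentum update, and the least-squares example are illustrative assumptions, not the exact update rule analyzed in the paper.

```python
import numpy as np

def shuffle_momentum_sgd(grad_i, w0, n, epochs=10, lr=0.1, beta=0.5, rng=None):
    """Sketch of a shuffling gradient method with momentum for
    finite-sum minimization:  min_w (1/n) * sum_i f_i(w).

    grad_i(w, i) returns the gradient of the i-th component f_i at w.
    The shuffling strategy, step size, and momentum update are generic
    illustrations only.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.array(w0, dtype=float)
    m = np.zeros_like(w)
    for _ in range(epochs):
        perm = rng.permutation(n)            # random reshuffling each epoch;
        for i in perm:                       # a fixed permutation also fits
            g = grad_i(w, i)                 # component gradient of f_i at w
            m = beta * m + (1.0 - beta) * g  # momentum buffer
            w -= (lr / n) * m                # inner step scaled by 1/n
    return w

# Usage on a synthetic least-squares problem: f_i(w) = 0.5 * (a_i^T w - b_i)^2
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]
w_hat = shuffle_momentum_sgd(grad_i, np.zeros(5), n=100, epochs=50, lr=0.5)
print(np.linalg.norm(A @ w_hat - b))  # residual norm after training
```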

Date

24 Oct 2021

Publication

INFORMS 2021 (Talk)
