Conference paper

Variance Reduction Can Improve Trade-Off in Multi-Objective Learning



Many machine learning problems today involve multiple objective functions, which are often tackled within the multi-objective learning (MOL) framework. Although MOL algorithms have achieved many encouraging results, a recent theoretical study revealed that existing stochastic gradient-based MOL methods (e.g., MGDA, CAGrad) all face an inherent trade-off between convergence speed and conflict-avoidance ability. To this end, we develop an improved stochastic variance-reduced multi-objective gradient correction method for MOL that achieves an O($\epsilon^{-1.5}$) sample complexity. Moreover, compared to prior stochastic gradient-based MOL methods in the non-convex setting, our proposed method simultaneously improves the theoretical guarantees on both conflict avoidance and convergence rate. We further validate the effectiveness of the proposed method empirically on popular multi-task learning (MTL) benchmarks.
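To make the ideas in the abstract concrete, below is a minimal NumPy sketch, not the authors' algorithm, that combines two ingredients the abstract refers to: an MGDA-style minimum-norm common descent direction for two objectives, and a STORM-style recursive variance-reduced estimate of each objective's stochastic gradient. The quadratic objectives, noise level, and all constants are illustrative assumptions.

```python
import numpy as np

# Two hypothetical quadratic objectives f_i(x) = 0.5 x^T A_i x - b_i^T x
# with different minimizers, so their gradients can conflict.
A1, b1 = np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([1.0, 0.0])
A2, b2 = np.array([[1.0, 0.0], [0.0, 2.0]]), np.array([0.0, 1.0])

def loss(x):
    # Sum of the two objective values (for monitoring only).
    f1 = 0.5 * x @ A1 @ x - b1 @ x
    f2 = 0.5 * x @ A2 @ x - b2 @ x
    return f1 + f2

def stoch_grads(A, b, x_new, x_old, rng):
    """Stochastic gradients at two points sharing the same noise sample,
    as recursive (STORM-style) variance reduction requires."""
    noise = 0.05 * rng.standard_normal(x_new.shape)
    return A @ x_new - b + noise, A @ x_old - b + noise

def min_norm_direction(g1, g2):
    """MGDA-style common descent direction for two objectives:
    the minimum-norm point in the convex hull of {g1, g2}."""
    diff = g1 - g2
    denom = diff @ diff
    lam = 0.5 if denom < 1e-12 else float(np.clip((g2 @ (g2 - g1)) / denom, 0.0, 1.0))
    return lam * g1 + (1.0 - lam) * g2

def run(steps=300, lr=0.1, beta=0.3, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array([2.0, 2.0])
    # Initialize the tracked (variance-reduced) gradient estimates.
    y1 = A1 @ x - b1 + 0.05 * rng.standard_normal(2)
    y2 = A2 @ x - b2 + 0.05 * rng.standard_normal(2)
    for _ in range(steps):
        d = min_norm_direction(y1, y2)
        x_new = x - lr * d
        # Recursive correction: y_t = g(x_t) + (1 - beta) * (y_{t-1} - g(x_{t-1})).
        g1n, g1o = stoch_grads(A1, b1, x_new, x, rng)
        g2n, g2o = stoch_grads(A2, b2, x_new, x, rng)
        y1 = g1n + (1.0 - beta) * (y1 - g1o)
        y2 = g2n + (1.0 - beta) * (y2 - g2o)
        x = x_new
    return x
```

With these assumed constants, the iterates drift toward the Pareto set of the two quadratics; the `beta` knob controls how quickly stale gradient information is discarded and hence how noisy the combined direction is.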