Publication
ICASSP 2024
Conference paper
Variance Reduction Can Improve Trade-Off in Multi-Objective Learning
Abstract
Many machine learning problems today involve multiple objective functions and are often tackled within the multi-objective learning (MOL) framework. Although MOL algorithms have produced many encouraging results, a recent theoretical study revealed that gradient-based MOL methods (e.g., MGDA, CAGrad) all exhibit an inherent trade-off between optimization convergence speed and conflict-avoidance ability. To address this, we develop an improved stochastic variance-reduced multi-objective gradient correction method for MOL, achieving the O() sample complexity. Moreover, in the non-convex setting, our proposed method simultaneously improves the theoretical guarantees for both conflict avoidance and convergence rate over prior stochastic gradient-based MOL methods. We further validate the effectiveness of the proposed method empirically on popular multi-task learning (MTL) benchmarks.
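To make the setting concrete, the sketch below illustrates a generic stochastic MGDA-style update with momentum-based gradient tracking, a common variance-reduction device in this line of work. This is a hypothetical toy example on two quadratic objectives, not the paper's actual algorithm; the function names, step sizes, and the tracking rate `beta` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy quadratic objectives f_i(x) = 0.5 * ||x - c_i||^2,
# whose Pareto set is the segment between the two centers.
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def stochastic_grad(x, i, noise=0.1):
    """Noisy gradient of objective i at x (noise level is illustrative)."""
    return (x - centers[i]) + noise * rng.standard_normal(x.shape)

def min_norm_two(g1, g2):
    """Closed-form MGDA subproblem for two gradients:
    argmin over lam in [0, 1] of ||lam*g1 + (1-lam)*g2||^2."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        return 0.5
    return float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))

x = np.array([2.0, 2.0])
y = [stochastic_grad(x, i) for i in range(2)]  # tracked task gradients
alpha, beta = 0.1, 0.5  # step size and tracking rate (assumed values)

for t in range(200):
    # Variance reduction: exponential tracking of each task gradient
    # instead of using the raw stochastic gradient directly.
    for i in range(2):
        y[i] = (1 - beta) * y[i] + beta * stochastic_grad(x, i)
    lam = min_norm_two(y[0], y[1])
    d = lam * y[0] + (1 - lam) * y[1]  # common descent direction
    x = x - alpha * d
```

After a few hundred steps the iterate drifts toward the Pareto set (here, the segment between the two centers), illustrating how the tracked gradients damp stochastic noise before the min-norm conflict-avoidance step is applied.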