Publication
WWW 2018
Conference paper

AdaError: An adaptive learning rate method for matrix approximation-based collaborative filtering

Abstract

Gradient-based learning methods such as stochastic gradient descent are widely used in matrix approximation-based collaborative filtering algorithms to train recommendation models from observed user-item ratings. One major difficulty in existing gradient-based learning methods is choosing a proper learning rate: if it is too large, the model converges to inaccurate solutions, and if it is too small, convergence is very slow. This paper proposes AdaError, an adaptive learning rate method for matrix approximation-based collaborative filtering. AdaError eliminates the need to manually tune learning rates by adapting them to the noise level of individual user-item ratings, using smaller learning rates for noisy ratings to reduce their impact on the learned models. Our theoretical and empirical analysis shows that AdaError can improve the generalization performance of the learned models. Experimental studies on the MovieLens and Netflix datasets demonstrate that AdaError outperforms state-of-the-art adaptive learning rate methods in matrix approximation-based collaborative filtering. Furthermore, applying AdaError to the standard matrix approximation method yields statistically significant improvements over state-of-the-art collaborative filtering methods in both rating prediction accuracy and top-N recommendation accuracy.
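
To make the idea concrete, the sketch below shows one plausible reading of the mechanism described above: plain SGD matrix factorization in which each rating's step size is shrunk according to a per-rating noise estimate. The abstract does not give AdaError's actual update rule, so the use of the current prediction error magnitude as the noise proxy, and the function adaerror_style_sgd itself, are hypothetical illustrations rather than the paper's method.

import numpy as np

def adaerror_style_sgd(ratings, n_users, n_items, k=10, base_lr=0.01,
                       reg=0.05, n_epochs=20, seed=0):
    # SGD matrix factorization with a per-rating adaptive step size.
    # Hypothetical noise proxy: a large current prediction error is
    # treated as a sign of a noisy rating, so its step size is shrunk.
    # This proxy is an assumption, not the paper's exact rule.
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
    V = rng.normal(scale=0.1, size=(n_items, k))   # item factors
    for _ in range(n_epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]                  # prediction residual
            lr = base_lr / (1.0 + abs(err))        # noisier -> smaller step
            du = lr * (err * V[i] - reg * U[u])    # regularized gradient steps
            dv = lr * (err * U[u] - reg * V[i])
            U[u] += du
            V[i] += dv
    return U, V

# Toy usage on (user_id, item_id, rating) triples.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0)]
U, V = adaerror_style_sgd(ratings, n_users=2, n_items=3)
print(U @ V.T)   # reconstructed rating matrix

Note that with a fixed base_lr this reduces to ordinary SGD when all errors are near zero; the adaptive denominator only dampens updates driven by large, presumably noisy, residuals.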

Date

10 Apr 2018
