Publication
KDD 2012
Conference paper

Large-scale distributed non-negative sparse coding and sparse dictionary learning

Abstract

We consider the problem of building compact, unsupervised representations of large, high-dimensional, non-negative data using sparse coding and dictionary learning schemes, with an emphasis on executing the algorithm in a Map-Reduce environment. The proposed algorithms may be seen as parallel optimization procedures for constructing sparse non-negative factorizations of large, sparse matrices. Our approach alternates between a parallel sparse coding phase, implemented using greedy or convex (ℓ1-regularized) risk minimization procedures, and a sequential dictionary learning phase in which we solve a set of ℓ0 optimization problems exactly. These two-fold sparsity constraints lead to better statistical performance on text analysis tasks and at the same time make it possible to implement each iteration as a single Map-Reduce job. We detail our implementation and the optimizations that make it possible to factor matrices with more than 100 million rows and billions of non-zero entries in just a few hours on a small commodity cluster.
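The alternating scheme the abstract describes can be illustrated with a small single-machine sketch. The code below is an approximation, not the paper's implementation: it uses a projected-gradient solver as a stand-in for the greedy or convex ℓ1-regularized coding procedures, and a simple projected least-squares dictionary refit in place of the paper's exact ℓ0 dictionary learning step and its Map-Reduce parallelization. All function names (sparse_code, update_dictionary, nn_sparse_factorize) and parameter choices are illustrative.

```python
import numpy as np

def sparse_code(X, D, lam=0.1, n_steps=100):
    """Non-negative l1-regularized coding:
    min_{A >= 0} (1/2) * ||X - D A||_F^2 + lam * sum(A),
    solved by projected gradient descent (an illustrative stand-in for
    the paper's greedy or convex coding procedures)."""
    A = np.zeros((D.shape[1], X.shape[1]))
    step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant of the smooth part
    for _ in range(n_steps):
        grad = D.T @ (D @ A - X) + lam        # smooth gradient plus l1 term (A >= 0, so ||A||_1 = sum(A))
        A = np.maximum(A - step * grad, 0.0)  # gradient step, then project onto the non-negative orthant
    return A

def update_dictionary(X, A, eps=1e-8):
    """Projected least-squares dictionary refit; an illustrative substitute
    for the paper's exact l0 dictionary learning step."""
    D = X @ A.T @ np.linalg.pinv(A @ A.T + eps * np.eye(A.shape[0]))
    D = np.maximum(D, 0.0)  # keep the dictionary non-negative
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)  # unit-norm columns
    return D

def nn_sparse_factorize(X, k=32, n_iters=10, lam=0.1, seed=0):
    """Alternate coding and dictionary updates to factor X ~ D @ A with D, A >= 0."""
    rng = np.random.default_rng(seed)
    D = np.maximum(rng.standard_normal((X.shape[0], k)), 0.0)
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    for _ in range(n_iters):
        A = sparse_code(X, D, lam)    # embarrassingly parallel across columns of X
        D = update_dictionary(X, A)   # sequential refit given the codes
    return D, A
```

The split mirrors the structure the abstract describes: the coding step decomposes independently over the columns (documents) of X and so parallelizes naturally, while the dictionary refit aggregates over all codes, which is what lets each full iteration run as a single Map-Reduce job.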
