
Low-rank approximation and regression in input sparsity time

Abstract

We design a new distribution over m × n matrices S so that, for any fixed n × d matrix A of rank r, with probability at least 9/10, ∥SAx∥₂ = (1 ± ϵ)∥Ax∥₂ simultaneously for all x ∈ ℝᵈ. Here, m is bounded by a polynomial in rϵ⁻¹, and the parameter ϵ ∈ (0, 1]. Such a matrix S is called a subspace embedding. Furthermore, SA can be computed in O(nnz(A)) time, where nnz(A) is the number of nonzero entries of A. This improves over all previous subspace embeddings, for which computing SA required at least Ω(nd log d) time. We call these S sparse embedding matrices.

Using our sparse embedding matrices, we obtain the fastest known algorithms for overconstrained least-squares regression, low-rank approximation, approximating all leverage scores, and ℓₚ regression. More specifically, let b be an n × 1 vector, ϵ > 0 a small enough value, and integers κ, p ≥ 1. Our results include the following.

- Regression: The regression problem is to find a d × 1 vector x′ for which ∥Ax′ − b∥ₚ ≤ (1 + ϵ) minₓ ∥Ax − b∥ₚ. For the Euclidean case p = 2, we obtain an algorithm running in O(nnz(A)) + Õ(d³ϵ⁻²) time, and another running in O(nnz(A) log(1/ϵ)) + Õ(d³ log(1/ϵ)) time. (Here, Õ(f) = f · log^{O(1)}(f).) More generally, for p ∈ [1, ∞), we obtain an algorithm running in O(nnz(A) log n) + O(rϵ⁻¹)^C time, for a fixed constant C. (A sketch-and-solve code example for p = 2 appears after this abstract.)

- Low-rank approximation: We give an algorithm to obtain a rank-κ matrix Âκ such that ∥A − Âκ∥F ≤ (1 + ϵ)∥A − Aκ∥F, where Aκ is the best rank-κ approximation to A. (That is, Âκ is the output of principal components analysis, produced by a truncated singular value decomposition, useful for latent semantic indexing and many other statistical problems.) Our algorithm runs in O(nnz(A)) + Õ(nκ²ϵ⁻⁴ + κ³ϵ⁻⁵) time. (A simplified code sketch also follows below.)

- Leverage scores: We give an algorithm to estimate the leverage scores of A, up to a constant factor, in O(nnz(A) log n) + Õ(r³) time.
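The abstract does not spell out the construction of S, but in the CountSketch-style sparse embedding associated with this line of work, each column of S has a single ±1 entry in a uniformly random row, which is what makes SA computable in O(nnz(A)) time. Below is a minimal Python/NumPy sketch under that assumption; the function name and sizes are illustrative, not from the paper.

    import numpy as np

    def sparse_embedding(A, m, seed=None):
        # Apply an m x n sparse embedding matrix S to A without forming S.
        # Assumed construction (CountSketch-style): each column of S has a
        # single +/-1 entry in a uniformly random row, so computing S @ A
        # is a single scatter-add pass over the rows of A.
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        buckets = rng.integers(0, m, size=n)        # random target row for row i of A
        signs = rng.choice([-1.0, 1.0], size=n)     # random sign for row i of A
        SA = np.zeros((m, A.shape[1]))
        np.add.at(SA, buckets, signs[:, None] * A)  # unbuffered scatter-add
        return SA

For a sparse A, the same pass touches only the nonzero entries, which is the source of the O(nnz(A)) bound.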
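A sketch-and-solve illustration of the p = 2 regression guarantee ∥Ax′ − b∥₂ ≤ (1 + ϵ) minₓ ∥Ax − b∥₂, reusing sparse_embedding from above: embed [A b] once, then solve the small sketched problem. The sizes are placeholders, and m = poly(d/ϵ) is assumed rather than taken from the paper's exact bounds.

    n, d, m = 100_000, 20, 4_000   # illustrative sizes; m should be poly(d/eps)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, d))
    b = rng.standard_normal(n)

    SAb = sparse_embedding(np.column_stack([A, b]), m, seed=1)
    x_sk, *_ = np.linalg.lstsq(SAb[:, :d], SAb[:, d], rcond=None)  # sketched solve
    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)                   # exact solve

    # The residual ratio should be close to 1, i.e., within the (1 + eps) factor.
    print(np.linalg.norm(A @ x_sk - b) / np.linalg.norm(A @ x_ls - b))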
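For the low-rank item, the following is a simplified sketch-and-solve variant, not the paper's exact algorithm (which achieves the stated Õ(nκ²ϵ⁻⁴ + κ³ϵ⁻⁵) term): project A onto the row space of SA, then truncate to rank κ. The sketch size and rank below are again illustrative.

    k, m_lr = 5, 15                # rank and sketch size; real bounds are poly(k/eps)
    SA = sparse_embedding(A, m_lr, seed=2)
    _, _, Vt = np.linalg.svd(SA, full_matrices=False)   # basis for rowspace(SA)
    U, s, Wt = np.linalg.svd(A @ Vt.T, full_matrices=False)
    A_k = (U[:, :k] * s[:k]) @ (Wt[:k] @ Vt)            # rank-k approximation of A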

Date

01 Jan 2017

Publication

Journal of the ACM

Authors

Kenneth L. Clarkson, David P. Woodruff