Publication
STOC 2011
Conference paper
Subspace embeddings for the L1-norm with applications
Abstract
We show there is a distribution over linear mappings R : ℓ1^n → ℓ1^{O(d log d)}, such that with arbitrarily large constant probability, for any fixed d-dimensional subspace L, for all x ∈ L we have ∥x∥1 ≤ ∥Rx∥1 ≤ O(d log d)∥x∥1. This provides the first analogue of the ubiquitous subspace Johnson-Lindenstrauss embedding for the ℓ1-norm. Importantly, the target dimension and distortion are independent of the ambient dimension n. We give several applications of this result. First, we give a faster algorithm for computing well-conditioned bases. Our algorithm is simple, avoiding the linear programming machinery required by previous algorithms. We also give faster algorithms for least absolute deviation regression and ℓ1-norm best fit hyperplane problems, as well as the first single-pass streaming algorithms with low space for these problems. These results are motivated by practical problems in image analysis, spam detection, and statistics, where the ℓ1-norm is used in studies where outliers may be safely and effectively ignored. This is because the ℓ1-norm is more robust to outliers than the ℓ2-norm. © 2011 ACM.
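To illustrate the flavor of the result, the sketch below applies a random linear map to a vector in a low-dimensional subspace and measures how the ℓ1-norm changes. The abstract does not specify the construction of R; the use of a matrix of i.i.d. Cauchy random variables (a 1-stable distribution often used for ℓ1 sketching) and the particular scaling are illustrative assumptions, not necessarily the paper's exact embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 1000, 5   # ambient dimension n, subspace dimension d
k = 60           # target dimension, on the order of d log d

# A random d-dimensional subspace of R^n, and a vector x inside it.
B = rng.standard_normal((n, d))
x = B @ rng.standard_normal(d)

# Illustrative sketching matrix: i.i.d. standard Cauchy entries,
# scaled by 1/k (an assumption, not the paper's stated construction).
S = rng.standard_cauchy((k, n)) / k

# By 1-stability of the Cauchy distribution, each coordinate of S @ x
# (before scaling) is distributed as Cauchy with scale ||x||_1, so
# ||S @ x||_1 concentrates around a modest multiple of ||x||_1.
ratio = np.linalg.norm(S @ x, 1) / np.linalg.norm(x, 1)
print(f"l1 distortion ratio: {ratio:.3f}")
```

Note that this only probes a single vector; the theorem's strength is that one draw of R works simultaneously for every x in the subspace L, with distortion independent of n.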