Publication
SODA 2013
Conference paper

Lower bounds for adaptive sparse recovery

Abstract

We give lower bounds for the problem of stable sparse recovery from adaptive linear measurements. In this problem, one would like to estimate a vector x ∈ ℝ^n from m linear measurements A_1x, ..., A_mx. One may choose each vector A_i based on A_1x, ..., A_{i-1}x, and must output x̂ satisfying

∥x̂ − x∥_p ≤ (1 + ε) · min_{k-sparse x′} ∥x − x′∥_p

with probability at least 1 − δ > 2/3, for some p ∈ {1, 2}. For p = 2, it was recently shown that this is possible with m = O((1/ε) k log log(n/k)), while nonadaptively it requires Θ((1/ε) k log(n/k)). It is also known that, even adaptively, m = Ω(k/ε) measurements are required for p = 2. For p = 1, there is a non-adaptive upper bound of Õ((1/√ε) k log n). We show:

• For p = 2, m = Ω(log log n). This is tight for k = O(1) and constant ε, and shows that the log log n dependence is correct.
• If the measurement vectors are chosen in R "rounds", then m = Ω(R log^{1/R} n). For constant ε, this matches the previously known upper bound up to an O(1) factor in R.
• For p = 1, m = Ω(k / (√ε · log(k/ε))). This shows that adaptivity cannot improve by more than logarithmic factors, providing the analogue of the m = Ω(k/ε) bound for p = 2.

Copyright © SIAM.
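
As a concrete illustration of the objects defined above (a hypothetical sketch, not code from the paper): the NumPy snippet below computes the best k-sparse approximation error min_{k-sparse x′} ∥x − x′∥_p and checks whether a candidate estimate x̂ meets the (1 + ε) guarantee. The measurement step shows the nonadaptive case y = Ax; an adaptive scheme would instead choose each row A_i after observing A_1x, ..., A_{i-1}x. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def best_k_sparse_error(x, k, p):
    """min over k-sparse x' of ||x - x'||_p, attained by keeping the
    k largest-magnitude entries of x and zeroing out the rest."""
    residual = x.copy()
    keep = np.argsort(np.abs(x))[-k:]  # indices of the k largest entries
    residual[keep] = 0.0
    return np.linalg.norm(residual, ord=p)

def satisfies_guarantee(x, x_hat, k, p, eps):
    """Check the stable sparse recovery guarantee
    ||x_hat - x||_p <= (1 + eps) * min_{k-sparse x'} ||x - x'||_p."""
    return np.linalg.norm(x_hat - x, ord=p) <= (1 + eps) * best_k_sparse_error(x, k, p)

rng = np.random.default_rng(0)
n, m, k, eps, p = 1000, 50, 5, 0.1, 2

# Test signal: k-sparse plus small noise, a common benchmark case.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0.0, 10.0, k)
x += 0.01 * rng.normal(size=n)

# One nonadaptive round of linear measurements: y = Ax. An adaptive
# scheme would pick each row A_i after seeing A_1 x, ..., A_{i-1} x.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

# A trivial "estimate" for illustration only: keep the k largest
# entries of x itself (an oracle, not a recovery algorithm).
x_hat = x.copy()
x_hat[np.argsort(np.abs(x))[:-k]] = 0.0
print(satisfies_guarantee(x, x_hat, k, p, eps))  # True
```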

Authors

Eric Price, David P. Woodruff