
Optimal estimation of ℓ1-regularization prior from a regularized empirical Bayesian risk standpoint


Abstract

We address the problem of prior matrix estimation for the solution of ℓ1-regularized ill-posed inverse problems. From a Bayesian viewpoint, we show that such a matrix can be regarded as an influence matrix in a multivariate ℓ1-Laplace density function. Assuming a training set is given, the prior matrix design problem is cast as a maximum likelihood term with an additional sparsity-inducing term. This formulation results in an unconstrained yet nonconvex optimization problem. Memory requirements as well as computation of the nonlinear, nonsmooth sub-gradient equations are prohibitive for large-scale problems. Thus, we introduce an iterative algorithm to design efficient priors for such large problems. We further demonstrate that solutions of ill-posed inverse problems incorporating ℓ1-regularization with the learned prior matrix generally perform better than commonly used regularization techniques where the prior matrix is chosen a priori. © 2012 American Institute of Mathematical Sciences.
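For orientation, a minimal LaTeX sketch of one common form the setting described above can take. The symbols A (forward operator), b (data), W (prior/influence matrix), the training samples x_k, and the penalty weight α are assumptions introduced here for illustration; the exact formulation in the paper may differ.

% ℓ1-regularized inverse problem with a prior (influence) matrix W (illustrative form)
\[
  \hat{x}(W) \;=\; \arg\min_{x} \; \tfrac{1}{2}\,\lVert A x - b \rVert_2^2 \;+\; \lVert W x \rVert_1 .
\]
% Prior design sketch: negative log-likelihood under a multivariate ℓ1-Laplace model
% p(x) \propto |\det W|\,\exp(-\lVert W x \rVert_1), summed over training samples x_k,
% plus a sparsity-inducing penalty on W (nonconvex overall).
\[
  \hat{W} \;=\; \arg\min_{W} \; \sum_{k} \Big( \lVert W x_k \rVert_1 \;-\; \log\lvert \det W \rvert \Big) \;+\; \alpha \,\lVert W \rVert_1 .
\]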

Date

01 Aug 2012

Publication

Inverse Problems and Imaging

Authors
