Publication
ICASSP 2016
Conference paper
Beyond L2-loss functions for learning sparse models
Abstract
In sparse learning, the squared Euclidean distance is a popular choice for measuring approximation quality. However, the use of other forms of parametrized loss functions, including asymmetric losses, has generated research interest. In this paper, we perform sparse learning using a broad class of smooth piecewise linear-quadratic (PLQ) loss functions, including robust and asymmetric losses that adapt to many real-world scenarios. The proposed framework also supports heterogeneous data modeling by allowing different PLQ penalties for different blocks of the residual vector (split-PLQ). We demonstrate the impact of the proposed sparse learning framework on image recovery, and apply the split-PLQ loss approach to tag refinement for image annotation and retrieval.
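As a rough, hypothetical illustration of the idea (not the algorithm from the paper), the sketch below replaces the usual L2 data-fitting term in l1-regularized sparse coding with the Huber loss, a smooth PLQ penalty that downweights gross outliers. The solver is plain proximal gradient, and all names (sparse_plq, huber_grad, soft_threshold) are ours, introduced only for this example.

```python
import numpy as np

def huber_grad(r, delta=1.0):
    # Gradient of the Huber loss (a smooth PLQ penalty) w.r.t. the residual r:
    # quadratic near zero, linear beyond delta, so large residuals saturate.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def soft_threshold(x, t):
    # Proximal operator of the l1 norm (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_plq(D, y, lam=0.1, delta=1.0, step=None, iters=500):
    """Proximal gradient for: min_x  Huber(y - D x) + lam * ||x||_1."""
    if step is None:
        # Huber curvature is at most 1, so ||D||_2^2 bounds the Lipschitz constant.
        step = 1.0 / np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        r = y - D @ x
        # Gradient step on the smooth PLQ term, then the l1 prox.
        x = soft_threshold(x + step * (D.T @ huber_grad(r, delta)), step * lam)
    return x

# Usage: recover a sparse code under a robust loss, with a few gross outliers
# in the observations that would dominate an ordinary L2 objective.
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 200))
x_true = np.zeros(200)
x_true[:5] = rng.standard_normal(5)
y = D @ x_true + 0.01 * rng.standard_normal(100)
y[::25] += 5.0  # corrupted entries
x_hat = sparse_plq(D, y, lam=0.05)
```

Swapping huber_grad for the gradient of an asymmetric PLQ penalty (e.g., a quantile-style loss), or applying different penalties to different blocks of the residual, gives a schematic analogue of the asymmetric and split-PLQ variants the abstract describes.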