Ranking-based evaluation of regression models
Saharon Rosset, Claudia Perlich, et al.
ICDM 2005
We consider the generic regularized optimization problem β̂(λ) = arg minβ L(y, Xβ) + λJ(β). Efron, Hastie, Johnstone and Tibshirani [Ann. Statist. 32 (2004) 407-499] have shown that for the LASSO - that is, if L is squared error loss and J(β) = ‖β‖₁ is the ℓ₁ norm of β - the optimal coefficient path is piecewise linear, that is, ∂β̂(λ)/∂λ is piecewise constant. We derive a general characterization of the properties of (loss L, penalty J) pairs which give piecewise linear coefficient paths. Such pairs allow for efficient generation of the full regularized coefficient paths. We investigate the nature of efficient path-following algorithms which arise. We use our results to suggest robust versions of the LASSO for regression and classification, and to develop new, efficient algorithms for existing problems in the literature, including Mammen and van de Geer's locally adaptive regression splines.
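As a minimal illustration of the piecewise linearity referred to above (not taken from the paper), the sketch below uses scikit-learn's LARS-LASSO implementation of the Efron et al. algorithm on synthetic data; the names X, y and beta_true are assumptions of this example. The returned penalty values are the breakpoints of the path β̂(λ), and the coefficients move linearly in λ between consecutive breakpoints.

```python
# Sketch: the LARS-LASSO algorithm returns the knots of the piecewise
# linear coefficient path beta_hat(lambda); between knots, the full path
# is recovered by linear interpolation of the coefficient vectors.
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 1.5])  # assumed toy signal
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# alphas: penalty values where the active set changes (the breakpoints);
# coefs[:, k]: coefficient vector at breakpoint k.
alphas, active, coefs = lars_path(X, y, method="lasso")
for k, (a, b) in enumerate(zip(alphas, coefs.T)):
    print(f"knot {k}: lambda = {a:.4f}, beta = {np.round(b, 3)}")
```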