ICMLA 2011
Conference paper

L1 vs. L2 regularization in text classification when learning from labeled features



In this paper we study the problem of building document classifiers from labeled features and unlabeled documents, where not all features are helpful for learning. This is an important setting, since building classifiers from labeled words has recently been shown to require considerably less human labeling effort than building classifiers from labeled documents. We propose the use of Generalized Expectation (GE) criteria combined with an L1 regularization term for learning from labeled features. This lets the feature labels guide model expectation constraints, while approaching feature selection from a regularization perspective. We show that GE criteria combined with L1 regularization consistently outperform the best previously reported results in the literature under the same setting, obtained using L2 regularization, with accuracy gains of up to 12%. Furthermore, the results obtained with GE criteria and the L1 regularizer are competitive with those obtained in the traditional instance-labeling setting at the same labeling cost. © 2011 IEEE.
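The contrast between the two regularizers can be illustrated outside the paper's GE framework. Below is a minimal, hypothetical sketch (not the authors' method, which additionally uses GE expectation constraints) of plain logistic regression trained with either an L2 penalty (gradient step with weight decay) or an L1 penalty (proximal soft-thresholding step), showing that L1 drives the weights of uninformative features to exactly zero:

```python
import numpy as np

def train_logreg(X, y, penalty="l2", lam=0.1, lr=0.1, steps=500):
    """Toy logistic regression with an L2 or L1 penalty.

    Illustrative only: the paper combines L1 regularization with
    Generalized Expectation criteria, which are NOT implemented here.
    """
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
        grad = X.T @ (p - y) / len(y)             # gradient of log-loss
        if penalty == "l2":
            w -= lr * (grad + lam * w)            # weight decay
        else:
            w -= lr * grad                        # proximal step for L1:
            w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

# Synthetic data: only the first 2 of 20 features are informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w_l2 = train_logreg(X, y, penalty="l2")
w_l1 = train_logreg(X, y, penalty="l1")
print("nonzero weights, L2:", int(np.sum(np.abs(w_l2) > 1e-6)))
print("nonzero weights, L1:", int(np.sum(np.abs(w_l1) > 1e-6)))
```

Under L2, every weight stays small but nonzero; under L1, the soft-thresholding step zeroes out most of the noise features, which mirrors the feature-selection effect the abstract attributes to the L1 regularizer.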


01 Dec 2011
