Representing and Reasoning with Defaults for Learning Agents
Benjamin N. Grosof
AAAI-SS 1993
Feature ranking is a fundamental machine learning task with various applications, including feature selection and decision tree learning. We describe and analyze a new feature ranking method that supports categorical features with a large number of possible values. We show that existing ranking criteria rank a feature according to the training error of a predictor based on that feature, an approach that can fail when ranking categorical features with many values. We propose the Ginger ranking criterion, which estimates the generalization error of the predictor associated with the Gini index. We show that for almost all training sets, the Ginger criterion produces an accurate estimate of the true generalization error, regardless of the number of values of a categorical feature. We also address the question of finding the optimal predictor based on a single categorical feature, and show that the predictor associated with the misclassification error criterion has the minimal expected generalization error. We bound the bias of this predictor with respect to the generalization error of the Bayes optimal predictor, and analyze its concentration properties. We demonstrate the efficiency of our approach for feature selection and for learning decision trees in a series of experiments on synthetic and natural data sets.
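The following is a minimal illustrative sketch of the setup the abstract describes: each categorical feature induces a single-feature predictor (majority label per feature value, i.e. the misclassification-error criterion), and ranking by that predictor's training error can be misleading for features with many values. The function names, the toy data, and the hold-out split are assumptions for illustration only; the sketch does not reproduce the paper's Ginger estimator, which is an analytic estimate of generalization error rather than a hold-out measurement.

```python
import numpy as np

def single_feature_predictor(x, y):
    """Majority-vote label for each value of a categorical feature."""
    pred = {}
    for v in np.unique(x):
        labels, counts = np.unique(y[x == v], return_counts=True)
        pred[v] = labels[np.argmax(counts)]
    return pred

def training_error(x, y):
    """Training error of the single-feature majority-vote predictor."""
    pred = single_feature_predictor(x, y)
    return np.mean([pred[xi] != yi for xi, yi in zip(x, y)])

def holdout_error(x_tr, y_tr, x_te, y_te, default_label):
    """Held-out error; unseen feature values fall back to a default label."""
    pred = single_feature_predictor(x_tr, y_tr)
    return np.mean([pred.get(xi, default_label) != yi for xi, yi in zip(x_te, y_te)])

# Toy demonstration: a many-valued but uninformative feature attains a
# deceptively low training error, while a held-out estimate reveals it is
# useless; a genuinely informative binary feature does not show this gap.
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                 # binary labels
x_noise = rng.integers(0, 150, n)         # ~150 values, independent of y
x_signal = y ^ (rng.random(n) < 0.2)      # 2 values, ~80% agreement with y

for name, x in [("many-valued noise", x_noise), ("binary signal", x_signal)]:
    tr, te = slice(0, 100), slice(100, None)
    default = np.bincount(y[tr]).argmax()
    print(name,
          "train err:", round(training_error(x[tr], y[tr]), 3),
          "holdout err:", round(holdout_error(x[tr], y[tr], x[te], y[te], default), 3))
```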