Publication
SDM 2008
Conference paper
Type independent correction of sample selection bias via structural discovery and re-balancing
Abstract
Sample selection bias is a common problem in many real-world applications, where training data are obtained under realistic constraints that make them follow a different distribution from the future testing data. For example, in hospital clinical studies it is common practice to build models from eligible volunteers as the training data and then apply the models to the entire population. Because these volunteers are usually not selected at random, the training set may not be drawn from the same distribution as the test set; such a dataset suffers from "sample selection bias" or "covariate shift". In the past few years, much work has been proposed to reduce sample selection bias, mainly by statically matching the distribution of the training set to that of the test set. In this paper, we do not model the two distributions directly. Instead, we propose to discover the natural structure of the target distribution, under which different types of sample selection bias can be clearly observed and then reduced by generating a new sample set from the structure. In particular, unlabeled data are incorporated into the new sample set to enhance the ability to minimize sample selection bias. One main advantage of the proposed approach is that it can correct all types of sample selection bias, whereas most previously proposed approaches are designed for specific types of bias. In experimental studies, we simulate all three types of sample selection bias on 17 different classification problems, so 17 × 3 = 51 biased datasets are used to test the performance of the proposed algorithm. The baseline models include decision tree, naive Bayes, nearest neighbor, and logistic regression. Across all combinations, the increase in accuracy over the uncorrected sample set is 30% on average for each baseline model. Copyright © by SIAM.
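To make the high-level idea concrete, the sketch below shows one plausible instantiation in Python: structure discovery is approximated with k-means clustering over the pooled labeled and unlabeled data, labels for unlabeled points are filled in from their nearest labeled neighbors, and the corrected training set is drawn with equal weight from each cluster. Every specific choice here (k-means, the k-NN label fill-in, the helper name rebalance_by_structure, the per-cluster sample size) is an illustrative assumption, not the authors' published algorithm.

    # Illustrative sketch only -- not the exact algorithm from the paper.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    def rebalance_by_structure(X_train, y_train, X_unlabeled,
                               n_clusters=10, random_state=0):
        """Discover cluster structure over labeled + unlabeled data, then
        draw a new training sample with equal weight per cluster so the
        corrected set follows the structure of the target distribution."""
        # Pool labeled training data with unlabeled (test-distribution) data.
        X_all = np.vstack([X_train, X_unlabeled])
        clusters = KMeans(n_clusters=n_clusters, n_init=10,
                          random_state=random_state).fit_predict(X_all)

        # Fill in labels for unlabeled points from nearest labeled
        # neighbors (an assumption made for this sketch).
        knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
        y_all = np.concatenate([y_train, knn.predict(X_unlabeled)])

        # Re-balance: sample the same number of points from each cluster,
        # so no region of the target structure is under-represented.
        rng = np.random.default_rng(random_state)
        per_cluster = max(1, len(X_train) // n_clusters)
        idx = []
        for c in range(n_clusters):
            members = np.flatnonzero(clusters == c)
            if members.size:
                idx.extend(rng.choice(members, size=per_cluster, replace=True))
        idx = np.asarray(idx)
        return X_all[idx], y_all[idx]

A model trained on the returned (X, y) pair would then be evaluated on held-out test data; because the resampled set covers every discovered cluster evenly, it does not depend on which specific type of selection bias distorted the original training set, which is the type-independence property the abstract emphasizes.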