Publication
I-CARE 2014
Conference paper
Decisions under drift: Adapting binary decision thresholds to drifts in test distribution
Abstract
Most predictive models built for binary decision problems compute a real-valued score as an intermediate step and then apply a threshold to this score to make the final decision. Conventionally, the threshold is chosen to optimize a desired performance metric (such as accuracy, F-score, precision@k, or recall@k) on the training set. In practice, however, the same threshold often yields sub-optimal performance on a test set because of drift in the test distribution. In this work we propose a method that adaptively changes the threshold so that the optimal performance achieved on the training set is maintained. The method is completely unsupervised: it fits a parametric mixture model to the test scores and chooses the threshold that optimizes the performance metric under the corresponding parametric approximation.
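To make the idea concrete, below is a minimal illustrative sketch of the general approach described in the abstract, not the paper's exact procedure. It assumes a two-component Gaussian mixture over the test scores, treats the higher-mean component as the positive class, and picks the threshold that maximizes estimated accuracy under that parametric approximation; the component count, the Gaussian assumption, and the choice of metric are all illustrative.

```python
# Hypothetical sketch: unsupervised threshold adaptation via a parametric
# mixture fit to unlabeled test scores (assumptions noted in the lead-in).
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def adaptive_threshold(test_scores, n_grid=500):
    scores = np.asarray(test_scores).reshape(-1, 1)

    # Unsupervised fit of a two-component Gaussian mixture to the test scores.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
    means = gmm.means_.ravel()
    stds = np.sqrt(gmm.covariances_).ravel()
    weights = gmm.weights_.ravel()

    # Assumption: the component with the larger mean corresponds to positives.
    pos, neg = np.argmax(means), np.argmin(means)

    # Estimated accuracy at threshold t under the fitted mixture:
    # w_pos * P(score > t | pos) + w_neg * P(score <= t | neg).
    grid = np.linspace(scores.min(), scores.max(), n_grid)
    est_acc = (weights[pos] * norm.sf(grid, means[pos], stds[pos])
               + weights[neg] * norm.cdf(grid, means[neg], stds[neg]))
    return grid[np.argmax(est_acc)]

# Example usage with synthetic drifted scores.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.3, 0.1, 800), rng.normal(0.75, 0.08, 200)])
print("adapted threshold:", adaptive_threshold(scores))
```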