Publication
IJCAI 2024
Conference paper
Instance-Level Metalearning for Outlier Detection
Abstract
A machine learning task can be viewed as a sequential pipeline of algorithmic choices, such as imputation, scaling, feature engineering, algorithm selection, and hyper-parameter setting. The goal of automated machine learning is to select this sequence automatically. Such approaches have been designed successfully for supervised settings, but they remain challenging for unsupervised tasks such as outlier detection. The main difficulty in the unsupervised case is that selecting the optimal algorithmic choices requires label-centric feedback, which is not available. In this paper, we present an instance-level metalearning approach for outlier detection that trains a meta-model, which is then used on a new (unlabeled) data set to predict outliers. We show the robustness of our approach on several benchmarks from the OpenML repository. The code and data sets for this work are publicly available at our anonymized GitHub repository at https://anonymous.4open.science/r/t-autood-4C0C.
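To make the "pipeline of algorithmic choices" concrete, the sketch below shows one possible outlier-detection pipeline whose stages (imputation strategy, scaling, detector, hyper-parameters) are exactly the kind of choices an automated system would have to select. This is an illustrative example only, not the paper's method; the components (scikit-learn's SimpleImputer, StandardScaler, and IsolationForest) are assumptions chosen for the sketch.

```python
# Minimal sketch of an outlier-detection pipeline (illustrative; not the paper's method).
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:5] += 6.0        # a few injected outliers
X[10, 2] = np.nan   # a missing value that the imputer must handle

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # imputation choice
    ("scale", StandardScaler()),                   # scaling choice
    ("detect", IsolationForest(random_state=0)),   # algorithm + hyper-parameter choices
])

labels = pipeline.fit_predict(X)  # -1 = predicted outlier, 1 = inlier
print((labels == -1).sum(), "points flagged as outliers")
```

In an automated setting, each stage above (which imputer, whether to scale, which detector, which hyper-parameters) would be selected without label-centric feedback, which is the difficulty the abstract highlights for unsupervised tasks.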