Publication
AutoML-Conf 2022
Conference paper
On the Optimality Gap of Warm-Started Hyperparameter Optimization
Abstract
We study the general framework of warm-started hyperparameter optimization (HPO), in which we have some source datasets (tasks) on which we have already performed HPO, and we wish to leverage the results of these HPO runs to warm-start the HPO on an unseen target dataset and perform few-shot HPO. Various meta-learning schemes have been proposed for this problem over the last decade and more. In this paper, we theoretically analyse the optimality gap of the hyperparameter obtained via such warm-started few-shot HPO, and provide novel results for multiple existing meta-learning schemes. We show how these results allow us to identify situations where certain schemes have an advantage over others.
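The warm-started few-shot HPO setup described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's actual method: the ranking-by-frequency aggregation of source-task results and all function names here are assumptions made for illustration.

```python
def warm_start_candidates(source_best, k):
    """Aggregate the best hyperparameters found on the source tasks
    into a small candidate pool for few-shot HPO on the target.
    Here we rank configurations by how often they were optimal on a
    source task (an illustrative aggregation scheme)."""
    counts = {}
    for cfg in source_best:
        counts[cfg] = counts.get(cfg, 0) + 1
    ranked = sorted(counts, key=counts.get, reverse=True)
    return ranked[:k]


def few_shot_hpo(candidates, target_loss):
    """Few-shot HPO: evaluate only the warm-started candidates on the
    target task and return the one with the lowest loss."""
    return min(candidates, key=target_loss)


# Toy example: the hyperparameter is a learning rate. Source-task HPO
# favored 0.1 most often; the (hypothetical) target loss is also
# minimized near 0.1, so warm-starting recovers a good configuration
# from only k=2 target evaluations.
source_best = [0.1, 0.01, 0.1, 0.001, 0.1]
pool = warm_start_candidates(source_best, k=2)
best = few_shot_hpo(pool, target_loss=lambda lr: (lr - 0.1) ** 2)
print(best)  # → 0.1
```

The optimality gap studied in the paper is, informally, the difference between the target-task performance of `best` and that of the truly optimal hyperparameter, as a function of how the candidate pool was constructed from the source tasks.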