Publication
NeurIPS 2023
Workshop paper
Large Language Model Routing with Benchmark Datasets
Abstract
There is a rapidly growing number of open-source Large Language Models (LLMs) and benchmark datasets for comparing them. While some models dominate these benchmarks, typically no single model achieves the best accuracy across all tasks and use cases. In this work, we address the challenge of selecting the best LLM for a new task from a collection of models. We propose a new formulation of the problem, in which benchmark datasets are repurposed to learn a "router" model that performs this LLM selection, and we show that the problem reduces to a collection of binary classification tasks. We demonstrate the utility and limitations of learning model routers from various benchmark datasets, where we consistently improve performance over using any single model for all tasks.
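The abstract's reduction can be sketched as follows: for each candidate LLM, benchmark results provide binary correctness labels on past queries, from which a per-model classifier is learned; the router then sends a new query to the model with the highest predicted chance of answering correctly. This is a minimal illustrative sketch, not the authors' implementation: the model names, toy benchmark data, and the similarity-weighted stand-in for a learned binary classifier are all invented for this example.

```python
# Toy sketch of LLM routing via per-model binary correctness classifiers.
# All model names and benchmark entries below are hypothetical.

def tokens(text):
    """Crude bag-of-words featurization of a query."""
    return set(text.lower().split())

# Benchmark queries with per-model correctness labels (1 = answered correctly).
benchmark = [
    ("translate hello to french", {"model_a": 1, "model_b": 0}),
    ("what is 17 times 24",       {"model_a": 0, "model_b": 1}),
    ("summarize this paragraph",  {"model_a": 1, "model_b": 0}),
    ("solve x squared minus 5x",  {"model_a": 0, "model_b": 1}),
]

def score(query, model):
    """Stand-in for a learned binary classifier: similarity-weighted
    average of the model's correctness on benchmark queries."""
    q = tokens(query)
    sims = [(len(q & tokens(text)), labels[model])
            for text, labels in benchmark]
    total = sum(s for s, _ in sims) or 1
    return sum(s * y for s, y in sims) / total

def route(query):
    """Send the query to the model with the highest predicted correctness."""
    models = benchmark[0][1].keys()
    return max(models, key=lambda m: score(query, m))
```

In this toy setup, `route("what is 12 times 9")` selects `model_b`, since the query resembles benchmark items that only `model_b` answered correctly; in the paper's setting, the hand-rolled `score` would be replaced by a trained binary classifier per model.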