16 Dec 2021

IBM’s CodeFlare significantly cuts the time to automate transfer learning tasks for foundation models

With CodeFlare, the time it takes to train an AI system can drop from weeks to a single day in a hybrid cloud environment.

Earlier this year, we presented CodeFlare at the Ray Summit. Our open-source tool promised to reduce the time data scientists spend setting up, running, and scaling machine learning (ML) models, so they could spend less time configuring experiments and more time digging into the results of their research. Back then, users applied our framework to analyze and optimize hundreds of thousands of pipelines for training ML models, with CodeFlare cutting the training time for many from hours to minutes.

Just six months later, we’ve refined CodeFlare so that it can take on foundation models,1 and it’s now available as open-source software. This effectively takes CodeFlare from an exploratory tool for data science researchers to a tool that can automate AI and ML workflows on IBM’s hybrid cloud.

Faster, automated foundation model training

Businesses use foundation models for a multitude of tasks. For example, a financial-services company could customize a foundation model just for sentiment analysis. Today, it can take an incredible amount of time, sometimes weeks, to gather the right body of data and train an AI model on it, along with the multiple upstream and downstream tasks involved, each often written by a different, siloed team.

Our goal is to make the generation of downstream models ‘one-click’ easy for data scientists.

CodeFlare simplifies this entire process using a Python-based interface for these foundation model “pipelines” — a sequence of multiple, often heterogeneous steps — making it simpler to integrate, parallelize, and share data. These tasks of preprocessing, validating, and adapting foundation models for business use cases are now fully automated on a hybrid cloud platform.
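To make the idea of a pipeline as "a sequence of multiple, often heterogeneous steps" concrete, here is a minimal sketch in plain Python. The `Pipeline` class and step functions below are illustrative stand-ins, not CodeFlare's actual API: they show only the general shape of chaining heterogeneous processing steps behind one interface.

```python
def deduplicate(records):
    """Drop exact-duplicate records while preserving order."""
    seen, out = set(), []
    for r in records:
        if r not in seen:
            seen.add(r)
            out.append(r)
    return out

def filter_unsafe(records, blocklist=("unsafe",)):
    """Remove records containing any blocklisted term (a toy safety filter)."""
    return [r for r in records if not any(term in r for term in blocklist)]

class Pipeline:
    """Chain heterogeneous steps; each step is any callable from data to data."""
    def __init__(self, steps):
        self.steps = steps

    def run(self, data):
        for step in self.steps:
            data = step(data)
        return data

# Compose preprocessing steps into one reusable pipeline.
prep = Pipeline([deduplicate, filter_unsafe])
print(prep.run(["good text", "good text", "unsafe text"]))  # -> ['good text']
```

In a real system, each step could be arbitrarily heavy (tokenization, model validation, fine-tuning) and run in parallel across a cluster; the point of the shared interface is that composing and swapping steps stays this simple.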

How CodeFlare works

In the sentiment analysis case, CodeFlare starts by cleaning up the input data, de-duplicating it and removing unsafe or biased content. Then, it tunes a foundation model for all of the specific tasks needed for the organization's sentiment analysis. With just a few lines of code, a data scientist can operationalize hundreds of such pipelines and automate these tasks whenever they need to make changes.
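The clean-then-tune workflow above can be sketched as a fan-out: one cleaned dataset feeds many task-specific tuning jobs. Everything here is a hypothetical illustration, not CodeFlare's API; `clean` and `tune` are toy stand-ins for the real cleaning and adaptation steps.

```python
def clean(records):
    """Stand-in cleaning step: de-duplicate and drop flagged content."""
    return [r for i, r in enumerate(records)
            if r not in records[:i] and "unsafe" not in r]

def tune(task, records):
    """Stand-in for adapting a foundation model to one downstream task;
    returns a toy summary instead of an actual model."""
    return {"task": task, "examples": len(records)}

# One cleaned corpus, fanned out to several sentiment-analysis tasks.
tasks = ["news-sentiment", "earnings-call-sentiment", "social-sentiment"]
data = ["up 5%", "up 5%", "unsafe rumor", "down 2%"]

cleaned = clean(data)
models = [tune(t, cleaned) for t in tasks]  # one pipeline per task
print(models[0])  # -> {'task': 'news-sentiment', 'examples': 2}
```

Re-running the list comprehension is the "few lines of code" in spirit: changing the data or adding a task re-triggers every downstream tuning job without any per-pipeline setup.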

Our goal is to make generating downstream models as easy for data scientists as a single mouse click. Today, organizations can use CodeFlare on their own datasets on our hybrid cloud platform, without the data leaving their environment. Financial sentiment analysis is a motivating use case for our team: Internally, we’re using it to show business users how CodeFlare automates our validation of foundation language models on various state-of-the-art benchmarks.

This is just one of the many use cases CodeFlare could help simplify. You can download and experiment with CodeFlare on your own processes today.

The authors would like to acknowledge the entire CodeFlare team involved in advancing this work, including Edward Epstein, Linsong Chu, Masayasu Muraoka, Marquita Ellis, and Rong Zhang.

References

  1. Awasthy, P., Bhattacharjee, B., Kender, J. & Florian, R. Predictive Model Selection for Transfer Learning in Sequence Labeling Tasks. Association for Computational Linguistics, pp. 113–118 (2020).