Foundation Models
Foundation models can be applied across domains and tasks, but scaling them and applying AI to specific use cases remain challenging. At IBM Research, we create new foundation models for business, integrating deep domain expertise, a focus on responsible AI, and a commitment to open-source innovation.
Overview
Modern AI models can learn from millions of examples to help find new solutions to difficult problems. But building new systems takes time, and lots of data. The next wave in AI will replace task-specific models with ones trained on a broad set of unlabeled data that can be reused for many different tasks with minimal fine-tuning. These are called foundation models: a single model can serve as the foundation for many applications. Using self-supervised learning and fine-tuning, the model can apply knowledge it has learned in general to a specific task.
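As a concrete illustration of this pretrain-then-adapt pattern, the sketch below fine-tunes an off-the-shelf pretrained language model on a small labeled classification dataset using the open-source Hugging Face Transformers and Datasets libraries. The library, checkpoint, and dataset here are illustrative assumptions rather than the models or tooling described on this page; the point is only that the expensive pretraining happens once on broad data, while the task-specific step is a comparatively light fine-tune.

```python
# Minimal sketch (not IBM's pipeline): adapt a pretrained foundation model
# to a specific task with a small amount of labeled data.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Any pretrained encoder works here; the checkpoint name is illustrative.
checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# A small labeled dataset stands in for the task-specific data;
# the broad, unlabeled pretraining has already been done for us.
dataset = load_dataset("imdb")

def tokenize(batch):
    # Convert raw text into the token IDs the pretrained model expects.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

# Fine-tune briefly on a small labeled subset: only this step is task specific.
args = TrainingArguments(
    output_dir="finetuned-model",
    num_train_epochs=1,
    per_device_train_batch_size=16,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(500)),
)
trainer.train()
```

The same pattern generalizes beyond text classification: swap the head and the labeled dataset, and the shared pretrained backbone is reused unchanged.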
We believe that foundation models will dramatically accelerate AI adoption in business. Reducing time spent labeling data and programming models will make it much easier for businesses to dive in, allowing more companies to deploy AI in a wider range of mission-critical situations. Our goal is to bring the power of foundation models to every enterprise in a frictionless hybrid-cloud environment.
Learn more about foundation models
Our work
- From surf to satellites: Campbell Watson is bringing AI to Earth science (Deep Dive, Peter Hess)
- Serving customized AI models at scale with LoRA (Research, Kim Martineau)
- An air traffic controller for LLMs (Explainer, Kim Martineau)
- Introducing Prithvi WxC, a new general-purpose AI model for weather and climate (News, Kim Martineau)
- Simplify your Code LLM solutions using CodeLLM DevKit (Technical note, Rangeet Pan, Rahul Krishna, Saurabh Sinha, Raju Pavuluri, and Maja Vukovic)
- A toxic language filter built for speed (News, Kim Martineau)
- See more of our work on Foundation Models