What’s next in AI is foundation models at scale
AI is revolutionizing how business gets done, but popular models can be costly and are often proprietary. At IBM Research, we’re designing powerful new foundation models and generative AI systems with trust and transparency at their core. We’re working to drastically lower the barrier to entry for AI development, and to do that, we’re committed to an open-source approach to enterprise AI.
Introducing watsonx.ai
Explore our next-generation enterprise platform, powered by IBM's full technology stack and designed to enable enterprises to train, tune, and deploy AI models.
Our work
An air traffic controller for LLMs
- Explainer, Kim Martineau: Why we’re teaching LLMs to forget things
- Explainer, Kim Martineau: New algorithms open possibilities for training AI models on analog chips
- Research, Peter Hess: For LLMs, IBM’s NorthPole chip overcomes the tradeoff between speed and efficiency
- Research, Peter Hess: How memory augmentation can improve large language model efficiency and flexibility
- Research, Peter Hess: Introducing Prithvi WxC, a new general-purpose AI model for weather and climate
- See more of our work on AI
MIT-IBM Watson AI Lab
We’re partnering with the sharpest minds at MIT to advance AI research in areas such as healthcare, security, and finance.
Publication collections
Topics
- Adversarial Robustness and Privacy
- AI for Asset Management
- AI for Business Automation
- AI for Code
- AI for Supply Chain
- AI Testing
- Automated AI
- Causality
- Computer Vision
- Conversational AI
- Explainable AI
- Fairness, Accountability, Transparency
- Foundation Models
- Generative AI
- Granite
- Human-Centered AI
- Knowledge and Reasoning
- Machine Learning
- Natural Language Processing
- Neuro-symbolic AI
- Speech
- Trustworthy AI
- Trustworthy Generation
- Uncertainty Quantification