What’s next in AI is foundation models at scale
AI is revolutionizing how business gets done, but popular models can be costly and are often proprietary. At IBM Research, we’re designing powerful new foundation models and generative AI systems with trust and transparency at their core. We’re working to drastically lower the barrier to entry for AI development, and to do that, we’re committed to an open-source approach to enterprise AI.
Our work
- Deep Dive: The 2024 IBM Research annual letter, by Sriram Raghavan, Mukesh Khare, and Jay Gambetta
- Research: How analog in-memory computing could power the AI models of tomorrow, by Peter Hess
- Technical note: Introducing the GneissWeb dataset, by Hajar Emami Gohari, Swanand Kadhe, Syed Yousaf Shah, and Bishwaranjan Bhattacharjee
- Research: A benchmark for evaluating conversational RAG, by Kim Martineau
- Explainer: How low-precision computing boosts efficiency — without hurting accuracy, by Peter Hess
- News: Meet IBM’s new family of AI models for materials discovery, by Kim Martineau
- See more of our work on AI
Tools + code
Download Granite on Hugging Face
Explore our family of language, code, time series, and geospatial models (see the quick-start sketch after this list).

Try Granite for Free
Chat with a Granite model and learn how it can be used across a variety of applications.

Read Granite Documentation
Learn how to access, run, and start using the Granite family of AI models.
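
As a quick-start, here is a minimal sketch of loading a Granite language model from Hugging Face with the `transformers` library and generating a chat response. The model ID below is an example only; check the ibm-granite organization on Hugging Face and the Granite documentation for the current model names, sizes, and recommended settings.

```python
# Minimal sketch: run a Granite instruct model locally with Hugging Face transformers.
# Assumes `transformers`, `torch`, and `accelerate` are installed and the model ID is valid.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.1-8b-instruct"  # example ID; verify on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat prompt with the model's chat template, then generate a reply.
messages = [{"role": "user", "content": "Summarize what foundation models are."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
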
Publication collections
Topics
- Adversarial Robustness and Privacy
- AI for Asset Management
- AI for Business Automation
- AI for Code
- AI for Supply Chain
- AI Testing
- Automated AI
- Causality
- Computer Vision
- Conversational AI
- Explainable AI
- Fairness, Accountability, Transparency
- Foundation Models
- Generative AI
- Granite
- Human-Centered AI
- Knowledge and Reasoning
- Machine Learning
- Natural Language Processing
- Neuro-symbolic AI
- Speech
- Trustworthy AI
- Trustworthy Generation
- Uncertainty Quantification