AI Hardware
The world is generating reams of data each day, and the AI systems built to make sense of it all constantly need faster and more robust hardware. We’re developing new devices and architectures to support the tremendous processing power AI requires to realize its full potential.
Our work
- News · Peter Hess: Expanding AI model training and inference for the open-source community
- News · Peter Hess: Building the IBM Spyre Accelerator
- Release · Peter Hess: IBM researchers win prestigious European grants
- News · Peter Hess and Mike Murphy: How the IBM Research AI Hardware Center is building tomorrow’s processors
- Deep Dive · Peter Hess: Reimagining storage for the generative AI era
- Research · Talia Gershon, Mike Murphy, Swaminathan Sundararaman, Haris Pozidis, and Khanh Ngo
- See more of our work on AI Hardware
Projects
Neuro-inspired AI to optimize the learning and computing efficiency of next-generation AI.
Analog AI: A New Design Paradigm
At IBM Research, we’re developing a new class of Analog AI hardware, purpose-built to help innovators realize the promise of the next stages of AI. Journey inside this unique architecture.
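To give a flavor of the idea behind analog in-memory computing: weights are stored as conductances in a crossbar of resistive devices, inputs are applied as voltages, and Ohm's and Kirchhoff's laws sum the resulting currents, computing a matrix-vector product in a single analog step. The sketch below is purely illustrative (not IBM's implementation); the additive-noise term is a common first-order stand-in for device conductance variability.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(weights, x, noise_std=0.05):
    """Illustrative model of an analog crossbar matrix-vector multiply:
    the ideal product plus Gaussian noise representing device variability.
    `noise_std` is a hypothetical parameter, scaled to the output magnitude."""
    ideal = weights @ x
    scale = noise_std * max(np.abs(ideal).max(), 1e-12)
    return ideal + rng.normal(0.0, scale, size=ideal.shape)

# A toy 4x8 weight matrix "programmed" into the array, and an input vector.
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)

print(analog_matvec(W, x))  # noisy analog result
print(W @ x)                # exact digital reference
```

The appeal is that the multiply-accumulate happens where the weights live, avoiding the memory-to-processor data movement that dominates energy use in conventional AI accelerators.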
Publications
Transfer Learning on Edge Using 14nm CMOS-compatible ReRAM Array and Analog In-memory Training Algorithm
- Takashi Ando
- Omobayode Fagbohungbe
- et al.
- 2025
- IEDM 2025
In-memory Computing Approaches for Large Language Model Acceleration
- 2025
- IEDM 2025
Co-packaged optics module with single mode polymer waveguide
- Akihiro Horibe
- Yoichi Taira
- et al.
- 2025
- IEDM 2025
Analog Foundation Models
- Julian Büchel
- Iason Chalas
- et al.
- 2025
- NeurIPS 2025
Phase Change Memory Materials and Structural Design for Analog In-Memory Computing Applications
- Matthew BrightSky
- Amlan Majumdar
- et al.
- 2025
- MRS Fall Meeting 2025
Max-Cut Solving with Spiking Boltzmann Machine on Phase-Change Memory-Based Neuromorphic Hardware
- Yu Gyeong Kang
- Masatoshi Ishii
- et al.
- 2025
- MRS Fall Meeting 2025
AI Hardware Center
The IBM Research AI Hardware Center is a global research collaboration hub dedicated to creating the next generation of systems and chips for AI workloads, as well as expanding joint research efforts across technology, architecture, and algorithms.