AI’s potential to streamline ever more tasks doesn’t come free. To satisfy the energy appetite of increasingly complex AI models, we have to keep developing more efficient AI hardware.
Enter our latest innovation: the AI Hardware Composer for analog AI hardware.
The tool, unveiled at the two-year anniversary of the IBM Research AI Hardware Center, lets users master and accelerate the AI hardware technology that will power more sustainable AI models. It’s one of many developments to come out of the AI Hardware Center, launched in 2019 to innovate across materials, devices, architecture and algorithms. The Center aims to improve AI hardware compute efficiency by 2.5 times every year for a decade, reaching a thousand-fold performance improvement by 2029.
While the Composer doesn’t by itself produce a hardware performance benefit, it can lead to one indirectly. With access to IBM’s open-source analog libraries and an easy-to-use interface, it allows both coding novices and experienced developers to build neural networks and tune analog devices to create accurate AI models. Through education and experimentation, we hope the Composer will accelerate analog AI exploration and discovery, and inspire application developers to create new software based on tomorrow’s AI hardware needs.
With the Composer and the IBM Analog Hardware Acceleration Kit (AIHWKIT), users can explore how this next-generation analog hardware can support their workloads. For example, AI researchers can test our neural network optimization tools to design analog hardware-aware models. Some universities, such as AI Hardware Center member RPI, are already designing coursework for students to learn how to exploit such models. The students can experiment with them on the AiMOS and AiMOSx supercomputers and contribute to the open-source community.
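To give a flavor of the kind of experiment the Composer and AIHWKIT enable, here is a minimal, pure-Python sketch of evaluating a layer’s output under simulated analog device noise. The function name, the toy weights, and the multiplicative Gaussian noise model are our own illustrative assumptions, not the kit’s actual API:

```python
import random

def analog_matvec(weights, x, noise_std=0.05, rng=None):
    """Matrix-vector product in which each stored weight is perturbed by
    multiplicative Gaussian noise -- a simple stand-in for the conductance
    variability of an analog crossbar device."""
    rng = rng or random.Random(0)
    out = []
    for row in weights:
        acc = 0.0
        for w, xi in zip(row, x):
            noisy_w = w * (1.0 + rng.gauss(0.0, noise_std))
            acc += noisy_w * xi
        out.append(acc)
    return out

# Compare the ideal (digital) and noisy (analog) outputs for a toy layer.
weights = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.4]]
x = [1.0, 2.0, 3.0]
ideal = analog_matvec(weights, x, noise_std=0.0)
noisy = analog_matvec(weights, x, noise_std=0.05)
drift = max(abs(a - b) for a, b in zip(ideal, noisy))
print(ideal)  # the exact digital result
print(drift)  # the deviation introduced by simulated device noise
```

In practice, researchers sweep the noise level and then retrain or tune the model until its accuracy under noise matches the digital baseline.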
Hardware and hybrid cloud
When the AI Hardware Center launched, IBM partnered with six founding members — Samsung, Synopsys, Tokyo Electron Limited, Applied Materials, SUNY Polytechnic Institute and Rensselaer Polytechnic Institute (RPI) — to create what’s next in AI computing. Fast forward two years, and we continue to exceed our roadmap of a 2.5-times annual improvement in AI hardware compute efficiency. For example, our recent results on the third generation of our digital AI core, unveiled at ISSCC 2021, detail how we doubled our performance-efficiency projection for training and outperformed our roadmap projection for inference sixfold.
Our membership has since grown to 16 industry and academic members, all aiming to advance the AI hardware ecosystem. For instance, Tokyo Electron (TEL) has been both a collaborator and a consumer across all of the AI Hardware Center’s research programs. Its team has been working on the AiMOS supercomputer at RPI, using IBM Research AI software to speed up its own modeling and simulation work.
Meanwhile Cadence, one of our newest members, is focused on electronic design automation and computational software to boost design productivity. Its work is aimed at ensuring that design automation tools can handle new technology requirements for our AI cores, including aggressively balancing the performance and energy trade-offs of our inference-optimized cores. The company brings more than 30 years of design expertise to help the Center drive the multi-core integration of our future designs.
By partnering with our members across the entire AI hardware and software ecosystem, we aim to develop robust capabilities that can be deployed across hybrid cloud infrastructures. In the year ahead, we plan to bring our digital AI cores to market. We’re also working with IBM AI Hardware Center member Red Hat to create an OpenShift-compatible software stack that allows seamless integration of our AI hardware into the hybrid cloud.
Another key area in our journey is AI model evolution. With each new model, we evaluate how well our hardware and software perform, and in some cases update models to be “hardware-aware,” deriving performance and power efficiencies from the hardware without sacrificing accuracy. To address this challenge, we’re working with our AutoAI colleagues at IBM Research to automate model adaptation, so we can more quickly assess and adapt to the ever-shifting AI landscape.
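One widely used way to make a model hardware-aware is to inject device-like noise into the weights during training, so that the learned solution tolerates the perturbations an analog device introduces at inference time. Below is a self-contained stdlib-Python sketch of this idea; the one-parameter model, the noise model, and the hyperparameters are illustrative assumptions of ours, not IBM’s actual tooling:

```python
import random

def train_hardware_aware(data, epochs=200, lr=0.05, noise_std=0.1, seed=1):
    """Fit y = w * x by SGD, but run each forward pass with a noisy copy of
    the weight, so the learned w tolerates the weight perturbations an
    analog device would introduce at inference time."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            # Simulated device noise: multiplicative Gaussian on the weight.
            w_noisy = w * (1.0 + rng.gauss(0.0, noise_std))
            err = w_noisy * x - y
            # Gradient step applied to the ideal (stored) weight.
            w -= lr * err * x
    return w

# Toy data generated by y = 2x: despite the injected noise, training
# should still recover a weight close to 2.
data = [(x / 10.0, 2.0 * x / 10.0) for x in range(1, 11)]
w = train_hardware_aware(data)
print(w)
```

Because the injected noise has zero mean, the expected gradient still points toward the noise-free optimum, so the model converges while becoming robust to weight variability.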