The future of AI research comes to Albany

A cluster of AIU chips, designed by IBM Research to spur on innovations in AI, will be installed at the University at Albany.

At the heart of America’s semiconductor heritage sits the University at Albany. Within those walls will soon be a cluster of new processors that may well help uncover a path for the future of AI hardware development. And those processors were designed just down the road by IBM.

Back in October, New York Governor Kathy Hochul announced the Center for Emerging Artificial Intelligence Systems (CEAIS) at the University at Albany, a $20 million collaboration between the university and IBM that aims to power the future of AI research with advanced cloud computing and emerging hardware out of the IBM Research AI Hardware Center, within the Albany NanoTech Complex.

Today, IBM and the University at Albany are announcing the first step on the journey toward powering tomorrow's AI research. The university is installing a cluster of IBM AIU prototype chips on its Uptown Campus, which will enable students and researchers to run complex AI models in a quest to advance the state of the art in generative AI.

This will be the first installation of the IBM AIU on a university campus. A cluster of AIUs was recently installed at IBM Research's headquarters in Yorktown Heights, New York, where it is being used for internal production workloads for IBM's watsonx. In early tests, the AIU cluster has shown throughput and accuracy comparable to powerful GPUs, while using far less power.

[Image] A prototype IBM AIU PCIe card.

IBM first unveiled the AIU in late 2022, and over the last year, researchers have worked to create a system designed specifically to handle AI tasks more efficiently than traditional processors. The AIU was built from the ground up for AI, and it has proven especially capable at inference.

With the AIU, researchers designed chips with their memory and processing units placed closer together, lowering latency and speeding up computation. The AIU system-on-a-chip (SoC) has 32 individual AI processing cores, contains 23 billion transistors, and was built using a 5 nm process node. (For reference, IBM Research is currently investigating the path to chips smaller than 1 nm.) The AIU fits on a single-slot, half-length PCIe card, making it easy to mount in standard server racks.

[Image] An example of an IBM AIU infrastructure deployment in a data center.

The AIU is well suited to AI inference tasks and can be used in concert with GPUs, which handle training, to build a more efficient generative AI infrastructure. This cluster will be the first installation in the CEAIS since the center was unveiled last year. Once installed and running, the AIU cluster will be a valuable tool for students, faculty, and researchers working on the AI technologies that will power the future.