AIHWKIT-Lightning: A Scalable HW-Aware Training Toolkit for Analog In-Memory Computing
Abstract
We introduce AIHWKIT-Lightning, a new toolkit designed for efficient and scalable hardware-aware training of large neural networks deployed on Analog In-Memory Computing (AIMC)-based hardware. The toolkit prioritizes speed and ease of use, addressing the limitations of existing frameworks in training Large Language Models (LLMs) with billions of parameters. AIHWKIT-Lightning leverages dedicated GPU kernels and a streamlined implementation to achieve up to 3.7x faster training with lower memory consumption than state-of-the-art toolkits. Benefiting from this increased scalability, we demonstrate near-iso-accuracy on the GLUE benchmark with a RoBERTa model trained on 11B tokens. The toolkit is publicly available at github.com/IBM/aihwkit-lightning.