Laura Bégon-Lours, Mattia Halter, et al.
MRS Spring Meeting 2023
This paper presents a tensor-based algorithm that leverages a hardware accelerator for inference with decision-tree-based machine learning models. The algorithm has been integrated into a public software library and is demonstrated on an IBM z16 server, using the Telum processor with the Integrated Accelerator for AI. We describe the architecture and implementation of the algorithm and present experimental results showing superior runtime performance compared with popular CPU-based machine learning inference implementations.
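The abstract does not reproduce the algorithm itself, but the general idea of recasting decision-tree inference as dense tensor operations can be illustrated with a small, self-contained sketch. The NumPy example below evaluates a toy regression tree entirely through matrix multiplications and elementwise comparisons, in the style of GEMM-based tree compilation; the tree, the matrices A, B, C, D, E, and the predict function are invented for illustration and are not the algorithm, data layout, or API of the paper or of the software library it mentions.

```python
# Illustrative sketch only: a toy decision tree evaluated with tensor
# operations (GEMM-style), NOT the paper's algorithm or library API.
import numpy as np

# Toy tree over 3 features, 2 internal nodes, 3 leaves:
#         x[0] < 0.5
#        /          \
#   x[2] < 1.0     leaf 2
#    /      \
# leaf 0   leaf 1
A = np.array([[1.0, 0.0],       # feature-to-internal-node selection
              [0.0, 0.0],
              [0.0, 1.0]])
B = np.array([0.5, 1.0])        # split threshold of each internal node
C = np.array([[ 1.0,  1.0, -1.0],   # +1: leaf in left subtree of node,
              [ 1.0, -1.0,  0.0]])  # -1: right subtree, 0: not on path
D = np.array([2.0, 1.0, 0.0])   # number of "go left" decisions per leaf path
E = np.array([[0.1], [0.7], [0.9]])  # leaf output values (regression)

def predict(X):
    """Evaluate the toy tree on a batch X of shape (n_samples, 3)."""
    decisions = (X @ A < B).astype(X.dtype)          # node tests as a GEMM
    leaf_hits = (decisions @ C == D).astype(X.dtype)  # one-hot reached leaf
    return leaf_hits @ E                              # gather leaf values

X = np.array([[0.2, 0.0, 0.3],
              [0.2, 0.0, 2.0],
              [0.9, 0.0, 0.0]])
print(predict(X))   # -> [[0.1], [0.7], [0.9]]
```

Expressing traversal this way turns an irregular, branch-heavy workload into a few dense matrix products, which is what makes a matrix-multiplication accelerator such as the Telum Integrated Accelerator for AI applicable to tree ensembles in the first place.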
Geoffrey Burr, Sidney Tsai, et al.
CICC 2025
Ying Zhou, Gi-Joon Nam, et al.
DAC 2023
Pal Varga, Georgios Kathareios, et al.
CNSM 2017