Eric J. Fluhr, Joshua Friedrich, et al.
ISSCC 2014
A multi-TOPS AI core is presented for acceleration of deep learning training and inference in systems ranging from edge devices to data centers. With a programmable architecture and custom ISA, this engine achieves >90% sustained utilization across a range of neural network topologies by employing a dataflow architecture and an on-chip scratchpad hierarchy. Compute precision is optimized at 16b floating point (fp16) for high model accuracy in training and inference, as well as 1b/2b (binary/ternary) integer for aggressive inference performance. At 1.5 GHz, the AI core prototype achieves 1.5 TFLOPS fp16, 12 TOPS ternary, or 24 TOPS binary peak performance in 14nm CMOS.
Sae Kyu Lee, Ankur Agrawal, et al.
IEEE JSSC
Matt Ziegler, Hung-Yi Liu, et al.
ISLPED 2016
Ankur Agrawal, Chia-Yu Chen, et al.
DAC 2017