Deep learning acceleration in a 14nm CMOS-compatible ReRAM array: device, material and algorithm co-optimization
Abstract
We show for the first time in hardware that, in contrast to conventional stochastic gradient descent (SGD), our modified SGD algorithm (TTv2), together with a co-optimized ReRAM material, achieves respectable accuracy (98%) on a reduced MNIST classification task (digits 0 and 1), approaching the floating-point (FP) baseline. To extrapolate these insights to larger DNN training workloads in simulation, we establish an analog switching test sequence and extract key device statistics from 6T1R ReRAM arrays (up to 2k devices) built on a 14nm CMOS baseline. Using these statistics, we find that for larger DNN workloads, device and algorithm co-optimization delivers dramatic accuracy improvements over standard SGD on baseline ReRAM. The remaining gap to the reference floating-point accuracy across the tested DNNs indicates that further material and algorithmic optimizations are still needed. This work shows a pathway toward scalable in-memory deep learning training using ReRAM crossbar arrays.
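As a rough illustration of the algorithmic idea summarized above, the sketch below contrasts a plain SGD weight update with a Tiki-Taka-v2-style update in which gradients are first accumulated on an auxiliary analog matrix and only periodically transferred, through a digital low-pass filter, into the weight matrix. The simple asymmetric device model, hyperparameters, and transfer schedule are illustrative assumptions, not the authors' implementation or the actual TTv2 algorithm details.

```python
# Toy sketch (not the paper's implementation): contrast plain SGD with a
# TTv2-flavoured update on a crude non-ideal analog device model.
import numpy as np

rng = np.random.default_rng(0)

def device_update(w, dw, noise=0.05, asym=0.3):
    """Write an update through a simplified non-ideal device:
    asymmetric up/down response plus additive write noise (assumed model)."""
    gain = np.where(dw > 0, 1.0 + asym, 1.0 - asym)
    return w + gain * dw + noise * np.abs(dw) * rng.standard_normal(w.shape)

def sgd_step(W, grad, lr=0.1):
    """Plain SGD: every gradient is written directly into the analog weights."""
    return device_update(W, -lr * grad)

def ttv2_step(W, A, H, grad, step, lr=0.1, beta=0.9, transfer_every=5):
    """TTv2-flavoured step: gradients accumulate on an auxiliary matrix A;
    a digitally filtered copy H is transferred into W only every few steps."""
    A = device_update(A, -lr * grad)          # noisy gradient accumulation
    H = beta * H + (1.0 - beta) * A           # digital low-pass filter
    if step % transfer_every == 0:
        W = device_update(W, 0.1 * H)         # sparse, filtered transfer
        A = np.zeros_like(A)                  # idealized accumulator reset
    return W, A, H

# Toy usage: drive W toward a random target with both schemes.
target = rng.standard_normal((4, 4))
W_sgd = np.zeros_like(target)
W_tt, A, H = (np.zeros_like(target) for _ in range(3))
for t in range(1, 101):
    W_sgd = sgd_step(W_sgd, W_sgd - target)
    W_tt, A, H = ttv2_step(W_tt, A, H, W_tt - target, t)
```

The point of the sketch is only the structural contrast: the direct-write path exposes every update to device non-idealities, whereas the accumulate-filter-transfer path averages over many noisy writes before committing them to the weights.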