Tri-HD: Energy-Efficient On-Chip Learning With In-Memory Hyperdimensional Computing
Abstract
The Internet of Things (IoT) has led to the emergence of big data. Processing this data, especially in learning algorithms, poses a challenge for current embedded computing systems. Brain-inspired hyperdimensional (HD) computing reduces several complex learning operations to simpler bitwise and arithmetic operations. However, it requires the use of high-dimensional vectors, called hypervectors, which further increase the amount of data to be processed. Processing in-memory (PIM) enables in-place computation, which reduces data movement, a major latency bottleneck in conventional systems. In this paper, we propose Tri-HD, an in-memory HD computing architecture that performs HD classification in memory. To the best of the authors' knowledge, Tri-HD is the first ReRAM PIM architecture to implement the complete HD computing-based classification pipeline, including encoding, training, retraining, and inference, for non-binary data. We also propose a novel distance metric that is PIM-friendly and provides application accuracy similar to that of the more complex baseline metric. Our proposed architecture is enabled by fast and energy-efficient in-memory logic operations. We exploit voltage threshold-based memristors to enable single-cycle operations. We also increase the amount of in-memory parallelism in our design by segmenting bitlines using switches. Our evaluation shows that, for all applications tested using HD, Tri-HD provides on average 434× (2170×) speedup and consumes 4114× (26019×) less energy compared to the CPU while running end-to-end HD training (inference). Tri-HD also achieves at least 2.2% higher classification accuracy than existing PIM-based HD designs.
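As background for the pipeline the abstract describes, the following is a minimal sketch of HD classification using only bitwise and arithmetic operations. It assumes random binary hypervectors, XOR binding, majority-vote bundling, and a Hamming-style distance metric; the dimensionality, feature count, and record-based encoding here are illustrative choices, not Tri-HD's exact design.

```python
# Minimal HD classification sketch (illustrative, not the paper's exact design).
import numpy as np

rng = np.random.default_rng(0)
D = 1024            # hypervector dimensionality (illustrative)
N_FEATURES = 8
N_CLASSES = 3

# Random binary "ID" hypervectors, one per feature position.
ids = rng.integers(0, 2, size=(N_FEATURES, D), dtype=np.int8)

def encode(sample_bits):
    """Bind each feature bit to its ID hypervector (XOR), then bundle by majority vote."""
    bound = ids ^ sample_bits[:, None]                            # bitwise binding
    return (bound.sum(axis=0) > N_FEATURES // 2).astype(np.int8)  # majority bundling

def hamming(a, b):
    """Hamming distance: count of differing bit positions."""
    return int(np.count_nonzero(a != b))

# Training: accumulate encoded samples into per-class model hypervectors,
# then binarize each model by per-class majority vote.
samples = rng.integers(0, 2, size=(30, N_FEATURES), dtype=np.int8)
labels = rng.integers(0, N_CLASSES, size=30)
accum = np.zeros((N_CLASSES, D), dtype=np.int32)
for x, y in zip(samples, labels):
    accum[y] += encode(x)
counts = np.bincount(labels, minlength=N_CLASSES)
models = (accum > counts[:, None] / 2).astype(np.int8)

# Inference: predict the class whose model is nearest in Hamming distance.
query = encode(samples[0])
pred = int(np.argmin([hamming(query, m) for m in models]))
```

Because encoding, training, and inference reduce to XOR, popcount-style sums, and comparisons, each step maps naturally onto in-memory bitwise logic, which is the property PIM architectures exploit.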