LogicNets vs. ULEEN: Comparing two novel high-throughput edge ML inference techniques on FPGA
Abstract
With the advent of Internet-of-Things (IoT) and edge computing devices, there has been growing demand for low-power, high-throughput machine learning inference at the edge. However, the trend of ever-increasing model sizes and computational cost makes it increasingly difficult to deploy state-of-the-art models on edge computing devices. Recently, there has been renewed interest in lookup table (LUT)-based ML models, which replace the weighted-addition operations of artificial neurons with lookup operations. Such models are well suited to edge FPGAs, both because of the FPGAs' underlying LUT-based architecture and because of their potential for low energy consumption. LogicNets and ULEEN are two such LUT-based model architectures that claim to offer high-throughput, low-energy inference. They extend contrasting paradigms, Deep Neural Networks and Weightless Neural Networks respectively, which makes it difficult to judge which is the more suitable choice. In this paper, we compare the two architectures and evaluate them on high-throughput inference use cases. When evaluated on intrusion-detection and physics-experiment classification tasks, our results suggest that ULEEN outperforms LogicNets in hardware and energy requirements, making it well suited for edge deployment, albeit with a slight drop in accuracy on some datasets.