Publication
ISCAS 2020
Talk

Simplifying Neural Networks via Look Up Tables and Product of Sums Matrix Factorizations

Abstract

We study two approaches, called TableNet and ProdSumNet respectively, to simplify the implementation of neural networks. First, we use look-up tables (LUTs) to remove multiplication operations and obviate the need for a hardware multiplier. We examine the tradeoffs of this approach in terms of accuracy versus LUT size and number of operations, and show that accuracy similar to that of a full-precision linear classifier can be obtained with a comparable memory footprint, but without any multipliers. Second, we reduce the number of trainable model parameters by decomposing the linear operators in a neural network as a product of sums of simpler linear operators, a formulation that generalizes recently proposed deep learning architectures such as CNNs, KFC, and dilated CNNs. We show that good accuracy on MNIST and Fashion-MNIST can be obtained with a relatively small number of trainable parameters. In addition, since the convolutional layer is resource-heavy to implement, we also consider an implementation in the transform domain that obviates the need for convolutional layers. We illustrate the tradeoff between the number of trainable parameters and the resulting error rate. As an example, by applying this decomposition to a reference CNN architecture for MNIST with over 3 million trainable parameters, we obtain 98.44% accuracy using only 3554 trainable parameters. Furthermore, the generality of the framework makes it suitable for general problems, unlike CNNs, which perform best in image processing and other shift-invariant domains.
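The two directions can be illustrated with short sketches. First, a minimal NumPy sketch of the TableNet direction, assuming a simple uniform quantization of weights and inputs: all pairwise products of the quantization levels are precomputed in a look-up table, so the forward pass of a linear layer uses only table lookups and additions. The grid sizes, layer shapes, and function names below are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

# Hypothetical multiplier-free linear layer: weights and inputs are quantized
# to small index sets and every possible level-by-level product is precomputed
# once, so inference performs only table lookups and additions.
rng = np.random.default_rng(0)

n_in, n_out = 784, 10            # e.g. flattened MNIST image -> 10 classes
n_w_levels, n_x_levels = 16, 16  # quantization levels (illustrative)

w_levels = np.linspace(-1.0, 1.0, n_w_levels)   # weight quantization grid
x_levels = np.linspace(0.0, 1.0, n_x_levels)    # input quantization grid

# LUT of all weight-level x input-level products, computed once offline.
lut = np.outer(w_levels, x_levels)              # shape (n_w_levels, n_x_levels)

# Quantized weights are stored only as indices into w_levels.
w_idx = rng.integers(0, n_w_levels, size=(n_out, n_in))

def lut_linear(x):
    """Evaluate the quantized linear layer using lookups and additions only."""
    x_idx = np.abs(x[None, :] - x_levels[:, None]).argmin(axis=0)  # nearest level
    return lut[w_idx, x_idx[None, :]].sum(axis=1)

y = lut_linear(rng.random(n_in))
print(y.shape)  # (10,)
```

Second, a sketch of the ProdSumNet idea under the assumption that the "simpler linear operators" are Kronecker products of small matrices (the KFC-style special case the framework generalizes); the factor count, term count, and dimensions below are illustrative.

```python
import numpy as np

# Hypothetical product-of-sums linear operator on flattened 28x28 inputs:
# a product of factors, each factor being a sum of Kronecker products
# A_k (x) B_k of small matrices, applied without forming the dense matrix.
rng = np.random.default_rng(1)

m, n = 28, 28           # input treated as an m x n array
num_factors = 2         # factors in the product
terms_per_factor = 3    # Kronecker terms summed inside each factor

factors = [
    [(rng.standard_normal((m, m)) / m, rng.standard_normal((n, n)) / n)
     for _ in range(terms_per_factor)]
    for _ in range(num_factors)
]

def apply_factor(terms, x):
    """Apply sum_k (A_k kron B_k) to a flattened x via A @ X @ B.T reshaping."""
    X = x.reshape(m, n)
    return sum(A @ X @ B.T for A, B in terms).reshape(-1)

def product_of_sums(x):
    """Apply the factors in sequence: the product of the sums."""
    for terms in factors:
        x = apply_factor(terms, x)
    return x

y = product_of_sums(rng.standard_normal(m * n))

dense_params = (m * n) ** 2
structured_params = num_factors * terms_per_factor * (m * m + n * n)
print(dense_params, structured_params)  # 614656 vs 9408
```

The printed counts show where savings of the kind quoted in the abstract come from: a dense 784 x 784 operator has 614,656 parameters, while this structured operator is described by 2 x 3 x (28^2 + 28^2) = 9,408, and training then optimizes only the small matrices.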

Date

28 Sep 2020
