30 Mar 2023
Technical note
2 minute read

Disentangling visual attributes with neuro-vector-symbolic architectures, in-memory computing, and device noise

Deep Neural Networks (DNNs) are powerful AI tools that can extract useful representations from unstructured data. These representations can then be used for tasks like classification or object detection. One problem is that the representations a DNN has “learned” become entangled in complex ways that may suit a specific task but make it difficult to generalize to even slightly different situations. A promising approach is to use disentangled representations, in which the various attributes of knowledge are represented separately and can be flexibly recombined to represent novel experiences.

A mechanism that can disentangle representations is therefore essential for generalization. The entanglement and disentanglement of neurally encoded information can be cast as the multiplication and factorization, respectively, of large holographic vectors representing neural activities.
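As a rough NumPy sketch of this framing (our own toy example, not taken from the paper): binding two random bipolar vectors with the Hadamard product produces a vector that is quasi-orthogonal to each factor, so neither attribute can be read out directly, while unbinding with one known factor exactly recovers the other. When no factor is known, recovering them becomes a factorization problem.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # dimensionality of the holographic (bipolar) vectors

# Two attribute vectors, e.g. one for "shape" and one for "color"
shape = rng.choice([-1, 1], size=d)
color = rng.choice([-1, 1], size=d)

# Entanglement: binding via the elementwise (Hadamard) product
product = shape * color

# The bound vector is quasi-orthogonal to each of its factors ...
print(np.dot(product, shape) / d)  # ~0: "shape" is not directly readable
print(np.dot(product, color) / d)  # ~0: neither is "color"

# ... but unbinding with one factor exactly recovers the other, because
# bipolar vectors are their own multiplicative inverse.
print(np.array_equal(product * color, shape))  # True
```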

In a paper published in Nature Nanotechnology today, we present an efficient compute engine for disentangling data-driven holographic representations by exploiting the intrinsic stochasticity associated with analog in-memory computing based on nanoscale memristive devices.

Our team has been researching emerging computing paradigms such as neuro-vector-symbolic architectures and implementing such models on analog in-memory computing (AIMC) hardware. Inspired by vector-symbolic architectures, an elegant dynamical system dubbed the “resonator network” was devised; it iteratively solves a particular factorization problem in which the factors assume holographic distributed representations. Although effective, the dynamics of resonator networks make them vulnerable to searching indefinitely over a subset of incorrect estimates, a phenomenon known as limit cycles. Moreover, the linear activation applied to the attention values leads to slow convergence and low overall accuracy.
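For intuition, here is a minimal software sketch of such a resonator network with the linear attention activation described above; the codebook sizes, dimensionality, and stopping rule are illustrative choices of ours, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, M = 1024, 16  # vector dimensionality, candidates per codebook

# Three codebooks of random bipolar candidate vectors (one per column)
X, Y, Z = (rng.choice([-1, 1], size=(d, M)) for _ in range(3))

# Ground-truth factors and the product vector to be factorized
ix, iy, iz = 3, 7, 11
s = X[:, ix] * Y[:, iy] * Z[:, iz]

# Initialize each estimate as the superposition of all candidates
x_hat = np.where(X.sum(axis=1) >= 0, 1, -1)
y_hat = np.where(Y.sum(axis=1) >= 0, 1, -1)
z_hat = np.where(Z.sum(axis=1) >= 0, 1, -1)

for step in range(100):
    # Unbind the other two estimates, compute attention values
    # (similarities to every candidate), and use them linearly
    # to form the next bipolar estimate.
    a_x = X.T @ (s * y_hat * z_hat)
    x_hat = np.where(X @ a_x >= 0, 1, -1)
    a_y = Y.T @ (s * x_hat * z_hat)
    y_hat = np.where(Y @ a_y >= 0, 1, -1)
    a_z = Z.T @ (s * x_hat * y_hat)
    z_hat = np.where(Z @ a_z >= 0, 1, -1)
    # For the demo we stop once the ground-truth factors dominate;
    # in practice convergence is detected when the estimates stop changing.
    if (a_x.argmax(), a_y.argmax(), a_z.argmax()) == (ix, iy, iz):
        print(f"factors recovered after {step + 1} iterations")
        break
```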

In this work, we propose to enhance resonator networks by implementing them on AIMC hardware, which can naturally harness intrinsic device noise to keep the networks from getting stuck in limit cycles, and to accelerate convergence by supplying non-linear activation functions. We found that the stochasticity inevitably present in AIMC hardware is not a curse but a blessing: it paves the way to solving combinatorial problems at least five orders of magnitude larger than those previously solvable within the given constraints. Our enhanced in-memory factorizer also reduces the space and time complexity associated with the factorization problem.
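On the AIMC chip, the similarity and projection steps are carried out as analog matrix-vector multiplications on memristive crossbar arrays, so the attention values naturally come out noisy. In software, the effect can be crudely emulated by perturbing the attention values and sparsifying them before they are projected back onto the codebook. The Gaussian read-noise model, the threshold value, and the restart-on-empty rule below are our own stand-ins for what the analog crossbars and the on-chip activation actually do:

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_sparse_attention(codebook, query, noise_std=0.02, threshold=0.1):
    """One attention step of the enhanced factorizer (software emulation).

    The similarity search is assumed to run on an analog crossbar, so
    Gaussian noise is added to the attention values; a threshold then
    sparsifies them so that only strong candidates are fed back. Both
    the noise model and the threshold value are illustrative.
    """
    d = codebook.shape[0]
    a = codebook.T @ query / d                    # normalized similarities
    a = a + rng.normal(0.0, noise_std, a.shape)   # intrinsic device noise
    a[a < threshold] = 0.0                        # non-linear (threshold) activation
    if not np.any(a):                             # nothing above threshold: let the
        a = rng.normal(0.0, noise_std, a.shape)   # noise alone drive a fresh guess
    return np.where(codebook @ a >= 0, 1, -1)     # new bipolar estimate

# Drop-in replacement for the linear update in the previous sketch:
#   x_hat = noisy_sparse_attention(X, s * y_hat * z_hat)
```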

On a real AIMC chip based on phase-change memory, developed within the IBM Research AI Hardware Center, we demonstrated that a factorization problem with a search space of 16 million combinations can be solved in real time.

The proposed in-memory factorizer is capable of working with noisy product vectors. We have already demonstrated one application in which, coupled with a convolutional neural network, it disentangles the network's perceptual representations.
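As a self-contained toy of such a coupling (the CNN front end is replaced here by a stand-in that emits a noisy product vector, an exhaustive search stands in for the in-memory factorizer, and the attribute names and codebook sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4096
codebooks = {  # one codebook per visual attribute
    "shape":    rng.choice([-1, 1], size=(d, 8)),
    "color":    rng.choice([-1, 1], size=(d, 8)),
    "position": rng.choice([-1, 1], size=(d, 8)),
}

def fake_cnn_output(shape_i, color_i, pos_i, flip_prob=0.2):
    """Stand-in for the CNN: a product vector with a fraction of bits
    flipped, mimicking the noisy query a trained network would produce."""
    q = (codebooks["shape"][:, shape_i]
         * codebooks["color"][:, color_i]
         * codebooks["position"][:, pos_i])
    flips = rng.random(d) < flip_prob
    return np.where(flips, -q, q)

def factorize_exhaustively(query):
    """Tiny brute-force factorizer (the in-memory factorizer replaces this
    search with iterative analog matrix-vector multiplications)."""
    best, best_sim = None, -np.inf
    for i in range(8):
        for j in range(8):
            for k in range(8):
                cand = (codebooks["shape"][:, i]
                        * codebooks["color"][:, j]
                        * codebooks["position"][:, k])
                sim = np.dot(query, cand)
                if sim > best_sim:
                    best, best_sim = (i, j, k), sim
    return best

query = fake_cnn_output(2, 5, 1)
print(factorize_exhaustively(query))  # (2, 5, 1) despite 20% of bits flipped
```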

In general, factorizing product vectors constructed by binding (via the Hadamard product) randomly drawn vectors that exhibit no correlational structure is a hard combinatorial search problem. We have shown that our in-memory factorizer is an efficient engine for solving one such instance of this hard search problem.
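To get a feel for the scale (with illustrative parameters of our own choosing: for instance, three codebooks of 256 candidate vectors each already give a search space of about 16.8 million product vectors, on the order of the on-chip demonstration mentioned above), a brute-force search would have to test every recombination, whereas the factorizer only ever stores the individual codebooks:

```python
# Search-space size vs. storage for an illustrative problem
# (M candidate vectors per codebook, F codebooks, d-bit bipolar vectors).
M, F, d = 256, 3, 10_000
print(f"product vectors a brute-force search must test: {M ** F:,}")  # 16,777,216
print(f"codebook vectors the factorizer stores:         {M * F:,}")   # 768
print(f"codebook storage:                               {M * F * d:,} bits")
```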