Neural interaction transparency (NIT): Disentangling learned interactions for improved interpretability

Abstract

Neural networks are known to model statistical interactions, but they entangle the interactions at intermediate hidden layers for shared representation learning. We propose a framework, Neural Interaction Transparency (NIT), that disentangles the shared learning across different interactions to obtain their intrinsic lower-order and interpretable structure. This is done through a novel regularizer that directly penalizes interaction order. We show that disentangling interactions reduces a feedforward neural network to a generalized additive model with interactions, which can lead to transparent models that perform comparably to state-of-the-art models. NIT is also flexible and efficient; it can learn generalized additive models with maximum K-order interactions by training only O(1) models.
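The abstract's central mechanism, a regularizer that directly penalizes interaction order, can be sketched briefly. The PyTorch snippet below is a minimal illustration, not the paper's implementation: the GatedAdditiveNet model, its sigmoid feature gates, and the order_penalty hinge surrogate are all hypothetical names and design choices, meant only to show how a penalty can softly cap the number of features (the interaction order) each additive block uses at K.

# Minimal, illustrative sketch of penalizing interaction order.
# NOT the authors' NIT implementation: the gating scheme, the
# order_penalty surrogate, and all names below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAdditiveNet(nn.Module):
    """Sum of B subnetworks ("blocks"). Each block sees the input
    through learned per-feature gates, so a block's interaction
    order is roughly the number of gates that stay open."""

    def __init__(self, num_features, num_blocks=4, hidden=32):
        super().__init__()
        # One gate logit per (block, feature); sigmoid(logit) in (0, 1).
        self.gate_logits = nn.Parameter(torch.zeros(num_blocks, num_features))
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Linear(num_features, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(num_blocks)
        )

    def forward(self, x):
        gates = torch.sigmoid(self.gate_logits)  # shape (B, F)
        # Each block depends only on its gated features; the final
        # prediction is the additive sum over blocks, i.e. a
        # generalized additive model with interactions.
        outs = [blk(x * gates[b]) for b, blk in enumerate(self.blocks)]
        return torch.stack(outs, dim=0).sum(dim=0)

    def order_penalty(self, max_order):
        """Hinge penalty on any block whose soft interaction order
        (the sum of its open gates) exceeds max_order K."""
        soft_order = torch.sigmoid(self.gate_logits).sum(dim=1)  # (B,)
        return F.relu(soft_order - max_order).sum()

# Usage: add the order penalty to the task loss during training.
model = GatedAdditiveNet(num_features=10, num_blocks=4)
x, y = torch.randn(64, 10), torch.randn(64, 1)
loss = F.mse_loss(model(x), y) + 0.1 * model.order_penalty(max_order=2)
loss.backward()

In this sketch the network is additive over blocks by construction, and the penalty only constrains each block's order; NIT itself also learns how features group into blocks, so treat this purely as a demonstration of order penalization.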

Date

02 Dec 2018

Publication

NeurIPS 2018 (conference paper)
