Publication
MASS 2019
Conference paper

Using graphical models as explanations in deep neural networks

Abstract

Despite its remarkable success, deep learning currently operates largely as a black box. Can models instead produce explicit reasons to explain their decisions? To address this question, we propose to exploit probabilistic graphical models, which are declarative representations of our understanding of the world (e.g., what the relevant variables are and how they interact with each other) and are commonly used to perform causal inference. More specifically, we propose a novel architecture called Deep Explainable Bayesian Networks, whose main idea is to concatenate a deep network with a Bayesian network and to rely on the latter to provide the explanations. We conduct extensive experiments on classical image and text classification tasks. First, the results show that deep explainable Bayesian networks achieve accuracy comparable to models trained on the same datasets without producing explanations. Second, the experiments show promising results: the average accuracy of the explanations ranges from 68.3% to 84.8%.
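The architecture described above can be pictured roughly as follows. This is a minimal sketch under my own assumptions (the AttributeNet module, the CPT values, and the classify_with_explanation helper are hypothetical), not the authors' implementation: a deep network predicts intermediate attribute variables, and a small discrete Bayesian network over those attributes yields the class prediction, with the attribute assignment that contributes most to that class returned as the explanation.

```python
# Minimal sketch, assuming a setup of my own (hypothetical AttributeNet, CPT values,
# and classify_with_explanation helper); not the authors' implementation.
import itertools

import torch
import torch.nn as nn


class AttributeNet(nn.Module):
    """Deep front end: image -> P(attribute_i = 1 | image) for each binary attribute."""

    def __init__(self, num_attributes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_attributes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.features(x)))


# Hypothetical CPT P(class | a1, a2, a3) for 2 classes and 3 binary attributes,
# indexed as cpt[a1, a2, a3, class]; in the paper this structure would be learned.
cpt = torch.full((2, 2, 2, 2), 0.5)
cpt[1, 1, :, :] = torch.tensor([0.9, 0.1])  # a1 and a2 present -> class 0 is likely
cpt[0, :, 1, :] = torch.tensor([0.2, 0.8])  # a3 present without a1 -> class 1 is likely


def classify_with_explanation(image: torch.Tensor, net: AttributeNet, cpt: torch.Tensor):
    """Return the predicted class and the attribute assignment used as its explanation."""
    p_attr = net(image.unsqueeze(0)).squeeze(0)  # P(attribute_i = 1 | image)
    assignments = list(itertools.product([0, 1], repeat=p_attr.numel()))
    # Probability of each joint attribute assignment under the deep network's outputs.
    weights = torch.stack([
        torch.prod(torch.tensor([p if a == 1 else 1 - p
                                 for a, p in zip(assignment, p_attr.tolist())]))
        for assignment in assignments
    ])
    # Marginalise the attributes out to obtain P(class | image).
    class_probs = torch.stack([w * cpt[a] for w, a in zip(weights, assignments)]).sum(dim=0)
    predicted = int(class_probs.argmax())
    # Explanation: the attribute assignment contributing most to the predicted class.
    contributions = torch.stack([w * cpt[a][predicted] for w, a in zip(weights, assignments)])
    explanation = assignments[int(contributions.argmax())]
    return predicted, explanation


# Usage: a random tensor stands in for a real 32x32 RGB image.
net = AttributeNet()
label, attrs = classify_with_explanation(torch.randn(3, 32, 32), net, cpt)
print(f"predicted class {label}; explanation: attribute assignment {attrs}")
```

In this reading, the deep network handles perception while the Bayesian network handles the final decision over human-interpretable variables, which is what makes the explanation explicit.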

Date

01 Nov 2019