Publication
IEEE Data Eng. Bull.
Paper
A Survey on Explainability of Graph Neural Networks
Abstract
Graph neural networks (GNNs) are powerful graph-based deep-learning models that have been applied to a wide range of real-world tasks, including natural language processing, drug discovery, and recommendation systems. However, combining feature information with combinatorial graph structures has produced complex, non-linear GNN models, making it increasingly difficult to understand both a GNN's working mechanism and the basis of its predictions. In recent years, researchers have begun to build explainer methods to understand GNNs. Explainable GNNs not only improve security but also enhance trust in their recommendations. In this work, we provide a comprehensive survey of explainer approaches for GNNs across various graph-related tasks. We categorize the literature based on the proposed explainer methods, the types of explanations, and the applications.