KerGNNs: Interpretable Graph Neural Networks with Graph Kernels
Abstract
Graph kernels have historically been the most widely used techniques for graph classification tasks. However, these methods suffer from limited performance because they rely on hand-crafted combinatorial features of graphs. In recent years, graph neural networks (GNNs) have become the state-of-the-art method for downstream graph-related tasks due to their superior performance. Most GNNs are based on the message passing neural network (MPNN) framework. However, recent studies show that MPNNs cannot exceed the power of the Weisfeiler-Lehman (WL) algorithm in the graph isomorphism test. To address the limitations of existing graph kernel and GNN methods, in this paper we propose a novel GNN framework, termed \textit{Kernel Graph Neural Networks} (KerGNNs), which integrates graph kernels into the message passing process of GNNs. Inspired by convolution filters in convolutional neural networks (CNNs), KerGNNs adopt trainable hidden graphs as graph filters and compute graph kernels between these filters and local subgraphs of the input graph. In addition, we show that MPNNs can be viewed as special cases of KerGNNs. We apply KerGNNs to multiple graph prediction tasks and use cross-validation to make fair comparisons with benchmarks. We show that our method achieves competitive performance compared with existing state-of-the-art methods, demonstrating the potential to increase the representation ability of GNNs. We also show that the trained graph filters in KerGNNs can reveal the local graph structures of the dataset, which significantly improves model interpretability compared with conventional GNN models.
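To make the graph-filter idea concrete, below is a minimal PyTorch sketch of one kernel-based message passing layer: a single trainable hidden graph is compared against each node's 1-hop ego subgraph using a P-step random walk kernel, and the kernel value becomes the node's new embedding. The class name `KerGNNLayerSketch`, the dense-adjacency interface, the single hidden graph, and the specific kernel and walk length are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class KerGNNLayerSketch(nn.Module):
    """A minimal sketch of one kernel-based message passing layer.

    A trainable hidden graph (node features W_x, soft adjacency W_adj) acts
    as a graph filter; each node's output is the value of a P-step random
    walk kernel between the hidden graph and the node's 1-hop ego subgraph.
    Dense adjacency, one hidden graph, and the random walk kernel are
    simplifying assumptions made for illustration.
    """

    def __init__(self, in_dim: int, hidden_nodes: int = 6, walk_steps: int = 2):
        super().__init__()
        self.walk_steps = walk_steps
        self.W_x = nn.Parameter(torch.randn(hidden_nodes, in_dim))
        self.W_adj = nn.Parameter(torch.randn(hidden_nodes, hidden_nodes))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n, d) node features; adj: (n, n) dense float {0,1} adjacency.
        n = x.size(0)
        # Symmetrize and squash the hidden graph's edge weights into (0, 1).
        w_adj = torch.sigmoid(0.5 * (self.W_adj + self.W_adj.t()))
        out = []
        for v in range(n):
            nbrs = torch.where(adj[v] > 0)[0]
            nbrs = nbrs[nbrs != v]                        # drop a self-loop, if any
            nodes = torch.cat([torch.tensor([v]), nbrs])  # 1-hop ego subgraph
            sub_x, sub_adj = x[nodes], adj[nodes][:, nodes]
            # Pairwise node similarities between the subgraph and the hidden
            # graph; these weight walks on the direct product graph.
            sim = sub_x @ self.W_x.t()                    # (m, hidden_nodes)
            kernel, walk = sim.sum(), sim
            for _ in range(self.walk_steps):
                # One simultaneous walk step in both graphs.
                walk = sub_adj @ walk @ w_adj
                kernel = kernel + (sim * walk).sum()
            out.append(kernel)
        return torch.stack(out).unsqueeze(-1)             # (n, 1) embeddings
```

Each filter yields one output channel per node, so, by the CNN analogy in the abstract, a practical layer would stack several hidden graphs to produce multi-channel node embeddings, just as a convolutional layer stacks several filters.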