Direction-Aware Positional and Structural Encoding for Directed Graph Neural Networks
Abstract
We propose a novel method for computing joint 2-node structural representations for link prediction in directed graphs. Existing approaches can be grouped into two families. Methods in the first family learn structural embeddings of individual nodes over the entire graph through a directed Graph Neural Network (GNN) and then combine pairs of node encodings to obtain a representation of the corresponding node pair. Methods in the second family compute a representation of the subgraph enclosing the two nodes by employing GNNs initialized with positional encodings and use it as the embedding of the potential edge. Both families of link prediction techniques suffer from considerable shortcomings: the former fail to differentiate two distant nodes with similar neighborhoods; the latter, although provably appropriate for learning edge representations, adopt undirected GNNs, positional encodings, and subgraphs, so the edge-direction signal is inevitably lost. Our proposal also builds on enclosing subgraphs, but the subgraphs are kept directed, their node encodings are learned with directed GNNs, and the initial positional embeddings are direction-aware. Our emphasis on capturing edge direction is reflected in superior link prediction performance, over a collection of benchmark graph datasets, against both baselines that apply undirected GNNs to symmetrized enclosing subgraphs and existing directed GNNs.
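To make the notions of a directed enclosing subgraph and direction-aware positional labels concrete, the following is a minimal illustrative sketch, not the paper's exact construction: it extracts a directed subgraph around a candidate edge (u, v) with networkx and labels each node w with the pair of directed shortest-path distances (d(u→w), d(w→v)), so the labels for (u, v) generally differ from those for (v, u). All function names, the hop cutoff, and the unreachable-distance convention are assumptions made for this example.

```python
# Illustrative sketch of a directed enclosing subgraph with
# direction-aware positional labels (assumed construction, for
# exposition only; not the method proposed in the paper).
import networkx as nx


def directed_enclosing_subgraph(G: nx.DiGraph, u, v, num_hops: int = 2) -> nx.DiGraph:
    """Induced subgraph on nodes reachable from u, or reaching v, within num_hops
    directed hops (edge directions are respected in both searches)."""
    fwd = nx.single_source_shortest_path_length(G, u, cutoff=num_hops)                 # u -> w
    bwd = nx.single_source_shortest_path_length(G.reverse(copy=False), v, cutoff=num_hops)  # w -> v
    nodes = set(fwd) | set(bwd) | {u, v}
    return G.subgraph(nodes).copy()


def direction_aware_labels(sub: nx.DiGraph, u, v, unreachable: int = -1) -> dict:
    """Map each node w to (d(u -> w), d(w -> v)) using directed shortest paths;
    unreachable nodes get the sentinel value."""
    from_u = nx.single_source_shortest_path_length(sub, u)
    to_v = nx.single_source_shortest_path_length(sub.reverse(copy=False), v)
    return {w: (from_u.get(w, unreachable), to_v.get(w, unreachable)) for w in sub.nodes}


if __name__ == "__main__":
    # Small toy digraph; candidate edge (0, 3).
    G = nx.DiGraph([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)])
    sub = directed_enclosing_subgraph(G, 0, 3, num_hops=2)
    print(direction_aware_labels(sub, 0, 3))
```

In an actual pipeline, such per-node label pairs would be mapped to initial feature vectors and fed to a directed GNN over the extracted subgraph; symmetrizing the subgraph or the distances would collapse (u, v) and (v, u) to the same representation, which is precisely the information loss the abstract describes.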