Graph Neural Networks with Learnable Structural and Positional Representations

Authors:

  • Vijay Prakash Dwivedi
  • Anh Tuan Luu
  • Thomas Laurent
  • Yoshua Bengio
  • Xavier Bresson

Problem statement: A major issue with arbitrary graphs is the absence of canonical positional information for nodes, which limits the representational power of GNNs, e.g. their ability to distinguish isomorphic nodes and other graph symmetries.

High-level Idea: An approach to tackle this issue is to introduce a Positional Encoding (PE) for each node and inject it at the input layer, as in Transformers.

Contributions:

  • Introduce an architecture that learns structural and positional encodings jointly (LSPE)
  • Show this yields improvements on a variety of tasks, for both sparse message-passing GNNs and fully-connected (Transformer-like) GNNs

LSPE

The “structural” representations are the usual node embeddings, denoted h. The positional representations p are not static features combined with the input once; they are recomputed at every layer by their own message-passing track. In the node UPDATE function, the current layer’s positional representation is concatenated with the structural one, so the h-update operates on [h || p]. A minimal sketch follows below.
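To make the layer-wise decoupling concrete, here is a minimal PyTorch sketch of one such layer. The class name `LSPELayer`, the mean aggregation, and the linear/ReLU/tanh updates are my own illustrative choices, not the paper's exact GatedGCN/PNA instantiations; the only point is that p gets its own update track and the h-update sees the concatenation [h || p].

```python
import torch
import torch.nn as nn

class LSPELayer(nn.Module):
    """One message-passing layer that updates node features h and
    positional encodings p in parallel (illustrative sketch)."""

    def __init__(self, dim_h, dim_p):
        super().__init__()
        # UPDATE for h sees [h || p] of the node itself plus the
        # aggregated [h || p] of its neighbours.
        self.update_h = nn.Linear(2 * (dim_h + dim_p), dim_h)
        # p has its own, separate update track.
        self.update_p = nn.Linear(2 * dim_p, dim_p)

    def forward(self, h, p, edge_index):
        src, dst = edge_index              # edges as (source, target) index tensors
        hp = torch.cat([h, p], dim=-1)     # concatenate structure and position

        # Mean-aggregate neighbour messages (sum/attention would also work).
        agg_hp = torch.zeros_like(hp).index_add_(0, dst, hp[src])
        agg_p = torch.zeros_like(p).index_add_(0, dst, p[src])
        deg = torch.zeros(h.size(0), 1, device=h.device).index_add_(
            0, dst, torch.ones(src.size(0), 1, device=h.device)).clamp(min=1)

        h_new = torch.relu(self.update_h(torch.cat([hp, agg_hp / deg], dim=-1)))
        p_new = torch.tanh(self.update_p(torch.cat([p, agg_p / deg], dim=-1)))
        return h_new, p_new
```

Mean aggregation is used here only to keep the sketch short; the architecture-specific aggregation and gating of the paper's models would slot into the same places.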

p^0 is initialized by a predefined scheme such as Laplacian PE (LapPE) or Random Walk PE (RWPE); a sketch of both is given below. LapPE requires randomly flipping the eigenvector signs during training so the model can learn the sign-flip invariance. RWPE turns out to perform better because, the authors claim, it does not have to contend with learning invariance to the sign ambiguity inherent in LapPE.
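Below is a hedged NumPy sketch of the two initialization schemes, written with dense matrices for readability; the function names `rwpe_init` and `lappe_init` are mine, and real implementations would use sparse operators. RWPE stacks the return probabilities of a 1..k-step random walk for each node; LapPE takes the first k non-trivial eigenvectors of the normalized Laplacian and randomly flips their signs during training.

```python
import numpy as np

def rwpe_init(A, k=8):
    """Random Walk PE: p_i^0 = [RW_ii, (RW^2)_ii, ..., (RW^k)_ii], i.e. the
    probability of a random walk returning to node i after 1..k steps.
    A is a dense [N, N] adjacency matrix (sketch only, not optimised)."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    RW = A / deg                        # random-walk transition matrix
    M, feats = np.eye(len(A)), []
    for _ in range(k):
        M = M @ RW
        feats.append(np.diag(M))        # return probabilities after t steps
    return np.stack(feats, axis=1)      # shape [N, k]

def lappe_init(A, k=8, training=True):
    """Laplacian PE: first k non-trivial eigenvectors of the normalised
    Laplacian. During training the signs are flipped at random so the
    model must learn the +/- invariance."""
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.clip(deg, 1, None)))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    _, eigvecs = np.linalg.eigh(L)      # columns sorted by eigenvalue
    pe = eigvecs[:, 1:k + 1]            # drop the trivial first eigenvector
    if training:
        pe = pe * np.random.choice([-1.0, 1.0], size=(1, pe.shape[1]))
    return pe                           # shape [N, k]
```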

Random Notes/Quotes

  • “Another PE candidate for graphs can be Laplacian Eigenvectors (Dwivedi et al., 2020; Dwivedi & Bresson, 2021) as they form a meaningful local coordinate system, while preserving the global graph structure. However, there exists sign ambiguity in such PE as eigenvectors are defined up to ±1, leading to 2^k number of possible sign values when selecting k eigenvectors which a network needs to learn.”
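A tiny illustration of the 2^k count in the quote above (toy numbers, not from the paper): for k eigenvector columns, every per-column sign choice yields an equally valid encoding of the same graph, and the network has to treat all of them as equivalent.

```python
from itertools import product
import numpy as np

k = 3
pe = np.random.randn(5, k)             # toy [N=5, k=3] eigenvector matrix
equivalent_pes = [pe * np.array(signs) for signs in product([1, -1], repeat=k)]
print(len(equivalent_pes))             # 2**3 = 8 equally valid encodings
```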

Questions

  • In what ways is this different than merely having learnable node features?
  • Is RWPE inductive, whereas LapPE is transductive?