Closed · douglasrizzo closed this issue 4 years ago
While it is true that ChebConv and GCNConv are motivated by the eigenbasis of the Laplacian matrix (and are hence, in theory, restricted to operating on a single graph), their simplifications/modifications allow them to be applied to multiple graphs as well, since they all work in a spatially localized fashion. That is, all the GNN operators in PyG are generally suitable for operating on multiple graphs.
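To make the "spatially localized" point concrete, here is a minimal pure-Python sketch of the simplified GCN propagation rule, H' = D^{-1/2}(A + I)D^{-1/2} H W, written as a per-node neighborhood aggregation. The function name `gcn_layer` and the toy graphs are illustrative, not PyG's implementation; the point is only that the same weight matrix applies unchanged to graphs of different structure:

```python
import math

def gcn_layer(adj, x, w):
    """One GCN propagation step, H' = D^{-1/2} (A + I) D^{-1/2} H W,
    computed as a purely local neighborhood aggregation.
    adj: list of neighbor lists; x: node feature vectors (lists of floats);
    w: weight matrix as a list of rows (in_dim x out_dim)."""
    n = len(adj)
    # add self-loops, then compute degrees of the self-looped graph
    nbrs = [set(adj[i]) | {i} for i in range(n)]
    deg = [len(nbrs[i]) for i in range(n)]
    out = []
    for i in range(n):
        # aggregate symmetrically normalized neighbor features:
        # only the local structure around node i is used
        agg = [0.0] * len(x[0])
        for j in nbrs[i]:
            norm = 1.0 / math.sqrt(deg[i] * deg[j])
            for k in range(len(x[j])):
                agg[k] += norm * x[j][k]
        # linear transform with the shared weight matrix
        out.append([sum(agg[k] * w[k][c] for k in range(len(w)))
                    for c in range(len(w[0]))])
    return out

# The same weights apply to two graphs with different structure:
w = [[1.0, 0.0], [0.0, 1.0]]            # identity weights for illustration
g1 = [[1], [0]]                          # 2-node path graph
g2 = [[1, 2], [0], [0]]                  # 3-node star graph
x1 = [[1.0, 0.0], [0.0, 1.0]]
x2 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
h1 = gcn_layer(g1, x1, w)
h2 = gcn_layer(g2, x2, w)
```

Nothing in `gcn_layer` references a fixed eigenbasis; the Laplacian only motivates the normalization, which is computed locally per graph.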
I agree with @rusty1s. Let me just share three papers that analyze this question theoretically and empirically, supporting the claim that most spectral GCNs work just fine on multiple graphs:
Thanks @rusty1s. I saw that DGL had a different equation for the GCN operator, more in line with a spatial interpretation of the convolution than a spectral one. I just wasn't sure with regard to PyG, as its definition of GCNConv is the one from the paper, which is derived from the spectral convolution. Also thanks @bknyaz, I'll definitely check the linked papers.
Just to add a little more info, I was pointed to the paper on Message Passing Neural Networks, which has an interpretation of Laplacian-based models as MPNNs in the appendix.
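For reference, the MPNN template mentioned above can be sketched in a few lines: each layer computes messages along edges, sums them per node, and updates the node states; Laplacian-based models fit this template with a particular choice of message function. A minimal pure-Python sketch, where `message` and `update` are hypothetical placeholder functions (real models would use learned weights and nonlinearities):

```python
def mpnn_layer(edges, h, message, update):
    """One message-passing step: m_v = sum over neighbors w of message(h_v, h_w),
    then h_v' = update(h_v, m_v). `edges` is a list of directed (src, dst) pairs,
    `h` maps node ids to scalar states."""
    msgs = {v: 0.0 for v in h}
    for src, dst in edges:
        msgs[dst] += message(h[dst], h[src])
    return {v: update(h[v], msgs[v]) for v in h}

# A GCN-flavored instantiation on a 3-node path graph: messages are the
# neighbor states, the update is a plain sum (normalization, weights, and
# nonlinearity omitted for brevity).
h = {0: 1.0, 1: 2.0, 2: 3.0}
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
h_next = mpnn_layer(edges, h,
                    message=lambda hv, hw: hw,    # pass the neighbor's state
                    update=lambda hv, m: hv + m)  # add aggregated messages
```

Since the layer only consumes an edge list, it applies to any graph, which is the interpretation the MPNN paper gives for Laplacian-based models in its appendix.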
Neural Message Passing for Quantum Chemistry: https://arxiv.org/abs/1704.01212
❓ Questions & Help
Hi, this is more of a theoretical question. The GAT paper mentions that some graph convolutional layers depend on structural information of the graph they are applied to, which makes it impossible to train them on multiple graphs or to apply them to a different graph than the one they were originally trained on. For example, GCN and its predecessors depend on the Laplacian matrix or the adjacency matrix of a graph, which seems to tie them to a single graph.
The GAT paper lists itself and GraphSAGE as models that may be applied to multiple graphs. While I go through each of the layer types looking for answers, I might as well ask: are there other convolutional layers in PyG that are detached from the structure of the graph and can be applied to multiple graphs?