Closed davidfstein closed 1 year ago
Hi @davidfstein ,
GIN and GCN apply jumping knowledge by default. You can replace the embed_dim with embed_dim*n_layer in GraphCL() to make them work together. You can also refer to Cell [8] in this example.
Please let us know if you have any further questions! Thank you!
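To make the suggested fix concrete, here is a minimal pure-Python sketch of the dimension mismatch and its resolution. The names (`make_projection_head`, `forward`) are illustrative stand-ins, not the actual GraphCL/PyG API; the point is only that the projection head's input size must equal the encoder output size, which is embed_dim * n_layer when jumping knowledge concatenation is used.

```python
# Hypothetical sketch: a stand-in "linear layer" that only checks shapes,
# to show why the projection head must be sized embed_dim * n_layer.

embed_dim, n_layer = 128, 3

def make_projection_head(in_dim, out_dim):
    # Stand-in for a linear layer: rejects inputs whose feature
    # dimension does not match in_dim, like a real matmul would.
    def forward(x_dim):
        if x_dim != in_dim:
            raise RuntimeError(
                f"mat1 and mat2 shapes cannot be multiplied "
                f"({x_dim} vs {in_dim})")
        return out_dim
    return forward

# GIN/GCN with jumping-knowledge concatenation output this many features:
encoder_out_dim = embed_dim * n_layer  # 384, matching the error message

# Wrong: a head sized for embed_dim only raises the reported error.
# make_projection_head(embed_dim, embed_dim)(encoder_out_dim)

# Fix: size the head's input as embed_dim * n_layer.
head = make_projection_head(embed_dim * n_layer, embed_dim)
print(head(encoder_out_dim))  # 128
```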
Thanks! Just to check my understanding: in the jumping knowledge paradigm, rather than combining each node's representation with the aggregated representations from its neighbors after each layer, the per-layer representations are concatenated at the end, so we get n_layers * embed_dim output? And this would also rely on self-loops being added, right?
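The dimension arithmetic in the question above can be sketched without any graph library. This is a hypothetical shapes-only illustration (plain lists, no message passing), not the library's code: "last" readout keeps the final layer's embed_dim features, while jumping-knowledge "cat" concatenates all layers into n_layers * embed_dim features per node.

```python
# Shapes-only sketch of "last" vs "cat" jumping-knowledge aggregation.
n_layers, embed_dim, n_nodes = 3, 128, 4096

# Per-layer node embeddings, as produced by successive message-passing
# layers (values are dummies; only the shapes matter here).
layer_outputs = [[[0.0] * embed_dim for _ in range(n_nodes)]
                 for _ in range(n_layers)]

def jk_last(outputs):
    # Standard readout: keep only the final layer's representations.
    return outputs[-1]

def jk_cat(outputs):
    # Jumping knowledge "cat": concatenate every layer's representation
    # per node, yielding n_layers * embed_dim features.
    return [sum((layer[i] for layer in outputs), [])
            for i in range(len(outputs[0]))]

print(len(jk_last(layer_outputs)[0]))  # 128
print(len(jk_cat(layer_outputs)[0]))   # 384, i.e. the 4096x384 in the error
```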
When trying to run GraphCL with GIN or GCN rather than ResGCN an error is produced.
For example:

RuntimeError: mat1 and mat2 shapes cannot be multiplied (4096x384 and 128x128)
The GIN and GCN produce embed_dim * n_layer sized output, but the projection head appears to expect embed_dim sized input. Is it possible to use GIN and GCN for graph-level tasks?