snap-stanford / ogb

Benchmark datasets, data loaders, and evaluators for graph machine learning
https://ogb.stanford.edu
MIT License

role of "root_emb" layer in GCN examples for graph property prediction task #93

Closed · ChangminWu closed this 3 years ago

ChangminWu commented 3 years ago

Hi,

I am a bit confused by your implementation of GCN in the examples for the graph property prediction task. For example, in ogb/examples/graphproppred/mol/conv.py, the GCN message passing is written as:

```python
self.propagate(edge_index, x=x, edge_attr=edge_embedding, norm=norm) + F.relu(x + self.root_emb.weight) * 1. / deg.view(-1, 1)
```
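For context, here is my reading of the surrounding layer as a paraphrased sketch (assuming a PyTorch Geometric `MessagePassing` subclass; this is not a verbatim copy of the repo code):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import MessagePassing

class GCNConvSketch(MessagePassing):
    def __init__(self, emb_dim):
        super().__init__(aggr='add')
        self.linear = torch.nn.Linear(emb_dim, emb_dim)
        self.root_emb = torch.nn.Embedding(1, emb_dim)  # the layer in question

    def message(self, x_j, edge_attr, norm):
        # Each real edge j -> i contributes norm_ij * relu(x_j + e_ij);
        # the quoted line then adds relu(x_i + root_emb) / deg_i on top.
        return norm.view(-1, 1) * F.relu(x_j + edge_attr)
```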

I don't quite understand the role of self.root_emb in this line of code. To me, this layer seems to serve the same function as the bias in the preceding linear transform of x... Could you please explain in more detail why this embedding layer is useful here? Is it a way to improve the performance of GCN?

Thank you!

weihua916 commented 3 years ago

Hi! You can think of root_emb as the feature of a self-edge (an edge connecting a node to itself). We just replaced edge_attr with root_emb here.

I have not tested other modeling choices, but I do not think this choice is essential to the model's performance.
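To make the analogy concrete, here is a minimal illustrative sketch (not the repo code itself): the root_emb term is the message a node would receive from itself along a self-loop whose edge feature is root_emb.weight, except that it is normalized by 1/deg rather than the symmetric norm used for real edges.

```python
# Illustrative sketch (not the repo code): the root_emb term as a
# self-loop message. Shapes: x is [num_nodes, emb_dim].
import torch
import torch.nn.functional as F

emb_dim, num_nodes = 4, 3
x = torch.randn(num_nodes, emb_dim)        # node features after the linear transform
deg = torch.tensor([2., 1., 3.])           # degrees incl. self-loop (made-up values)
root_emb = torch.nn.Embedding(1, emb_dim)  # one shared "self-edge" feature

# Message a node sends itself along the self-loop i -> i, i.e.
# message(x_j=x_i, edge_attr=root_emb.weight), normalized by 1/deg_i:
self_msg = F.relu(x + root_emb.weight) * (1. / deg.view(-1, 1))
```

This also shows why it is not redundant with the bias of the preceding Linear layer: root_emb sits inside the ReLU and is scaled per node by 1/deg, so it cannot be folded into that bias.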

ChangminWu commented 3 years ago

Now I see... Thanks a lot for the explanation!