snap-stanford / GraphGym

Platform for designing and evaluating Graph Neural Networks (GNN)

Enhance GeneralConvLayer #26

Closed joneswong closed 3 years ago

joneswong commented 3 years ago
  1. Added a `gnn.flow` argument to enable message passing in either direction for directed graphs (a minimal PyG sketch of what this option controls follows the list).
  2. This in turn enables GeneralConvLayer to normalize the adjacency matrix of directed graphs, according to the discussion.
  3. When there is no self message and the adjacency matrix is not normalized, i.e., `gnn.self_msg == "none"` and `gnn.normalize_adj == False`, no self loop is added. Thus, in the current implementation, message passing ignores the node's own embedding from the previous layer, i.e., $$h_{v}^{(l-1)}$$, which is inconsistent with the convention.
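
As a point of reference, here is a minimal sketch (not the GraphGym implementation) of how a configurable flow direction is exposed through PyG's `MessagePassing` base class, which is what the new `gnn.flow` option toggles for GeneralConvLayer; the class name `FlowConv` is just a placeholder:

from torch_geometric.nn import MessagePassing

class FlowConv(MessagePassing):
    def __init__(self, flow='source_to_target'):
        # 'source_to_target' aggregates messages at edge_index[1],
        # 'target_to_source' aggregates them at edge_index[0].
        super().__init__(aggr='add', flow=flow)

    def forward(self, x, edge_index):
        return self.propagate(edge_index, x=x)

    def message(self, x_j):
        # x_j is the feature of the message's sender under the chosen flow.
        return x_j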

I tested this PR:

import torch
# cfg and GeneralConvLayer come from GraphGym; the import paths below are the
# ones I used and may differ between versions.
from graphgym.config import cfg
from graphgym.contrib.layer.generalconv import GeneralConvLayer

cfg.gnn.normalize_adj = True  # also tested with False
cfg.gnn.self_msg = 'none'
cfg.gnn.agg = 'add'
cfg.gnn.flow = 'target_to_source'
# 1-dim layer with an all-ones weight, so the output is just the aggregation.
conv = GeneralConvLayer(1, 1, bias=False)
conv.weight.data = torch.ones((1, 1))
x = torch.Tensor([[1], [2], [3], [4], [5]]).to(torch.float32)
edge_index = torch.Tensor([[0, 1, 2, 2], [2, 2, 0, 4]]).to(torch.int64)
# edge_index = to_undirected(edge_index, x.size(0))  # for the undirected case
out = conv(x, edge_index)
print(out)

The output matches what we expected:

tensor([[1.5000],
        [3.0000],
        [4.0000],
        [4.0000],
        [2.5000]], grad_fn=<ScatterAddBackward>)

If we instead set `cfg.gnn.flow = "source_to_target"` as usual, the result is:

tensor([[1.5000],
        [1.0000],
        [2.5000],
        [4.0000],
        [6.0000]], grad_fn=<ScatterAddBackward>)

which is also what we expected.
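
For anyone checking the numbers by hand: both tensors are consistent with each message being the sender's feature divided by the number of messages that sender emits (self loops included), summed at the receiver; since the 1x1 weight is all ones, the layer output equals this aggregation. That reading is only inferred from the printed values, not from the layer's normalization code, but the short script below reproduces both tensors under that assumption:

import torch

x = torch.tensor([[1.], [2.], [3.], [4.], [5.]])
edge_index = torch.tensor([[0, 1, 2, 2], [2, 2, 0, 4]])
n = x.size(0)

def expected(flow):
    # Pick the sender/receiver rows of edge_index according to the flow direction.
    src, dst = (0, 1) if flow == 'source_to_target' else (1, 0)
    senders = torch.cat([edge_index[src], torch.arange(n)])    # self loops included
    receivers = torch.cat([edge_index[dst], torch.arange(n)])
    # deg[j] = number of messages node j sends (its self loop counted).
    deg = torch.zeros(n).scatter_add_(0, senders, torch.ones(senders.size(0)))
    out = torch.zeros(n, 1)
    out.scatter_add_(0, receivers.unsqueeze(-1), x[senders] / deg[senders].unsqueeze(-1))
    return out

print(expected('target_to_source'))  # [[1.5], [3.0], [4.0], [4.0], [2.5]]
print(expected('source_to_target'))  # [[1.5], [1.0], [2.5], [4.0], [6.0]]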

JiaxuanYou commented 3 years ago

Thanks for the contribution. Could you add this contribution to https://github.com/snap-stanford/GraphGym/tree/master/graphgym/contrib/layer ? There you can call it generaledgeconv_v2 or whatever name you like. I may want to keep the current generalconv as it is to ensure the reproducibility of existing results.
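
For reference, a contributed layer in GraphGym is typically a new file under graphgym/contrib/layer/ containing a wrapper module plus a `register_layer` call; the sketch below assumes that pattern, and every file, class, and key name in it is a placeholder rather than part of this PR:

# graphgym/contrib/layer/generalconv_v2.py  (placeholder file name)
import torch.nn as nn
from graphgym.register import register_layer

class GeneralConvV2(nn.Module):
    def __init__(self, dim_in, dim_out, bias=False, **kwargs):
        super().__init__()
        # self.model = <the enhanced GeneralConvLayer from this PR>(dim_in, dim_out, bias=bias)
        ...

    def forward(self, batch):
        # batch.node_feature = self.model(batch.node_feature, batch.edge_index)
        return batch

register_layer('generalconv_v2', GeneralConvV2)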

joneswong commented 3 years ago

updated accordingly