Open dlysisus opened 4 years ago
Thanks for this request. I've added it to the PyG 1.7.0 roadmap.
Hi @rusty1s, thanks a lot for such a great framework! I'm thinking about implementing this particular feature. As far as I can see, it should be added to the base `MessagePassing` class (passing `edge_attr` around), so it might affect descendant classes. Is this correct, or is there a better way? I also have a question about GGNN: there is a separate weight for each propagation step (https://github.com/rusty1s/pytorch_geometric/blob/master/torch_geometric/nn/conv/gated_graph_conv.py#L49). Is it supposed to be so? As I understand it, the idea is to treat each graph aggregation step separately, but there is an RNN unit specifically for handling that.
Thanks for wanting to take care of that :)
I'm not exactly sure how to tackle this problem yet. I think we should first collect some ideas on how to approach this feature.
One idea I have in mind is to provide a module that converts each GNN into a R-GNN, e.g.:
```python
conv = RGNN(GATConv(...), num_relations=...)
```
which would have the benefit of leaving most GNN models mostly unchanged.
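To make the wrapper idea concrete, here is a minimal framework-free sketch of what such an `RGNN` module could look like: one independent copy of the wrapped layer per relation, with the per-relation outputs summed node-wise. All names (`RGNN`, `base_layer_factory`) and the toy base layer are illustrative assumptions, not PyG's actual API.

```python
# Hypothetical sketch of the proposed RGNN wrapper (names are illustrative).
class RGNN:
    """Wraps a base GNN layer into a relational one."""

    def __init__(self, base_layer_factory, num_relations):
        # One independent layer (with its own parameters) per relation.
        self.layers = [base_layer_factory() for _ in range(num_relations)]
        self.num_relations = num_relations

    def forward(self, x, edge_index, edge_type):
        # Run the wrapped layer on each relation's edge subset,
        # then sum the per-relation results node-wise.
        out = [0.0] * len(x)
        for r, layer in enumerate(self.layers):
            edges_r = [e for e, t in zip(edge_index, edge_type) if t == r]
            h = layer(x, edges_r)
            out = [o + hv for o, hv in zip(out, h)]
        return out


# Toy base layer: sums incoming neighbour features (stands in for GATConv).
def make_sum_layer():
    def layer(x, edges):
        out = [0.0] * len(x)
        for src, dst in edges:
            out[dst] += x[src]
        return out
    return layer


conv = RGNN(make_sum_layer, num_relations=2)
```

A real implementation would of course use `torch.nn.Module` and deep-copy the given layer instead of a factory, but the control flow would be the same.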
Another approach is to make explicit use of an `edge_type` property in `MessagePassing`. Every time `MessagePassing.propagate` encounters the `edge_type` property, it would take care of splitting the data according to edge types and call `message` and `aggregate` multiple times. This would simplify creating R-GNN layers in PyTorch Geometric, but requires some work to make all existing GNN layers make use of `edge_type` by default. I like providing this functionality directly in `MessagePassing`, but on the other hand this can become bloated quite fast.
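The splitting logic described above can be sketched without any framework: filter the edges per relation, run `message` and `aggregate` once per relation, and combine the partial results. The function and helper names below are hypothetical, not PyG's actual internals.

```python
# Sketch of how propagate() might handle edge_type (illustrative, not PyG's API).
def propagate_with_edge_type(edge_index, edge_type, x, message, aggregate,
                             num_relations):
    # Collect one partial aggregation per relation.
    partial = []
    for r in range(num_relations):
        # Keep only the edges of relation r.
        edges_r = [(s, d) for (s, d), t in zip(edge_index, edge_type) if t == r]
        # Call message() once per edge of this relation.
        msgs = [message(x[s], r) for s, _ in edges_r]
        dsts = [d for _, d in edges_r]
        # Call aggregate() once per relation.
        partial.append(aggregate(msgs, dsts, len(x)))
    # Combine the per-relation outputs (here: elementwise sum).
    return [sum(p[i] for p in partial) for i in range(len(x))]


# Simple sum aggregation over destination nodes.
def sum_aggregate(msgs, dsts, num_nodes):
    out = [0.0] * num_nodes
    for m, d in zip(msgs, dsts):
        out[d] += m
    return out
```

The bloat concern is visible even here: `propagate` now needs an extra code path, a `num_relations` argument, and a policy for combining partial results.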
> One idea I have in mind is to provide a module that converts each GNN into a R-GNN
I think that would be a very convenient API to use! At the same time, it seems not directly related to `edge_type` support. I've implemented it in GGNN (https://github.com/rusty1s/pytorch_geometric/pull/1854), but only for one data format, since the one based on `SparseTensor` would require modifying `MessagePassing`.
🚀 Feature
The current implementation of the Gated Graph Neural Network (class GatedGraphConv) doesn't support typed edges. While the original paper [1] doesn't talk about this in detail, the MSR implementation of the Graph Neural Network [2] supports typed edges by having a set of different learnable parameters for each edge type.
[1] https://arxiv.org/pdf/1511.05493.pdf
[2] https://github.com/microsoft/tf-gnn-samples/blob/master/gnns/ggnn.py
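The typed-edge scheme from the MSR implementation [2] can be sketched numerically: each edge type gets its own learnable weight (a scalar here instead of a matrix, to keep the example framework-free), and the typed messages are aggregated per node before the GRU update. This is an assumed toy sketch, not the actual `GatedGraphConv` code.

```python
# Toy sketch of one typed-edge GGNN propagation step: each edge type t has its
# own weight weights[t], applied to the source node's state.
def ggnn_step(h, edges, edge_type, weights):
    # Aggregate typed messages per destination node.
    m = [0.0] * len(h)
    for (src, dst), t in zip(edges, edge_type):
        m[dst] += weights[t] * h[src]
    # A full GGNN would now update node states with a shared GRU,
    # h_new = gru(m, h); the aggregated messages are returned directly here.
    return m
```

Extending `GatedGraphConv` this way would replace its single per-step weight with one weight matrix per edge type while keeping the GRU shared across types, matching the MSR design.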