Closed by jermainewang 5 years ago
Some additional items:
For the API, I suggest providing a more flexible message function to handle different types of edges. For example, in Tree-LSTM we need to send different messages for the left branch and the right branch.
We plan to implement a lightweight kvstore for DGL instead of using the MXNet KVStore. Should we consider adding this item to the 0.3 release?
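A minimal sketch of the per-edge-type message idea from the first item above, in plain Python. The function names and the dispatch-by-edge-type scheme are hypothetical illustrations, not DGL's API:

```python
# Hypothetical sketch: dispatch a different message function per edge
# type, as Tree-LSTM needs for its left/right branches.
# The computations inside each function are placeholders.

def left_message(src_feat):
    # message for the left branch: scale by 2 (placeholder)
    return [2 * x for x in src_feat]

def right_message(src_feat):
    # message for the right branch: negate (placeholder)
    return [-x for x in src_feat]

MESSAGE_FUNCS = {"left": left_message, "right": right_message}

def send_messages(edges):
    """edges: list of (src_feat, edge_type) pairs -> list of messages."""
    return [MESSAGE_FUNCS[etype](feat) for feat, etype in edges]

edges = [([1.0, 2.0], "left"), ([3.0, 4.0], "right")]
print(send_messages(edges))  # [[2.0, 4.0], [-3.0, -4.0]]
```

The point is only that the message function receives (or is selected by) the edge type, so different branches can produce different messages.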
kernel support:
NN module:
dataset:
Scheduling improvement:
Giant graph:
Model examples:
If time permits, we might add GPU samplers to accelerate mini-batch training on a single GPU.
We plan to implement a lightweight kvstore for DGL instead of using the MXNet KVStore. Should we consider adding this item to the 0.3 release?
The 0.3 version will be a fast release. How about we plan it for the 0.4 release?
No problem.
Looking forward to support for giant graphs!
A few models I'd like to see:
Hi @alexvpickering, we will look into it. One question: do you expect to see them as layers in the NN package, or as complete examples with data loading/training/testing?
Thanks @jermainewang! I was thinking examples like found in e.g. dgl/examples/pytorch/sgc.
Will models in Euler such as LsHNE, LasGNN, and ScalableGCN be implemented in the future? These algorithms are similar to PinSage applications in production environments and support heterogeneous networks.
@HuangZhanPeng We will work on heterogeneous graphs in the 0.4 release. We'll evaluate these models.
It would be great if you could also implement APPNP! It's quite simple and performed best in PyTorch Geometric's benchmark, so people could clearly benefit here as well. :)
Thank you for the suggestion @klicperajo. I think we already have an implementation of this work :), though we may need to tune it more carefully to achieve better performance. See the dgl APPNP example.
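For reference, APPNP's propagation rule from the paper is Z_{k+1} = (1 - alpha) * A_hat @ Z_k + alpha * H, where A_hat is the symmetrically normalized adjacency with self-loops and H is the initial prediction. A pure-NumPy sketch on a toy graph (shapes and values illustrative, not DGL's implementation):

```python
import numpy as np

def appnp_propagate(A, H, alpha=0.1, K=10):
    """APPNP propagation: Z_{k+1} = (1 - alpha) * A_hat @ Z_k + alpha * H."""
    A = A + np.eye(A.shape[0])               # add self-loops
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt      # symmetric normalization
    Z = H.copy()
    for _ in range(K):
        Z = (1 - alpha) * (A_hat @ Z) + alpha * H
    return Z

# Toy 3-node path graph with 2-class initial predictions.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.array([[1., 0.], [0., 1.], [1., 0.]])
Z = appnp_propagate(A, H)
print(Z.shape)  # (3, 2)
```

Note that with alpha = 1 the propagation reduces to returning H unchanged, which is a handy sanity check.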
Oh, nice! I didn't notice since it's not in the summary table. Yes, there are a couple of details needed to get the last few tenths of a percent of accuracy. They should all be in the paper, though. :)
Just updated the roadmap with a checklist. Our tentative date for this release is 06/07.
For all committers @zheng-da @szha @BarclayII @VoVAllen @ylfdq1118 @yzh119 @GaiYu0 @mufeili @aksnzhy @zzhang-cn @ZiyueHuang , please vote with :+1: if you agree with this plan.
did you mean 6/7?
@yzh119 will be in charge of edge/node removal
@jermainewang What do you need for issues under NN module?
One question: do we need to rebuild the node/edge indices when calling node/edge removal? For example:

```python
import dgl
import torch as th

g = dgl.DGLGraph()
g.add_nodes(5)
g.ndata['x'] = th.rand(5, 3)
g.del_nodes([2, 3])  # proposed removal API under discussion
print(g.nodes())
```

What should be the output? `tensor([0, 1, 4])`?
I must say, if the node and edge indices have to be rebuilt after node/edge removal, the behavior of these operations would be VERY confusing. I can't see any scenario in which these operations would make sense.
I think we need to return the removed edges and let the user handle the mapping between the original and modified indices (or provide a utility function).
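A hypothetical utility along these lines, sketched in plain Python rather than DGL's API: remove nodes, compact the remaining IDs, and return the old-to-new mapping so the user can translate indices themselves.

```python
def remove_nodes(num_nodes, edges, to_remove):
    """Remove nodes and compact the index space.

    Returns (new_num_nodes, new_edges, old_to_new), where old_to_new maps
    each surviving original ID to its new compact ID. Hypothetical helper,
    not DGL's actual API.
    """
    removed = set(to_remove)
    kept = [v for v in range(num_nodes) if v not in removed]
    old_to_new = {old: new for new, old in enumerate(kept)}
    new_edges = [(old_to_new[u], old_to_new[v])
                 for u, v in edges
                 if u not in removed and v not in removed]
    return len(kept), new_edges, old_to_new

# Mirroring the example above: 5 nodes, remove nodes 2 and 3.
n, e, mapping = remove_nodes(5, [(0, 1), (1, 2), (3, 4)], [2, 3])
print(n, e, mapping)  # 3 [(0, 1)] {0: 0, 1: 1, 4: 2}
```

Returning the mapping sidesteps the ambiguity discussed above: the graph always has a compact index space, and callers who care about original IDs can recover them.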
@VoVAllen , I don't see any benefits of doing this compared to creating a subgraph of the current graph.
@jermainewang What do you need for issues under NN module?
(1) EdgeSoftmax needs to be re-implemented using the new builtins; (2) GAT as an NN module. Both depend on the kernel branch to be fully merged.
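For context, EdgeSoftmax normalizes per-edge scores over each destination node's incoming edges, as in GAT attention. A pure-NumPy sketch of the computation it performs (not the builtin-kernel implementation discussed here):

```python
import numpy as np

def edge_softmax(dst, scores, num_nodes):
    """Softmax per-edge scores grouped by destination node.

    dst[i] is the destination node of edge i; scores are per-edge logits.
    Illustrative loop version, not DGL's fused kernel.
    """
    out = np.empty_like(scores)
    for v in range(num_nodes):
        idx = np.where(dst == v)[0]
        if idx.size == 0:
            continue
        s = scores[idx] - scores[idx].max()   # numerically stable softmax
        e = np.exp(s)
        out[idx] = e / e.sum()
    return out

dst = np.array([0, 0, 1])
scores = np.array([1.0, 1.0, 5.0])
print(edge_softmax(dst, scores, 2))  # [0.5 0.5 1. ]
```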
@mufeili do you have time to take over these two items?
`dgl.to_simple_graph`: API to remove multi-edges
`dgl.to_bidirected`: API to transform the graph to bidirectional (Issue #419)
@jermainewang I may need to wait till the weekend to start the implementation. If that's good for you, I'll take them over.
I've taken `to_simple_graph`. The rest is yours.
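For context, what these two utilities do can be sketched with plain edge lists; this is illustrative only, not DGL's implementation:

```python
def to_simple(edges):
    """Drop duplicate (multi-)edges, keeping first-occurrence order."""
    seen, out = set(), []
    for e in edges:
        if e not in seen:
            seen.add(e)
            out.append(e)
    return out

def to_bidirected(edges):
    """Add the reverse of every edge, then de-duplicate."""
    return to_simple(edges + [(v, u) for u, v in edges])

edges = [(0, 1), (0, 1), (1, 2)]
print(to_simple(edges))      # [(0, 1), (1, 2)]
print(to_bidirected(edges))  # [(0, 1), (1, 2), (1, 0), (2, 1)]
```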
v0.3 has been released. Thanks everyone!
Here is the v0.3 release plan. The tentative release date is 06/07.
[Feature] Kernel support
Kernels are critical for our system performance. The next release will include all the basic building blocks and APIs for future extensions.
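For intuition, a builtin such as `src_mul_edge` (one instance of the `src_op_edge` pattern listed below) computes one message per edge by combining the source node's feature with the edge's feature. A NumPy sketch under assumed shapes, not the actual fused kernel:

```python
import numpy as np

def src_mul_edge(src, node_feat, edge_feat):
    """src[i]: source node of edge i; returns one message per edge.

    Sketch of the src_op_edge pattern with op = multiply; shapes and
    names are assumptions for illustration, not DGL's kernels.
    """
    return node_feat[src] * edge_feat  # gather source features, then op

node_feat = np.array([[1.0, 2.0], [3.0, 4.0]])            # 2 nodes, dim 2
src = np.array([0, 1, 1])                                 # 3 edges
edge_feat = np.array([[2.0, 2.0], [1.0, 0.0], [0.5, 0.5]])
msgs = src_mul_edge(src, node_feat, edge_feat)
# rows: [2, 4], [3, 0], [1.5, 2]
```

The value of providing these as builtins is that the gather and the elementwise op can be fused in one kernel instead of materializing the gathered features.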
`src_op_edge`, `src_op_dst`, `edge_op_dst`, `dst_op_edge`, etc.
[Feature] Giant graph support
Release a demo about how to train GNNs on giant graphs that cannot be hosted in a single GPU memory. This includes:
[Enhancement] NN module
[Enhancement] graph structure
`DGLGraph.to(device)` (Issue #503 @HQ01, PR #600)
`dgl.to_simple_graph`: API to remove multi-edges (#587 @jermainewang)
`dgl.to_bidirected`: API to transform the graph to bidirectional (#598 @mufeili)
Model Examples
Bug fix
Features postponed to v0.4
[Feature] DGL graph data format. As we want to include more popular graph datasets in DGL, it is time to decouple datasets from the DGL repo.
[Feature] Bipartite (k-partite) graph API. Bipartite graphs are common in recommendation settings. We have heard many requests for this.
[Enhancement] Improve IR & scheduling system