dmlc / dgl

Python package built to ease deep learning on graph, on top of existing DL frameworks.
http://dgl.ai
Apache License 2.0

[Roadmap] v0.2 release checklist #302

Closed jermainewang closed 5 years ago

jermainewang commented 5 years ago

Thanks everyone for the hard work. We really did a lot for a smooth beta release. With the repo now open and more community help incoming, it is a good time to figure out the roadmap to the v0.2 release. Here is a draft proposal; feel free to reply, comment, and discuss. Note that the list is long, but we can figure out the priorities later. We'd like to hear your opinions and push DGL to the next stage.

Model examples

Core system improvement

Tutorial/Blog

Project improvement

Deferred goals

(will not be included in this release unless someone takes over)

BarclayII commented 5 years ago

My two cents.

Models:

Core system improvement:

Project Improvement:

Others:

zheng-da commented 5 years ago

I think we should support an operator that computes dot(X1, X2.T) * adj. This is more general than SpMV. If we also generalize the "multiply" and "add" operations, it becomes even more general than generalized SpMV. I think it's useful for Transformers.
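A minimal sketch of what such an operator could compute, assuming a COO edge list (src, dst) for the nonzero entries of adj; the names here are hypothetical, not an existing DGL API:

```python
import torch

def masked_dot(X1, X2, src, dst):
    """Sparse form of dot(X1, X2.T) * adj: one score per edge of adj.

    X1, X2:   (N, d) node feature matrices (e.g. query/key projections)
    src, dst: (E,) endpoints of the edges where adj is nonzero
    """
    # gather the endpoint features and take a per-edge dot product,
    # never materializing the dense N x N score matrix
    return (X1[src] * X2[dst]).sum(dim=-1)
```

For a Transformer-style attention layer on a graph, the resulting per-edge scores would then be normalized with an edge softmax.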

zheng-da commented 5 years ago

BTW, any action item for accelerating GAT?

jermainewang commented 5 years ago

> I think we should support an operator that computes dot(X1, X2.T) * adj. This is more general than SpMV. If we also generalize the "multiply" and "add" operations, it becomes even more general than generalized SpMV. I think it's useful for Transformers.

@zheng-da Is it similar to "sparse src_mul_dst"?

zheng-da commented 5 years ago

I see what you mean by src_mul_dst. I think so. We can use this kind of operation to accelerate other models such as GAT (actually, any model that uses both the source and destination vertices in its edge computation).

How are we going to implement these operators? In DGL or in the backend? If we implement them in DGL, how do we support async computation in MXNet?

BarclayII commented 5 years ago

> BTW, any action item for accelerating GAT?

That would be the sparse softmax I proposed?

> How are we going to implement these operators? In DGL or in the backend? If we implement them in DGL, how do we support async computation in MXNet?

It seems that PyTorch operators can be implemented externally (https://github.com/rusty1s/pytorch_scatter), so putting them in the DGL repo should be fine.

I don't know if/how external operators can hook into MXNet; should we compile MXNet from source? Also, I guess MXNet can implement these operators in its own repo regardless, since having these sparse operators should always be beneficial?

jermainewang commented 5 years ago

In terms of implementation, it's better to have it in DGL so it can be used with every framework. In general, we should follow each framework's guidance on implementing custom operators (such as this guide in PyTorch). We should avoid dependencies on the framework's C++ libraries. This leaves us with a few choices, including: (1) use a Python extension, such as https://mxnet.incubator.apache.org/tutorials/gluon/customop.html ; (2) use a dynamic library, such as https://pytorch.org/docs/stable/cpp_extension.html . We don't know about MXNet's solution yet, but we should investigate.
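As a concrete reference for option (1), here is a minimal pure-Python sketch in PyTorch terms (the op and its names are made up for illustration, not an existing DGL or PyTorch operator):

```python
import torch

class EdgeSum(torch.autograd.Function):
    """Toy Python-level custom op: sums per-edge values into their
    destination nodes; backward scatters the gradient back to the edges."""

    @staticmethod
    def forward(ctx, edge_vals, dst, num_nodes):
        ctx.save_for_backward(dst)
        out = torch.zeros(num_nodes, dtype=edge_vals.dtype, device=edge_vals.device)
        return out.index_add_(0, dst, edge_vals)

    @staticmethod
    def backward(ctx, grad_out):
        dst, = ctx.saved_tensors
        # d(out[dst[e]]) / d(edge_vals[e]) = 1, so just gather the gradient
        return grad_out[dst], None, None

# usage: node_sums = EdgeSum.apply(edge_vals, dst, num_nodes)
```

An MXNet version would go through the Gluon CustomOp interface linked above.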

In terms of async, is MXNet's CustomOp asynchronous or not?

VoVAllen commented 5 years ago

Is there any plan for a group_apply_edges API? I think this would be useful since we cannot do out-edge reduction at the current stage.

zheng-da commented 5 years ago

Previously, we discussed caching the results from the scheduler, which helps us avoid the expensive scheduling. I just realized that there is also a lot of data copying from CPU to GPU during computation, even though we have copied all the data in Frame to the GPU. The copies occur on Index (I suppose an Index is always created on the CPU first). Caching the scheduling result can also help avoid this CPU-to-GPU data copy.

jermainewang commented 5 years ago

@zheng-da , agree. This should be put on the roadmap.

> Is there any plan for a group_apply_edges API? I think this would be useful since we cannot do out-edge reduction at the current stage.

This is somewhat related to the sparse softmax proposed by @BarclayII. In my mind, there are two levels. The lower level is group_apply_edges, which can operate on both out-edges and in-edges. Built on top of it is the "sparse edge softmax" module that is widely used in many models. I agree this should be put on our roadmap.

BarclayII commented 5 years ago

I assume we also need a "sparse softmax" kernel (similar to TF's)? What I was thinking is to have group_apply accept a node UDF with incoming/outgoing edges (similar to the ones for reduce functions). sparse_softmax could be one such built-in UDF.
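For reference, a sparse edge softmax can be expressed with plain scatter/gather primitives; below is a minimal sketch in PyTorch (the function name and signature are just for illustration, not a proposed DGL API):

```python
import torch

def edge_softmax(scores, dst, num_nodes):
    """Softmax over incoming edges, grouped by destination node.

    scores: (E,) unnormalized score per edge
    dst:    (E,) destination node id per edge
    """
    # a real kernel would subtract the per-destination max first for
    # numerical stability; omitted here to keep the sketch short
    exp = scores.exp()
    # sum the exponentials of all edges that share a destination node
    denom = torch.zeros(num_nodes, dtype=exp.dtype, device=exp.device).index_add_(0, dst, exp)
    # normalize each edge by its destination's sum
    return exp / denom[dst]

# usage: alpha = edge_softmax(attention_logits, dst, num_nodes)
```

The group_apply_edges view would generalize this: any node UDF that looks at the incoming (or outgoing) edges of each node could be plugged in, with edge softmax as one built-in instance.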

zheng-da commented 5 years ago

We should add more MXNet tutorials on the website.

zheng-da commented 5 years ago

In terms of implementing the new operators, CustomOp in MXNet might not be a good option; it's usually very slow. For performance, it's still best to implement them directly in the backend frameworks. At least we can do that in MXNet; not sure about PyTorch.

jermainewang commented 5 years ago

Do you know why it is slow? It might be a good chance to improve that part. We also need to benchmark PyTorch's custom op to see how much overhead it has. We should try our best to have these operators in DGL; otherwise, it will be really difficult to maintain them in every framework.

zheng-da commented 5 years ago

It calls Python code from C code. Because the operator is implemented in Python, its expressiveness is limited; implementing sparse softmax efficiently in Python is hard.

eric-haibin-lin commented 5 years ago

For sparse softmax, I created a feature request in the MXNet repo: https://github.com/apache/incubator-mxnet/issues/12729

VoVAllen commented 5 years ago

Minor suggestion for project improvement: switch from nose to pytest for unit tests, mainly for two reasons:

AIwem commented 5 years ago

graph_nets?

jermainewang commented 5 years ago

> graph_nets?

@AIwem, could you elaborate?

AIwem commented 5 years ago

@jermainewang Have you looked at the ideas behind the graph_nets model? Some of their solutions seem to be good!

jermainewang commented 5 years ago

We did some investigation of graph_nets and found that DGL can cover all the models in graph_nets. Maybe we missed something; could you point it out?

ghost commented 5 years ago

Can node2vec with side information be trained on DGL? node2vec uses random walks to generate the node sequences.

In the future, will GraphRNN be added to DGL? It performs better on large datasets.

jermainewang commented 5 years ago

Hi @HuangZhanPeng, thank you for the suggestion. It would be great if you could help contribute node2vec and GraphRNN to DGL. From my understanding, the random walk can be done in networkx first and then used in DGL. GraphRNN is similar to the DGMG model (see our tutorials here) in that it is a generative model trained on a sequence of nodes/edges, so I expect there will be many shared building blocks between the two.
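For reference, a minimal (unbiased) random walk over a networkx graph might look like the sketch below; node2vec's p/q-biased transition probabilities are left out, and the helper name is just for illustration:

```python
import random
import networkx as nx

def random_walks(g, walk_length, walks_per_node, seed=0):
    """Uniform random walks starting from every node of a networkx graph."""
    rng = random.Random(seed)
    walks = []
    for start in g.nodes():
        for _ in range(walks_per_node):
            walk = [start]
            for _ in range(walk_length - 1):
                nbrs = list(g.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# e.g. walks = random_walks(nx.karate_club_graph(), walk_length=10, walks_per_node=5)
```

The resulting node sequences can then be fed to a skip-gram model, or the corresponding subgraphs loaded into DGL.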

ghost commented 5 years ago

@jermainewang Thank you for your response. In my actual work, node2vec's random walk on networkx is not feasible with large-scale data. If there is time, I really want to try to implement GraphRNN in DGL.

jermainewang commented 5 years ago

@HuangZhanPeng There is always time :). Please go ahead. If you encounter any problems during the implementation, feel free to raise questions on https://discuss.dgl.ai. The team is very responsive. As for the random walk, @BarclayII is surveying common random walk algorithms and we might include APIs for them in our next release.

jermainewang commented 5 years ago

Just updated the roadmap with a checklist. Our tentative date for this release is this month (02/28).

For all committers @zheng-da @szha @BarclayII @VoVAllen @ylfdq1118 @yzh119 @GaiYu0 @mufeili @aksnzhy @zzhang-cn @ZiyueHuang , please vote +1 if you agree with this plan.

BarclayII commented 5 years ago

I would rather reply with an emoji; a +1 reply would pollute the thread.

jermainewang commented 5 years ago

The release plan has passed the vote.

lgalke commented 5 years ago

May I kindly ask whether there is an updated tentative date for the 0.2 release? I'm desperately waiting for some features and unfortunately cannot build DGL from source on the server. Thanks for your efforts!

jermainewang commented 5 years ago

@lgalke Thanks for asking. Our release date has been delayed by a week due to some performance issues found recently. We are waiting for the final PR (#434) to be merged, so you can expect a new release in 2 days!! It's our first major release since going open source, so we are still adapting to the release process. Thank you for your patience.

jermainewang commented 5 years ago

v0.2 has just been officially released. Thanks everyone for the support!