dmlc / dgl

Python package built to ease deep learning on graph, on top of existing DL frameworks.
http://dgl.ai
Apache License 2.0

tutorials and models for point clouds dataset #719

Closed yangj257 closed 2 years ago

yangj257 commented 5 years ago

Point cloud processing is also an important application area for GCNs, but I didn't find anything related in the tutorials. Could you please add tutorials and examples about processing point cloud datasets (small datasets like ModelNet10 and S3DIS, and large datasets like Semantic3D)? If the tutorials included examples of converting a point cloud dataset into a graph dataset, and implemented models from papers like "Graph Attention Convolution for Point Cloud Segmentation" (paper link: https://engineering.purdue.edu/~jshan/publications/2018/Lei%20Wang%20Graph%20Attention%20Convolution%20for%20Point%20Cloud%20Segmentation%20CVPR2019.pdf), that would be even better.
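For context, the graph-construction step such a tutorial would need is conceptually simple: treat each point as a node and connect it to its k nearest neighbors. A minimal NumPy sketch of that idea (my own simplified illustration, not an existing DGL utility) could look like:

```python
import numpy as np

def knn_edges(points, k):
    """Build a k-NN edge list from an (N, 3) point cloud.

    Returns (src, dst) index arrays: dst[i] is a neighbor of src[i].
    Each point gets exactly k outgoing edges (self excluded).
    """
    n = points.shape[0]
    # Pairwise squared Euclidean distances, shape (N, N).
    diff = points[:, None, :] - points[None, :, :]
    dist2 = (diff ** 2).sum(-1)
    np.fill_diagonal(dist2, np.inf)          # exclude self-loops
    # Indices of the k smallest distances per row.
    nbrs = np.argsort(dist2, axis=1)[:, :k]
    src = np.repeat(np.arange(n), k)
    dst = nbrs.reshape(-1)
    return src, dst

# Example: 5 random points, 2 neighbors each -> 10 directed edges.
pts = np.random.rand(5, 3)
src, dst = knn_edges(pts, 2)
```

The resulting (src, dst) edge list could then be handed to a graph constructor; for large clouds one would of course use a k-d tree instead of this dense O(N²) distance matrix.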

EtoDemerzel0427 commented 5 years ago

Yes, I noticed that compared with PyTorch Geometric, DGL gives us fewer examples and tutorials. I think that to make an open-source library popular, you have to pay more attention to the documentation so as to smooth the learning curve for new users.

It would be more attractive if you could keep track of recent advances in GCN models and implement them in DGL (for example, read this or this survey, and implement all the models they mention).

To be frank, researchers will prefer a library with more built-in models and detailed documentation over one that is merely fast or powerful.

jermainewang commented 5 years ago

Hey there, let me paste my reply to a similar question on the discussion forum (https://discuss.dgl.ai/t/dgl-vs-pytorch-geometric/346/6).

Overall, I think both frameworks have their merits. PyG is very lightweight and has lots of off-the-shelf examples. In DGL, we put a lot of effort into covering a wider range of scenarios. Many of them are not necessarily GNNs but share the principles of structural/relational learning. Examples are CapsuleNet, Transformer and TreeLSTM. We also noticed that graph generative models are important and can be quite flexible (e.g., adding/removing one node/edge at a time), so we spent extra effort on their design (examples are DGMG and JTNN). Many of these have not been covered by other frameworks.

Real-world graphs can be gigantic, and training on large graphs requires special support. That's why we developed the fused message passing technique, and you can see how important it is in our blog. We have also added support for graph sampling and distributed training, with examples and tutorials ready. Please check them out.
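For readers new to the idea, message passing boils down to a gather-scatter pattern over the edge list; a naive NumPy sketch of one mean-aggregation step (purely illustrative, nothing like DGL's fused kernels) is:

```python
import numpy as np

def mean_aggregate(h, src, dst):
    """One naive message-passing step: each node averages its in-neighbors.

    h:   (N, D) node features
    src: (E,) edge source indices
    dst: (E,) edge destination indices
    """
    n = h.shape[0]
    out = np.zeros_like(h)
    deg = np.zeros(n)
    # "Messages" are the source features, accumulated at the destination.
    np.add.at(out, dst, h[src])
    np.add.at(deg, dst, 1.0)
    deg[deg == 0] = 1.0            # avoid division by zero for isolated nodes
    return out / deg[:, None]

# Tiny graph: edges 0->2 and 1->2; node 2 receives the mean of nodes 0 and 1.
h = np.array([[1.0], [3.0], [0.0]])
agg = mean_aggregate(h, np.array([0, 1]), np.array([2, 2]))
# agg[2] == [2.0]
```

Fusing means avoiding the materialization of the per-edge message tensor (`h[src]` above), which is exactly what blows up memory on gigantic graphs.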

The graph/structure deep learning community is still at a stage of rapid growth. Many new ideas are being developed, and at the same time many new users are right at the door. That's why we want our curated models/examples to be as complete as possible, ideally with accompanying tutorials and blog posts. Writing them is definitely time-consuming, leading to a long cycle for adding new ones. We are working hard on this, and any community help is extremely welcome.

Like you said, having more built-in models is important and we are well aware of that! But at the same time, we also need to think about how to keep the library ahead of the curve in the longer term. That said, DGL is NEVER only about being fast and powerful (that's a misunderstanding of our goal). Many of the features we are pushing (for example, heterographs) are meant to pave the road to harder and more realistic problems. At the current stage of GNN research, we feel that need is the more pressing one.

As said, there is plenty of room for DGL to improve. The next release will include many new features: heterogeneous graphs, pooling/unpooling modules, better data I/O, etc. If you have feature or model requests, please reply in the roadmap issue.

EtoDemerzel0427 commented 5 years ago

@jermainewang Thanks for your immediate reply! I want to make it clear that I really appreciate your work and like your designs. In fact, I am a new user who has just started using a framework to accelerate my research on GNNs, and I guess most users have similar purposes. I respect your ambitions for the future development of DGL, but today's users care about different things. Admittedly, writing documentation and tutorials is time-consuming, but it is still surprising that it is 2019 and your tutorials still have not covered models like GraphSAGE, one of the most influential spatial variants of GNNs and the first to shed light on inductive learning on graphs.
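For what it's worth, the core of GraphSAGE's mean-aggregator update is only a few lines; here is my own simplified NumPy sketch of the update rule (not DGL code, and omitting sampling and minibatching):

```python
import numpy as np

def sage_mean_layer(h, neighbors, W_self, W_neigh):
    """One simplified GraphSAGE layer with mean aggregation.

    h:         (N, D) node features
    neighbors: list of neighbor-index lists, one per node
    W_self:    (D, D') weight applied to the node's own features
    W_neigh:   (D, D') weight applied to the aggregated neighbor features
    """
    out = []
    for i, nbrs in enumerate(neighbors):
        agg = h[nbrs].mean(axis=0) if nbrs else np.zeros(h.shape[1])
        out.append(h[i] @ W_self + agg @ W_neigh)
    z = np.maximum(np.stack(out), 0.0)          # ReLU
    # L2-normalize each row, as in the original paper.
    return z / np.maximum(np.linalg.norm(z, axis=1, keepdims=True), 1e-12)

# Toy example: 3 nodes, identity weights.
h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = sage_mean_layer(h, [[1, 2], [0], []], np.eye(2), np.eye(2))
```

Because the update depends only on a node's local neighborhood, the same learned weights apply to unseen nodes at test time, which is what makes GraphSAGE inductive.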

I'm sorry for the possible misunderstanding; like I said, I am new to DGL. I am just voicing my confusion as a user, and I said so because I hope DGL can be even better.

jermainewang commented 5 years ago

Hey @EtoDemerzel0427, please don't apologize. Your suggestions are very helpful, and that's exactly what we need to improve. I've added a GraphSAGE model/tutorial request to our v0.4 roadmap. If you have other requests, please don't hesitate to ask in that thread. Thanks again for caring about DGL!

yangj257 commented 5 years ago

Hi @jermainewang, I would like to ask: will point cloud processing tutorials and related models be added in v0.4 (models from papers like "Graph Attention Convolution for Point Cloud Segmentation")? And when will v0.4 be released? Could you please give an approximate time?

jermainewang commented 5 years ago

Yes, we'd love to. In fact, @mufeili is looking into them. Would you please also help sort out a list of the models you feel are most representative? Thank you.

Our initial release goal is by the end of August. We will start a poll in the roadmap thread to get feedback from the team.

yangj257 commented 5 years ago

Hey @jermainewang, thanks for your immediate reply! In recent years, the most influential work in point cloud research (spatial domain) mainly includes:

  1. EdgeConv. Paper: Dynamic Graph CNN for Learning on Point Clouds.
  2. GACNet. Paper: Graph Attention Convolution for Point Cloud Segmentation.
  3. Superpoint Graph. Paper: Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs.
  4. DeepGCNs. Paper: Can GCNs Go as Deep as CNNs?

These papers can be found by searching for their titles. I am also a novice, so the list above may not be comprehensive; anyone is welcome to add to it.

I think it would take too much time to add all the models in v0.4, so you could start with the most basic ones: first, tutorials or examples about converting a point cloud dataset into a graph dataset, and then one or two classic and useful models, such as GACNet and EdgeConv. The rest could be added in v0.5 or v0.6.
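To give a sense of scope, EdgeConv itself is compact: for each edge it applies a shared function to the pair (h_i, h_j - h_i) and max-pools over the neighbors. A toy NumPy sketch (my own illustration, using a single linear map where the paper uses an MLP) is:

```python
import numpy as np

def edge_conv(h, neighbors, W):
    """Simplified EdgeConv update: h_i' = max_j ReLU([h_i, h_j - h_i] @ W).

    h:         (N, D) point features
    neighbors: list of neighbor-index lists (e.g. from a k-NN graph)
    W:         (2D, D') weight standing in for the paper's edge MLP
    """
    out = np.zeros((h.shape[0], W.shape[1]))
    for i, nbrs in enumerate(neighbors):
        if not nbrs:
            continue
        # Edge features: central point concatenated with relative offsets.
        e = np.concatenate([np.repeat(h[i][None, :], len(nbrs), 0),
                            h[nbrs] - h[i]], axis=1)
        out[i] = np.maximum(e @ W, 0.0).max(axis=0)   # max over neighbors
    return out

# Toy example: with W summing the two halves, e @ W reduces to h[j],
# so each node takes the elementwise max over its neighbors' features.
h = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
W = np.vstack([np.eye(2), np.eye(2)])
out = edge_conv(h, [[1, 2], [0], [0]], W)
# out[0] == [1.0, 1.0]
```

The "dynamic" part of DGCNN is simply rebuilding the k-NN graph in feature space after each layer, so the conversion example and the model example fit together naturally.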

yzh119 commented 5 years ago

@EtoDemerzel0427 Thanks for your comments. We will soon include a bunch of GNN models and wrap them as DGL modules: #748. Would you like to give us some suggestions?

yangj257 commented 4 years ago

Hey @jermainewang, in previous releases you and your team implemented EdgeConv, which is great! However, I still find that there are too few point-cloud-related models in DGL. Could you please add some more? For example, the recent paper "GAPNet: Graph Attention based Point Neural Network for Exploiting Local Feature of Point Cloud" (paper link: https://arxiv.org/pdf/1905.08705.pdf). Thank you very much!