pyg-team / pytorch_geometric

Graph Neural Network Library for PyTorch
https://pyg.org
MIT License

Python-based library would be too slow for cases of very large graphs #7462

Open yurivict opened 1 year ago

yurivict commented 1 year ago

🐛 Describe the bug

I have a set of extremely large graphs on which I would like to train a NN to classify nodes.

The training procedure would repeatedly run slow Python code to evaluate the "aggregate" and "combine" functions, as well as "forward" and other such functions.

Python-based tooling seems fine for regular NN training, where simple data tensors are just handed off to the C++ code in TF or PyTorch, but graph training would suffer badly from slow Python code, particularly with extremely large graphs and long training runs.

Do you have plans to implement the same algorithms in C++? For example, as a set of classes for the mlpack framework that do the same?

Environment

rusty1s commented 1 year ago

I am not sure I understand. Which slow Python code are you referring to? In the end, all important building blocks leverage highly optimized C++/CUDA code, and the Python interpreter just calls into these routines. Since CUDA kernel launches are asynchronous, Python should add essentially zero overhead here.
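
For instance, a standard PyG layer call is only a thin Python wrapper. A minimal sketch with random data, assuming `torch_geometric` is installed:

```python
import torch
from torch_geometric.nn import GCNConv

x = torch.randn(1000, 16)                       # node features
edge_index = torch.randint(0, 1000, (2, 5000))  # random edges in COO format

conv = GCNConv(16, 32)
out = conv(x, edge_index)   # Python only dispatches; the sparse
print(out.shape)            # aggregation runs in C++/CUDA kernels
```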

yurivict commented 1 year ago

If a user wants to experiment with custom "aggregate" and "combine" functions, those functions have to be defined in Python. They would then be executed in Python once per node evaluation, which would be very slow.
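
The pattern I am worried about is roughly a per-node Python loop. A hypothetical sketch of such a hand-written mean aggregation (the function name is mine, just for illustration):

```python
import torch

def naive_mean_aggregate(x, edge_index):
    # x: [num_nodes, F]; edge_index: [2, num_edges] as (source, target)
    out = torch.zeros_like(x)
    for v in range(x.size(0)):                     # interpreted loop over nodes
        nbrs = edge_index[0, edge_index[1] == v]   # incoming neighbors of v
        if nbrs.numel() > 0:
            out[v] = x[nbrs].mean(dim=0)           # "aggregate" step in Python
    return out
```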

rusty1s commented 1 year ago

I assume this depends on how you actually define aggregate and combine. In many cases, these can be efficiently implemented via sparse aggregations or padding.
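
As a sketch of what "sparse aggregations" means here, the same per-node mean can be expressed as a single scatter, so the Python loop disappears entirely. This assumes `torch_geometric.utils.scatter`, available in recent PyG versions (`torch_scatter.scatter` has the same interface in older ones):

```python
import torch
from torch_geometric.utils import scatter

def sparse_mean_aggregate(x, edge_index):
    src, dst = edge_index                 # gather all neighbor features at once ...
    return scatter(x[src], dst, dim=0,    # ... and reduce them per target node
                   dim_size=x.size(0), reduce='mean')
```

One kernel handles all nodes at once, regardless of graph size, instead of one Python iteration per node.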

LukeLIN-web commented 1 year ago

If you define aggregate and combine with PyTorch tensor operations, they will leverage the same highly optimized C++/CUDA code.
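
A minimal sketch of such a custom layer, built on PyG's standard `MessagePassing` base class (the class name `MeanConv` is made up for this example): `message` is vectorized over all edges, and the built-in `aggr='mean'` reduction runs as a fused scatter kernel:

```python
import torch
from torch_geometric.nn import MessagePassing

class MeanConv(MessagePassing):
    def __init__(self, in_channels, out_channels):
        super().__init__(aggr='mean')       # fused scatter-mean aggregation
        self.lin = torch.nn.Linear(in_channels, out_channels)

    def forward(self, x, edge_index):
        # propagate() invokes message() once over all edges, then the
        # C++/CUDA aggregation; no per-node Python loop is involved
        return self.propagate(edge_index, x=self.lin(x))

    def message(self, x_j):
        return x_j                          # vectorized over all edges at once
```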