graphdeeplearning / graphtransformer

Graph Transformer Architecture. Source code for "A Generalization of Transformer Networks to Graphs", DLG-AAAI'21.
https://arxiv.org/abs/2012.09699
MIT License

Scaling of Laplacian pre-computation #8

Closed JellePiepenbrock closed 3 years ago

JellePiepenbrock commented 3 years ago

First, I would like to say that I think there are some very good ideas in the paper. Nice work! I have some questions though:

Could you tell me what the largest graph is that you've used this approach on? Do you have any recommendations for Laplacian eigenvector encodings for large graphs? The way it's implemented now, using np.linalg.eig together with the .toarray() call, seems to lose the sparsity of the Laplacian and could cause problems for large graphs.
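For reference, a minimal sketch of the dense pre-computation pattern being discussed (this is an illustrative paraphrase, not the repo's exact code; `laplacian_pe_dense`, `A`, and `pos_enc_dim` are names assumed here). Densifying the Laplacian and calling `np.linalg.eig` costs O(n^2) memory and roughly O(n^3) time, which is the scaling concern:

```python
import numpy as np
import scipy.sparse as sp

def laplacian_pe_dense(A, pos_enc_dim=8):
    """Illustrative dense Laplacian PE pre-computation (hypothetical sketch).

    A : scipy.sparse adjacency matrix of shape (n, n).
    """
    n = A.shape[0]
    deg = np.asarray(A.sum(axis=1)).flatten().clip(1)
    D_inv_sqrt = sp.diags(deg ** -0.5)
    # Symmetrically normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = sp.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt

    # Densify and take the full eigendecomposition -- this is the step
    # that discards sparsity and dominates cost on large graphs.
    eigval, eigvec = np.linalg.eig(L.toarray())
    idx = eigval.argsort()                 # sort eigenvalues ascending
    eigvec = np.real(eigvec[:, idx])
    # Drop the trivial first eigenvector, keep the next pos_enc_dim.
    return eigvec[:, 1:pos_enc_dim + 1]
```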

vijaydwivedi75 commented 3 years ago

Hi @JellePiepenbrock, The largest graph size (i.e. number of nodes in a single graph) considered in our experiments is 190 (for SBM). Thank you for raising the issue of scaling the pre-computation -- something we shall focus on in future work.
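One possible direction for larger graphs, not part of the repo and offered only as a hedged sketch, is to compute just the k smallest eigenpairs with a sparse iterative solver (`scipy.sparse.linalg.eigsh`) instead of a full dense eigendecomposition, so the Laplacian stays sparse throughout (`laplacian_pe_sparse` is a hypothetical name):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def laplacian_pe_sparse(A, pos_enc_dim=8):
    """Hypothetical sparse variant: only the (pos_enc_dim + 1) smallest
    eigenpairs of the normalized Laplacian, keeping A sparse throughout."""
    n = A.shape[0]
    deg = np.asarray(A.sum(axis=1)).flatten().clip(1)
    D_inv_sqrt = sp.diags(deg ** -0.5)
    L = sp.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt

    # which='SM' targets the smallest eigenvalues; convergence can be slow,
    # so a loose tolerance (or shift-invert) may be needed in practice.
    eigval, eigvec = eigsh(L, k=pos_enc_dim + 1, which='SM', tol=1e-2)
    idx = eigval.argsort()
    eigvec = eigvec[:, idx]
    # Drop the trivial eigenvector associated with the zero eigenvalue.
    return eigvec[:, 1:pos_enc_dim + 1]
```

Whether the resulting low-frequency eigenvectors behave the same as those from the full dense decomposition would need to be checked empirically.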