graphdeeplearning / graphtransformer

Graph Transformer Architecture. Source code for "A Generalization of Transformer Networks to Graphs", DLG-AAAI'21.
https://arxiv.org/abs/2012.09699
MIT License

Sparse graph and full graph #9

Closed: immortal13 closed this issue 3 years ago

immortal13 commented 3 years ago

Thanks for the innovative work! Could you please tell me how we can get a full graph? Does "full graph" mean the full attention map? Does "sparse graph" mean that we only retain the values of the immediate neighbor nodes from the full graph?

vijaydwivedi75 commented 3 years ago

Hi @immortal13, in the sparse graph experiments, the original graph is used, which means a node attends only to its local neighbors. In the full graph experiments, a new graph is created with each node connected to every other node, which means a node attends to all other nodes in the graph.
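
In other words, the two settings differ only in which node pairs the attention softmax covers. As a rough, hypothetical illustration (the repository itself computes attention via sparse message passing over the graph's edges, not dense masks), the two cases can be viewed like this, assuming `adj` includes self-loops so no row is fully masked:

```python
import torch

def sparse_graph_attention(scores: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    # scores: (n, n) raw attention scores; adj: (n, n) adjacency of the input graph.
    # Non-neighbor pairs are masked out, so each node attends to its neighbors only.
    masked = scores.masked_fill(adj == 0, float("-inf"))
    return torch.softmax(masked, dim=-1)

def full_graph_attention(scores: torch.Tensor) -> torch.Tensor:
    # No mask: every node attends to every other node, as in a standard Transformer.
    return torch.softmax(scores, dim=-1)
```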

The code that creates the full graph, for example on the SBM datasets, is here: https://github.com/graphdeeplearning/graphtransformer/blob/011da218f89a7c55a342ad0a4b8440ca0f2223cc/data/SBMs.py#L123-L142
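
For reference, here is a minimal sketch of the idea behind that function (the name `make_full_graph_sketch` is hypothetical, and this is not the repository's exact code; the linked `data/SBMs.py` lines are the authoritative version and handle additional details such as edge features). It assumes a DGL graph with a `feat` node feature, as used in the repo's datasets:

```python
import torch
import dgl

def make_full_graph_sketch(g: dgl.DGLGraph) -> dgl.DGLGraph:
    """Sketch: connect every node to every other node, keeping node features."""
    n = g.number_of_nodes()
    # All (src, dst) pairs, i.e. a complete directed graph including self-loops
    # (the actual repository code may treat self-loops differently).
    src = torch.arange(n).repeat_interleave(n)
    dst = torch.arange(n).repeat(n)
    full_g = dgl.graph((src, dst), num_nodes=n)
    # Node features carry over unchanged; only the edge set, and hence the
    # attention scope, differs from the sparse graph.
    full_g.ndata["feat"] = g.ndata["feat"]
    return full_g
```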

immortal13 commented 3 years ago

Thanks for your prompt reply!