graphdeeplearning / graphtransformer

Graph Transformer Architecture. Source code for "A Generalization of Transformer Networks to Graphs", DLG-AAAI'21.
https://arxiv.org/abs/2012.09699
MIT License

laplacian positional encoding #12

Closed aristo-panhu closed 3 years ago

aristo-panhu commented 3 years ago

How should one handle the Laplacian positional encoding of a directed graph? The adjacency matrix of a directed graph is not a symmetric matrix.

vijaydwivedi75 commented 3 years ago

Hi @aristo-panhu, can you please clarify the query a little? In general, eigenvectors of other matrices derived from the graph structure (the Laplacian is one example) can also be used to compute PEs.
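For reference, the Laplacian PE described in the paper takes the eigenvectors of the symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2} belonging to the k smallest non-trivial eigenvalues. A minimal NumPy sketch of that computation for an undirected graph (function and variable names here are illustrative, not the repository's API):

```python
import numpy as np

def lap_pe(A, k):
    """k smallest non-trivial eigenvectors of the symmetric
    normalized Laplacian, used as positional encodings."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    # L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # eigh assumes a symmetric matrix, which holds for undirected graphs
    eigvals, eigvecs = np.linalg.eigh(L)
    idx = np.argsort(eigvals)           # ascending eigenvalues
    return eigvecs[:, idx[1:k + 1]]     # drop the trivial constant eigenvector

# 4-node undirected cycle graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pe = lap_pe(A, 2)
print(pe.shape)  # (4, 2): one 2-dim PE per node
```

Note that `eigh` is only valid because L is symmetric here; for a directed graph A is not symmetric, which is exactly what this issue is about.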

aristo-panhu commented 3 years ago

I used your method to encode a Directed Acyclic Graph (DAG), whose adjacency matrix is nilpotent (a strictly upper-triangular matrix with all diagonal elements equal to 0). When I compute the Laplacian eigendecomposition with your method, I find that the eigenvalues are all 1, and only the first entry of every eigenvector is non-zero. Such eigenvectors therefore cannot carry any structural information about the graph. How should this situation be handled?
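The reported degeneracy can be reproduced directly: if A is strictly upper triangular, then D^{-1/2} A D^{-1/2} is too, so L = I - D^{-1/2} A D^{-1/2} is upper triangular with ones on the diagonal and every eigenvalue equals 1. A sketch below shows this, together with one common workaround, symmetrizing the adjacency (treating edges as undirected) before building the Laplacian. This workaround is an assumption on my part, not a method endorsed by the authors, and it does discard edge direction:

```python
import numpy as np

def sym_norm_laplacian(A):
    """Symmetric normalized Laplacian I - D^{-1/2} A D^{-1/2}."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    return np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Strictly upper-triangular adjacency of a small DAG: 0 -> 1 -> 2, 0 -> 2
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)

# L is upper triangular with unit diagonal, so the spectrum is degenerate
L = sym_norm_laplacian(A)
print(np.linalg.eigvals(L))  # all 1 (up to numerical precision)

# Workaround (assumed, not from the repo): symmetrize A first
A_sym = np.clip(A + A.T, 0, 1)
L_sym = sym_norm_laplacian(A_sym)
print(np.sort(np.linalg.eigvals(L_sym).real))  # informative, non-constant spectrum
```

The symmetrized variant recovers meaningful eigenvectors at the cost of losing directionality; approaches that keep direction (e.g. encodings built from a magnetic Laplacian or from topological ordering of the DAG) exist in later literature but are outside the scope of this repository.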