graphdeeplearning / graphtransformer

Graph Transformer Architecture. Source code for "A Generalization of Transformer Networks to Graphs", DLG-AAAI'21.
https://arxiv.org/abs/2012.09699
MIT License

Details of the Laplacian encoding #3

Closed: bfabiandev closed this issue 3 years ago

bfabiandev commented 3 years ago

Hi,

Neat work! I was looking at the implementation of the Laplacian positional encoding, and a couple of things weren't clear to me.

import numpy as np
import scipy.sparse as sp
import torch
import dgl

def laplacian_positional_encoding(g, pos_enc_dim):

    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    A = g.adjacency_matrix_scipy(return_edge_ids=False).astype(float)
    N = sp.diags(dgl.backend.asnumpy(g.in_degrees()).clip(1) ** -0.5, dtype=float)
    L = sp.eye(g.number_of_nodes()) - N * A * N

    # Eigenvectors with numpy, sorted by increasing eigenvalue
    EigVal, EigVec = np.linalg.eig(L.toarray())
    idx = EigVal.argsort()  # increasing order
    EigVal, EigVec = EigVal[idx], np.real(EigVec[:, idx])
    g.ndata['lap_pos_enc'] = torch.from_numpy(EigVec[:, 1:pos_enc_dim + 1]).float()

    return g

Why do you drop the first eigenvector in the last line (i.e. why do you use indices 1:pos_enc_dim+1)? Does this come from the assumption that the first eigenvalue will be very close to 0?

    g.ndata['lap_pos_enc'] = torch.from_numpy(EigVec[:,1:pos_enc_dim+1]).float() 

Another quick Q: could you explain why you need to use np.real in the line below? Are there any cases where we would get complex numbers here?

    EigVal, EigVec = EigVal[idx], np.real(EigVec[:,idx])

Thanks in advance!

vijaydwivedi75 commented 3 years ago

Hi @bfabiandev, thanks for your interest and query.

For the first part, yes. The idea is to use the k smallest non-trivial eigenvectors: for a connected graph the first eigenvalue of the normalized Laplacian is 0 and its eigenvector is the trivial one, so it is dropped.
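As a self-contained illustration (my own sketch on a toy path graph, not code from this repo): the smallest eigenvalue comes out as 0 up to floating point, and its eigenvector is proportional to the square root of the node degrees, so it carries no positional information.

    import numpy as np
    import scipy.sparse as sp

    # Toy undirected path graph on 4 nodes (illustrative only)
    A = sp.csr_matrix(np.array([[0., 1., 0., 0.],
                                [1., 0., 1., 0.],
                                [0., 1., 0., 1.],
                                [0., 0., 1., 0.]]))
    deg = np.asarray(A.sum(axis=1)).flatten()
    N = sp.diags(deg.clip(1) ** -0.5)
    L = sp.eye(4) - N @ A @ N

    EigVal, EigVec = np.linalg.eig(L.toarray())
    idx = EigVal.argsort()
    EigVal, EigVec = np.real(EigVal[idx]), np.real(EigVec[:, idx])

    print(EigVal[0])      # ~0 up to floating point: the trivial eigenvalue
    print(EigVec[:, 0])   # proportional to sqrt(deg) up to sign, no positional signal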

For the second question, there can be outputs with complex eigenvectors, for example for a random graph:

[image: eigendecomposition output showing complex eigenvectors for a random graph]
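To make this concrete, here is a minimal sketch (mine, using a hypothetical random directed graph) of how complex eigenpairs can show up: since the code uses in-degrees, an asymmetric adjacency matrix gives an asymmetric L, and np.linalg.eig on an asymmetric real matrix generally returns complex conjugate eigenpairs, hence the np.real.

    import numpy as np

    rng = np.random.default_rng(0)
    # Random directed graph: A is generally asymmetric, and then so is L
    A = (rng.random((10, 10)) < 0.3).astype(float)
    np.fill_diagonal(A, 0.)
    deg = A.sum(axis=0).clip(1)        # column sums, standing in for in-degrees
    N = np.diag(deg ** -0.5)
    L = np.eye(10) - N @ A @ N

    EigVal, EigVec = np.linalg.eig(L)
    print(EigVal.dtype, EigVec.dtype)  # typically complex128 when L is asymmetric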

P.S. Code snippet at this link.

Vijay