JD-AI-Research-Silicon-Valley / SACN

End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion
MIT License

Is there any result about using GCN as encoder? #6

Closed 1049451037 closed 5 years ago

1049451037 commented 5 years ago

I think this result is necessary to prove that SACN not only works, but also works better than a structure-unaware GCN.

chaoshangcs commented 5 years ago

Hi, thank you so much for your advice. The basic idea of our paper is "X-GCN + ConvTransE", where "X-GCN" could be a traditional GCN, a weighted GCN, a graph attention network, or something else. In our code, we only provide the weighted GCN; I believe the other variants are worth trying. If you want to use the traditional GCN, one way is as follows:

  1. In "main.py", where the sparse matrix is created between row 130 and row 152, replace row 148 with: `data = data + [1 for i in range(num_entities)]`
  2. Replace the forward function in class `GraphConvolution` with:

```python
A = torch.sparse_coo_tensor(adj[0], adj[1], torch.Size([adj[2], adj[2]]), requires_grad=True)
A = A + A.transpose(0, 1)
support = torch.mm(input, self.weight)
output = torch.sparse.mm(A, support)
...
```

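A minimal sketch of how the two steps above fit together, as a standalone layer. This is a hypothetical reconstruction, not the repo's actual class: the packing `adj = (indices, values, num_entities)` is assumed from the snippet above, and `requires_grad` on the adjacency is dropped since a plain GCN treats it as fixed.

```python
import torch
import torch.nn as nn

class GraphConvolution(nn.Module):
    """Plain (unweighted) GCN layer: output = A_sym @ (X @ W).

    Hypothetical standalone sketch. `adj` is assumed to be packed as
    (indices [2, nnz], values [nnz], num_entities), matching the
    sparse-matrix construction described in step 1.
    """
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_features, out_features))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x, adj):
        # Build the sparse adjacency and symmetrize it (step 2).
        A = torch.sparse_coo_tensor(adj[0], adj[1],
                                    torch.Size([adj[2], adj[2]]))
        A = (A + A.transpose(0, 1)).coalesce()
        support = torch.mm(x, self.weight)   # X @ W
        return torch.sparse.mm(A, support)   # A @ (X @ W)

# Toy usage: 4 entities with self-loop entries of value 1,
# as produced by the `data = data + [1 for i in ...]` change in step 1.
num_entities = 4
idx = torch.tensor([[0, 1, 2, 3], [0, 1, 2, 3]])
val = torch.ones(4)
layer = GraphConvolution(8, 16)
x = torch.randn(num_entities, 8)
out = layer(x, (idx, val, num_entities))
print(out.shape)  # torch.Size([4, 16])
```

Note that symmetrizing with `A + A.transpose(0, 1)` doubles the self-loop weights after coalescing; whether to halve the diagonal (or apply the usual D^{-1/2} A D^{-1/2} normalization) is a design choice the snippet above leaves open.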
Thanks for your message! If you have any questions, please feel free to email me. Thanks!

1049451037 commented 5 years ago

@chaoshangcs Thank you for your reply!