DeepGraphLearning / KnowledgeGraphEmbedding

MIT License

Loss function of TransE and RotatE in the code #26

Closed ngl567 closed 4 years ago

ngl567 commented 4 years ago

Thank you for your excellent research and code. However, I am confused about why you use the same loss function for both TransE and RotatE. According to their definitions in the original papers, the loss functions of TransE and RotatE are different. I hope you can explain this. Thank you.

Edward-Sun commented 4 years ago

Hi Guanglin,

We use the same loss function (i.e., the self-adversarial negative sampling loss) for both TransE and RotatE because we believe the proposed loss is a general one that can be applied to any translation-based KGE model. As you can see, and can reproduce yourself, the self-adversarial loss improves the performance of TransE to 0.332, which is much higher than previously reported results.
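For reference, this is the self-adversarial negative sampling loss as I read it from the RotatE paper (here $\gamma$ is the fixed margin, $\alpha$ the sampling temperature, $d_r$ the model's distance function, and $(h'_i, r, t'_i)$ the $i$-th negative sample):

```latex
L = -\log \sigma\bigl(\gamma - d_r(\mathbf{h}, \mathbf{t})\bigr)
    - \sum_{i=1}^{n} p(h'_i, r, t'_i)\,
      \log \sigma\bigl(d_r(\mathbf{h}'_i, \mathbf{t}'_i) - \gamma\bigr),
\qquad
p(h'_j, r, t'_j) = \frac{\exp \alpha f_r(\mathbf{h}'_j, \mathbf{t}'_j)}
                        {\sum_i \exp \alpha f_r(\mathbf{h}'_i, \mathbf{t}'_i)}
```

where $f_r = \gamma - d_r$ is the score, and the weights $p(\cdot)$ are treated as constants (no gradient flows through them). Nothing model-specific appears here, which is why the same loss applies to TransE and RotatE alike.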

Besides, since there is not much difference between the negative sampling loss and the margin-based ranking criterion for TransE (see Table 13 in our paper), we simply use the negative sampling loss for TransE as well.
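Concretely, here is a minimal PyTorch sketch of the self-adversarial negative sampling loss as I understand it from the paper. The function name, tensor shapes, and default values of `gamma` and `alpha` are my own assumptions for illustration, not the repo's actual API:

```python
import torch
import torch.nn.functional as F

def self_adversarial_loss(pos_dist, neg_dist, gamma=12.0, alpha=1.0):
    """Self-adversarial negative sampling loss (sketch, not the repo's code).

    pos_dist: (batch,) distances d_r(h, t) for positive triples
    neg_dist: (batch, num_neg) distances for the negative samples
    gamma:    fixed margin; alpha: self-adversarial sampling temperature
    """
    # Positive term: -log sigmoid(gamma - d(h, t))
    pos_loss = -F.logsigmoid(gamma - pos_dist)

    # Negative-sample weights p(h', r, t'): softmax over alpha * score,
    # where score = gamma - d (gamma cancels inside the softmax).
    # detach() so no gradient flows through the sampling distribution.
    neg_weight = F.softmax(-alpha * neg_dist, dim=1).detach()

    # Weighted negative term: -sum_i p_i * log sigmoid(d_i - gamma)
    neg_loss = -(neg_weight * F.logsigmoid(neg_dist - gamma)).sum(dim=1)

    return (pos_loss + neg_loss).mean()
```

Note that the only model-dependent input is the distance function, which is what makes the loss applicable to TransE and RotatE alike; setting `alpha=0` recovers uniform negative sampling.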

ngl567 commented 4 years ago

Thanks a lot for your careful explanation. I wish you continued success.