awslabs / dgl-ke

High performance, easy-to-use, and scalable package for learning large-scale knowledge graph embeddings.
https://dglke.dgl.ai/doc/
Apache License 2.0

The loss function issue which is not the same as TransE #109

Closed Zarca closed 4 years ago

Zarca commented 4 years ago

Hi, these days I have been using the out-of-the-box TransE algorithm that comes with DGL-KE. Thanks for your excellent and kind work! However, I ran into a question about the loss function while tracing through the source code, in `dglke/models/general_models.py`, in the method `forward`, lines 370 to 399, as shown in the figures below:

[Screenshots: the `forward` method in general_models.py, lines 370-399, showing the implemented loss]

It seems that it is NOT consistent with the loss function described in the paper linked from dgl-ke's official GitHub homepage, as shown in the figure below:

[Screenshot: loss function from the DGL-KE paper]

In that paper, the loss you (the authors) declare to use has just these two forms: [Screenshot: the two loss forms from the paper]

but that is not the same as the implementation I mentioned above in dgl-ke's source code. So I'm wondering: why does the source code of general_models.py change the form of the loss? Does it give any improvement over the original two loss functions in your paper?
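For context on the comparison being made, the original TransE objective (Bordes et al., 2013) is a margin-based ranking loss over a positive triple and a corrupted negative. A minimal NumPy sketch, with illustrative embeddings and an assumed margin value (not DGL-KE's defaults):

```python
import numpy as np

# Original TransE objective (Bordes et al., 2013):
#   L = sum over pairs of max(0, gamma + d(h + r, t) - d(h' + r, t'))
rng = np.random.default_rng(0)
dim = 8
h, r, t = rng.normal(size=(3, dim))   # head, relation, tail embeddings
t_neg = rng.normal(size=dim)          # corrupted tail (negative sample)

def dist(h, r, t):
    # TransE dissimilarity: L2 norm of (h + r - t)
    return np.linalg.norm(h + r - t)

gamma = 1.0  # margin hyperparameter (assumed value)
loss = max(0.0, gamma + dist(h, r, t) - dist(h, r, t_neg))
```

The hinge loss only pushes the positive distance below the negative distance by the margin; it says nothing about how negatives are sampled or weighted, which is where the implementation in general_models.py differs.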

Looking forward to your reply.

classicsong commented 4 years ago

The adversarial sampling is described in Section 3.3 (Negative sampling) of the paper. It is not uniform sampling.
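To illustrate the difference the answer points at: self-adversarial negative sampling (introduced in the RotatE paper, which DGL-KE builds on) reweights the k negatives by a softmax over their scores instead of weighting each by 1/k. A rough NumPy sketch; `gamma`, `alpha`, and the distances below are illustrative assumptions, not DGL-KE defaults:

```python
import numpy as np

def logsigmoid(x):
    # numerically stable log(sigmoid(x)) = -log(1 + exp(-x))
    return -np.logaddexp(0.0, -x)

def self_adversarial_loss(pos_dist, neg_dists, gamma=12.0, alpha=1.0):
    """Sketch of a self-adversarial negative sampling loss.

    Instead of weighting each of the k negatives uniformly by 1/k,
    harder negatives (smaller distance, i.e. higher score) receive
    larger softmax weights, so they dominate the gradient.
    """
    # adversarial weights: softmax over alpha * score, with score = -distance
    logits = alpha * (-neg_dists)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    pos_term = -logsigmoid(gamma - pos_dist)
    neg_term = -(w * logsigmoid(neg_dists - gamma)).sum()
    return pos_term + neg_term

pos_dist = 1.0
neg_dists = np.array([3.0, 5.0, 0.5])   # one "hard" negative at distance 0.5
loss = self_adversarial_loss(pos_dist, neg_dists)
```

Setting `alpha=0.0` collapses the weights back to uniform 1/k, recovering a plain logsigmoid loss; the nonzero temperature is what makes the implemented loss look different from the simpler forms shown in the paper's figures.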