DeepGraphLearning / KnowledgeGraphEmbedding

MIT License

Scoring functions and Adversarial Loss Parameters #10

Closed sumitpai closed 5 years ago

sumitpai commented 5 years ago

I have three queries related to the loss and scoring functions:

a. Why is the margin part of the scoring function for TransE and RotatE? Doesn't it actually change the scores during prediction? (E.g., if the margin is 1, then (1 - score) differs from score, so the two approaches could yield totally different ranks during prediction.)

b. The margin is not directly part of the adversarial loss; it is part of the scoring function, as described above. However, this is not the case for ComplEx and DistMult in this implementation. Is that equivalent to setting the hyperparameter margin = 0 in the loss function for these two models? What does that mean, given that you are using a margin-based loss for optimisation?

c. Have you tried RotatE with other margin-based losses? How does it perform compared to ComplEx/HolE?

Edward-Sun commented 5 years ago

a. This is part of how we implement the negative sampling loss. I agree that moving the margin into the loss function is also a valid approach and would make the code clearer. It won't change the prediction results, because the margin is fixed for a given model and prediction is based on ranking.
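To see why a fixed margin cannot change the ranks: subtracting every candidate's distance from the same constant is a uniform, order-reversing transform, so ranking by descending (margin - distance) is identical to ranking by ascending distance. A minimal sketch (entity distances and the margin value here are made up for illustration):

```python
import numpy as np

# Hypothetical distances of five candidate tail entities for one query (h, r, ?).
# Smaller distance = better match under a distance-based model like TransE/RotatE.
distances = np.array([3.2, 0.7, 1.9, 4.5, 2.4])

gamma = 1.0  # fixed margin folded into the scoring function

# Ranking by ascending distance...
ranks_by_distance = np.argsort(distances)

# ...equals ranking by descending (gamma - distance): the constant shift
# and sign flip reorder nothing relative to each other.
scores = gamma - distances
ranks_by_score = np.argsort(-scores)

assert np.array_equal(ranks_by_distance, ranks_by_score)
```

The same argument holds for any choice of gamma, which is why the margin's placement (scoring function vs. loss) is a code-organization question rather than a modeling one.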

b. Yes, for semantic matching based models the margin is 0. Unlike distance-based models, for these semantic matching models the margin doesn't work: their scores can range from -inf to inf, so the value of the margin doesn't matter.
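A sketch of the self-adversarial negative sampling loss may make this concrete. Here `pos_score` and `neg_scores` are model scores f(h, r, t): for TransE/RotatE they already include the margin (f = gamma - distance), while for DistMult/ComplEx they are the raw (unbounded) scores, which is equivalent to gamma = 0. The function below is an illustrative numpy reimplementation under those assumptions, not the repo's code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_adversarial_loss(pos_score, neg_scores, alpha=1.0):
    """Self-adversarial negative sampling loss (RotatE paper, Eq. 5 style).

    alpha is the adversarial temperature; harder negatives (higher score)
    receive larger weight. The weights are treated as constants, i.e. no
    gradient flows through them during training.
    """
    weights = np.exp(alpha * neg_scores)
    weights /= weights.sum()

    pos_term = -np.log(sigmoid(pos_score))                    # pull positive up
    neg_term = -(weights * np.log(sigmoid(-neg_scores))).sum()  # push negatives down
    return pos_term + neg_term
```

Because the loss only applies sigmoid/log to the scores, shifting every score by a constant margin rescales nothing structurally for unbounded semantic matching scores, whereas for bounded distance scores the margin sets the operating point of the sigmoid.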

c. I have tried the margin-based ranking criterion loss and reported it in Table 13 of our paper. I haven't tried other margin-based losses.
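For reference, the margin-based ranking criterion mentioned here is the standard pairwise hinge loss used by TransE: a negative triple is penalized unless its distance exceeds the positive's by at least the margin. A minimal sketch (function name and example values are illustrative, not from the repo):

```python
import numpy as np

def margin_ranking_loss(pos_dist, neg_dists, gamma=1.0):
    """Pairwise hinge loss: max(0, gamma + d(pos) - d(neg)), averaged over negatives.

    Zero loss once every negative is at least `gamma` farther than the positive.
    """
    return np.maximum(0.0, gamma + pos_dist - neg_dists).mean()
```

Example: with a positive at distance 0.5 and negatives at 3.0 and 4.0, all hinges are inactive and the loss is 0; with a negative at 2.5 against a positive at 2.0 and gamma = 1.0, the loss is 0.5.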