swathi-ssunder closed this issue 6 years ago
**The first loop** is used to test embeddings with the default metric, and the results are reported in the following order:
(?, r, t) : Mean rank, Hit 10
(?, r, t) : Mean rank(filter), Hit 10(filter)
(h, r, ?) : Mean rank, Hit 10
(h, r, ?) : Mean rank(filter), Hit 10(filter)
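To make the raw vs. filtered distinction concrete, here is a minimal sketch (not the repository's actual code) of how Mean Rank and Hits@10 are typically computed for tail prediction `(h, r, ?)`. The names `evaluate_tail`, `score`, and `all_true` are hypothetical: `score` is any scoring function where lower means more plausible, and `all_true` is the set of all known true triples used for filtering.

```python
def evaluate_tail(test_triples, entities, score, all_true):
    """Return (mean_rank, hits10, mean_rank_filtered, hits10_filtered)."""
    raw, filt = [], []
    for h, r, t in test_triples:
        s_true = score(h, r, t)
        # Raw rank: 1 + number of candidate tails scoring strictly better
        # than the true tail.
        rank = 1 + sum(score(h, r, e) < s_true for e in entities if e != t)
        # Filtered rank: additionally ignore candidates that are themselves
        # true triples (they are not real mistakes of the model).
        rank_f = 1 + sum(
            score(h, r, e) < s_true
            for e in entities
            if e != t and (h, r, e) not in all_true
        )
        raw.append(rank)
        filt.append(rank_f)
    n = len(raw)
    return (sum(raw) / n,
            sum(r <= 10 for r in raw) / n,
            sum(filt) / n,
            sum(r <= 10 for r in filt) / n)
```

Head prediction `(?, r, t)` is evaluated the same way, ranking candidate heads instead of tails, which is why each block of results comes in the four combinations above.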
**The second loop** is used to test embeddings with type constraints; you can find detailed information in "Type-Constrained Representation Learning in Knowledge Graphs".
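The idea of the type-constrained setting, sketched here under the common convention from that paper (this is an illustration, not the repository's exact code), is to restrict the candidate set: when predicting the tail of `(h, r, ?)`, only entities that are type-compatible with the range of `r` are ranked. A simple approximation of a relation's range is the set of entities observed as its tails in training:

```python
def tail_candidates(r, train_triples):
    """Approximate the range of relation r: all entities seen as its tail."""
    return {t for (h2, r2, t) in train_triples if r2 == r}
```

Restricting the ranking to `tail_candidates(r, ...)` (and symmetrically to head candidates for `(?, r, t)`) usually yields better Mean Rank and Hits@10 than ranking against all entities.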
**The next four loops** are used to test embeddings by the mapping properties of relations. All relations are split into four categories (1-1, 1-n, n-1, n-n), and the model is evaluated on each category separately. The results are reported in the following order:
1-1:
(?, r, t) : Mean rank, Hit 10
(?, r, t) : Mean rank(filter), Hit 10(filter)
(h, r, ?) : Mean rank, Hit 10
(h, r, ?) : Mean rank(filter), Hit 10(filter)
1-n:
(?, r, t) : Mean rank, Hit 10
(?, r, t) : Mean rank(filter), Hit 10(filter)
(h, r, ?) : Mean rank, Hit 10
(h, r, ?) : Mean rank(filter), Hit 10(filter)
n-1:
(?, r, t) : Mean rank, Hit 10
(?, r, t) : Mean rank(filter), Hit 10(filter)
(h, r, ?) : Mean rank, Hit 10
(h, r, ?) : Mean rank(filter), Hit 10(filter)
n-n:
(?, r, t) : Mean rank, Hit 10
(?, r, t) : Mean rank(filter), Hit 10(filter)
(h, r, ?) : Mean rank, Hit 10
(h, r, ?) : Mean rank(filter), Hit 10(filter)
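The categorization itself can be sketched as follows, using the rule from the TransE paper (this is an assumed illustration, not the repository's exact code): for each relation, compute the average number of tails per head (`tph`) and heads per tail (`hpt`); a side counts as "n" when its average exceeds 1.5.

```python
from collections import defaultdict

def categorize(triples):
    """Map each relation to its category: '1-1', '1-n', 'n-1', or 'n-n'."""
    heads = defaultdict(set)   # (r, t) -> set of observed heads
    tails = defaultdict(set)   # (r, h) -> set of observed tails
    rels = set()
    for h, r, t in triples:
        heads[(r, t)].add(h)
        tails[(r, h)].add(t)
        rels.add(r)
    cats = {}
    for r in rels:
        tph_counts = [len(v) for (r2, h), v in tails.items() if r2 == r]
        hpt_counts = [len(v) for (r2, t), v in heads.items() if r2 == r]
        tph = sum(tph_counts) / len(tph_counts)  # avg tails per head
        hpt = sum(hpt_counts) / len(hpt_counts)  # avg heads per tail
        left = 'n' if hpt > 1.5 else '1'
        right = 'n' if tph > 1.5 else '1'
        cats[r] = f'{left}-{right}'
    return cats
```

Reporting the four metrics per category shows, for example, whether a model handles n-n relations (the hardest case for translation-based models) as well as 1-1 relations.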
Can you explain the evaluation metrics?
I understand that the evaluation metrics most likely correspond to Mean Rank and Hits@10 (as described in the TransE paper), but I am not able to fully comprehend the code here.
Thanks.