Closed SuShu19 closed 2 weeks ago
Short answer: no, the two functions don't compute the same thing. `query_topn` doesn't filter out the positive (i.e. known) triples of the graph, so it will return triples the model was trained on. These known triples are usually ranked quite high and crowd out the test triples in the top-n results, which is why the hits@10 you compute from `query_topn` comes out lower than the filtered hits@10 reported by `evaluate_performance`.
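To make the mechanism concrete, here is a minimal, self-contained sketch in plain Python (not AmpliGraph's actual implementation; the entity names and scores are made up). It shows how removing known positives from the candidate list changes the rank of the true test triple, and hence hits@10:

```python
# Hypothetical model scores for candidate tail entities of one
# test query (h, r, ?); higher score = ranked higher.
candidate_scores = {
    "e1": 0.95,  # known training triple (h, r, e1)
    "e2": 0.90,  # known training triple (h, r, e2)
    "e3": 0.70,  # the true test tail we want to rank
    "e4": 0.60,
    "e5": 0.40,
}
known_tails = {"e1", "e2"}  # positives seen during training
true_tail = "e3"

def rank_of(true_ent, scores, exclude=()):
    # Rank = 1 + number of competing entities scored above the true one.
    # Entities in `exclude` are dropped from the candidate list,
    # except the true entity itself.
    ranked = sorted(
        (e for e in scores if e == true_ent or e not in exclude),
        key=lambda e: scores[e],
        reverse=True,
    )
    return ranked.index(true_ent) + 1

# Unfiltered ranking, analogous to what query_topn returns:
raw_rank = rank_of(true_tail, candidate_scores)
# Filtered ranking, analogous to evaluate_performance with filter_triples:
filtered_rank = rank_of(true_tail, candidate_scores, known_tails)

print(raw_rank)       # 3: e1 and e2 outrank the true tail
print(filtered_rank)  # 1: known positives removed from the ranking
```

With a top-n cutoff between these two ranks, the same test triple counts as a hit in the filtered evaluation but as a miss in the unfiltered one, which is exactly the discrepancy described in this issue.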
Description
I'm trying to evaluate the recommendation results of a tool. I trained a model with AmpliGraph, but I ran into a problem when I tried to evaluate the model's recommendations.
Actual Behavior
First, I used the `evaluate_performance` function and restricted the `entities_subset` parameter to the type of entity I want to recommend. Second, I used the `query_topn` function to validate the metrics obtained from `evaluate_performance`. What's strange is that the hits@10 from `query_topn` is much lower than the hits@10 from `evaluate_performance`. Should the hits@10 from `query_topn` be the same as the one from `evaluate_performance`?