Closed · ysysys666 closed this issue 1 month ago
Hello author, thank you for your excellent work. I am trying to reproduce the evaluation results of the model in your paper, and I have two questions; I hope to get your reply:

1. When I use evaluate_map.py, what should simplify_errors, disable_nms, remove_pacco, and evaluate_all_vocabulary be set to?
2. When I leave them all at their defaults, the evaluation result for CORA is higher than the one reported in the paper. Is there something wrong on my side? Should n_hardnegatives be set to 5 and 2, respectively, for the two settings?

Hi! Thank you for your interest in our work!

When running evaluate_map.py, yes, those parameters (simplify_errors, disable_nms, remove_pacco, evaluate_all_vocabulary) should be left at their default values.

Given these settings, your results for CORA should closely match those reported in the paper, using n_hardnegatives=5 for the Difficulty-based benchmarks and n_hardnegatives=2 for the Attribute-based benchmarks (see the example invocations below). If your results differ slightly but are still in a close range, this could be due to minor numerical instabilities and shouldn't be a cause for concern.

I hope this helps!
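For reference, the two settings would correspond to invocations along these lines. This is a minimal sketch: it assumes the parameters discussed above are ordinary command-line flags of evaluate_map.py, and it leaves the remaining required arguments (prediction and annotation paths, etc.) as placeholders; check `python evaluate_map.py --help` in your checkout for the exact interface.

```bash
# Sketch only: flag names are taken from the discussion above; other required
# arguments are placeholders and depend on your local setup.

# Difficulty-based benchmarks: leave simplify_errors, disable_nms, remove_pacco,
# and evaluate_all_vocabulary at their defaults (do not pass them), and use 5 hard negatives.
python evaluate_map.py --n_hardnegatives 5 <your prediction/annotation arguments>

# Attribute-based benchmarks: same defaults, but with 2 hard negatives.
python evaluate_map.py --n_hardnegatives 2 <your prediction/annotation arguments>
```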