Closed: sophiakrix closed this issue 2 years ago
Hi Sophia,
Thanks for your interest!
Based on your issue, we downloaded and reran the code to double-check. It seems that the results can be easily reproduced with example.sh. Please refer to the full logs at this link. You can also check the versions of the libraries in the environment where you run the code.
2021-11-06 06:07:51 INFO Best Val Metrics hits@1_list at step 79000: 0.764897
2021-11-06 06:07:51 INFO Best Val Metrics hits@3_list at step 79000: 0.880364
2021-11-06 06:07:51 INFO Best Val Metrics hits@10_list at step 79000: 0.951926
2021-11-06 06:07:51 INFO Best Val Metrics mrr_list at step 79000: 0.831657
2021-11-06 06:07:51 INFO Best Test Metrics hits@1_list at step 79000: 0.764524
2021-11-06 06:07:51 INFO Best Test Metrics hits@3_list at step 79000: 0.879791
2021-11-06 06:07:51 INFO Best Test Metrics hits@10_list at step 79000: 0.951277
2021-11-06 06:07:51 INFO Best Test Metrics mrr_list at step 79000: 0.831190
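To compare environments, a quick way to print the relevant library versions is sketched below; the package list is a guess at the usual dependencies for this kind of code base (PyTorch, NumPy, ogb) and may need adjusting to match the repository's requirements.

```python
# Print the versions of the libraries most likely to affect reproducibility.
# The exact package list is an assumption; adjust it to match this repo's
# requirements file.
import sys

import numpy as np
import torch
import ogb

print("python :", sys.version.split()[0])
print("torch  :", torch.__version__)
print("numpy  :", np.__version__)
print("ogb    :", ogb.__version__)
```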
As for the hyper-parameter search, we just used hyperopt to search over a few configurations within the ranges we provided. Grid search was not used since it is slow. There may be better configurations to find with a more thorough search of the hyper-parameters.
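For anyone trying to rerun such a search, a minimal sketch of a hyperopt-based search is below. The search space, the number of evaluations, and the training hook are illustrative assumptions, not the exact setup used in this repository.

```python
# Minimal sketch of a hyperopt (TPE) search over a small hyper-parameter space.
# The ranges and the training hook are illustrative assumptions, not the exact
# configuration searched for ogbl-biokg in this repository.
from hyperopt import Trials, fmin, hp, tpe

space = {
    "lr": hp.loguniform("lr", -9, -3),                  # roughly 1e-4 .. 5e-2
    "dim": hp.choice("dim", [500, 1000, 2000]),         # embedding dimension
    "batch_size": hp.choice("batch_size", [512, 1024]),
    "gamma": hp.uniform("gamma", 6.0, 24.0),            # margin
}

def objective(config):
    # In practice, launch a training run with `config` and return the
    # validation MRR; a dummy value keeps this sketch self-contained.
    val_mrr = 0.0  # replace with the real validation MRR
    return -val_mrr  # hyperopt minimizes, so negate the metric

trials = Trials()
best = fmin(objective, space, algo=tpe.suggest, max_evals=20, trials=trials)
print("Best configuration found:", best)
```

Because TPE samples only a handful of configurations (max_evals above) rather than enumerating every combination, it is much cheaper than grid search, which matches the reasoning for skipping grid search here.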
Best wishes, Yongqi
Hi there,
I am trying to reproduce the results you report on the ogbl-biokg knowledge graph. You list some tuned hyperparameters in the README, and I am a bit confused by them: since several values are given for each hyperparameter, I am not sure which one was found to give the best result. Did you do a grid search over them? If so, could you possibly share the code? Right now I am not able to reproduce the reported scores when I run the examples.sh script.
Thanks in advance!
Best,
Sophia