Hi, I'm very interested in your work, and I'm quite new to knowledge graphs. I have been reproducing the results in the paper with the default code and the datasets provided. The WN18RR datasets work well with the given command line, and the results are consistently a bit higher than those reported in the paper.
However, when I train on nell_v1 and test on nell_v1_ind, both Hits@10 and auc_pr come out much lower than the paper's results. I want to make sure this is the right way to run the code for this dataset (the given command line with only the dataset name replaced). Should I tune other parameters? If so, could you please give me a hint about which parameters influence performance the most?
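In case it helps, here is roughly what I ran. I have paraphrased the script names, flag names, and experiment name from memory, so they may not match the README exactly; the point is that I only swapped the dataset names in the given commands:

```bash
# Train on the nell_v1 transductive split
# (script/flag names are approximate; experiment name is my own)
python train.py -d nell_v1 -e nell_v1_run

# Evaluate the same experiment on the inductive split nell_v1_ind
python test_auc.py -d nell_v1_ind -e nell_v1_run
python test_ranking.py -d nell_v1_ind -e nell_v1_run
```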
Thanks for your kind response!