Hi @Woooooody, we have made all our experimental artifacts available at https://github.com/pykeen/benchmarking. The best TransE configuration that we found for WN18RR is available at https://github.com/pykeen/benchmarking/tree/master/ablation/results/transe/wn18rr/random/adam/2020-05-20-03-15_00b22931-c88f-4d90-a606-c94a31a80516/0003_wn18rr_transe/best_pipeline. Please note that we renamed the OWA training loop to SLCWA; the configuration files still use the previous nomenclature.
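For reference, re-running that configuration locally might look roughly like the following. This is a minimal sketch: it assumes the linked `best_pipeline` directory has been downloaded next to the script and contains a JSON pipeline config (the filename `pipeline_config.json` is a placeholder, not necessarily the exact name in the repo), and it uses PyKEEN's `pipeline_from_path` helper.

```python
# Sketch: re-run the downloaded best_pipeline configuration.
# NOTE: "best_pipeline/pipeline_config.json" is an assumed/placeholder path;
# check the benchmarking repo for the actual config filename.
from pykeen.pipeline import pipeline_from_path

result = pipeline_from_path("best_pipeline/pipeline_config.json")

# Save the trained model and evaluation results, then inspect one metric.
result.save_to_directory("transe_wn18rr_best")
print(result.metric_results.get_metric("hits@10"))
```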
I got the following results for TransE on the WN18RR dataset, with 100 epochs and the default parameter settings:
- mean_rank: best = 7396.350889192887, worst = 7396.3558481532145, avg = 7396.353368673051
- mean_reciprocal_rank: best = 0.11426725607718224, worst = 0.11426725590337292, avg = 0.1142672559902559
- hits_at_k (best/worst/avg identical): hits@1 = 0.0032489740082079343, hits@3 = 0.19493844049247605, hits@5 = 0.24811901504787962, hits@10 = 0.2973666210670315
- adjusted_mean_rank = 0.36492311337297334
These results seem a bit lower than those reported in the paper.
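For context, a run like the one described above can be sketched with PyKEEN's `pipeline` API. This is a minimal sketch under the assumption of a recent PyKEEN version; the library defaults (loss, optimizer, embedding dimension) and the metric-name strings accepted by `get_metric` may differ across versions, so such a run is not expected to match a tuned configuration.

```python
# Sketch: TransE on WN18RR with library defaults and 100 training epochs.
# Defaults depend on the installed PyKEEN version, so treat this as
# illustrative rather than a faithful reproduction of the paper's setup.
from pykeen.pipeline import pipeline

result = pipeline(
    model="TransE",
    dataset="WN18RR",
    training_kwargs=dict(num_epochs=100),
    random_seed=0,
)

# Accepted metric-name strings vary slightly across PyKEEN versions.
print(result.metric_results.get_metric("mean_reciprocal_rank"))
print(result.metric_results.get_metric("hits@10"))
```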