Hi. The original paper only uses 50 for the latent dimension size, while we use 400. You can see this paper https://arxiv.org/pdf/1903.12287.pdf (Table 2) to compare our results with PyTorch-BigGraph (which uses dim_size 400). In fact, if we set dim_size to 400, the HITS@10 result of the original paper's implementation is 0.74. You can also check some other open-source tools' results, e.g. https://github.com/thunlp/KB2E: in KB2E, with dim_size 100, the HITS@10 is 0.702.
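
For reference, HITS@10 is just the fraction of test triples whose correct entity is ranked within the top 10 candidates by the model, so a higher dim_size that improves ranking quality directly raises it. A minimal sketch of the metric (not taken from this repo; the function name and example ranks below are hypothetical):

```python
import numpy as np

def hits_at_k(ranks, k=10):
    """ranks: 1-based rank of the true entity for each test triple."""
    ranks = np.asarray(ranks)
    # Fraction of test triples whose true entity is ranked in the top k.
    return float(np.mean(ranks <= k))

# Hypothetical ranks from some TransE evaluation run.
example_ranks = [1, 3, 12, 7, 250, 2, 9, 41]
print(hits_at_k(example_ranks, k=10))
```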
Hi, I found that your HITS@10 result for TransE is 0.8x, but the original paper reports 0.4x. I am puzzled by this.