microsoft / HittER

Hierarchical Transformers for Knowledge Graph Embeddings (EMNLP 2021)
MIT License

About fair comparison #5

Closed · kbs391kbs closed this 2 years ago

kbs391kbs commented 2 years ago

Hi, I have some questions about the result comparison between HittER and baselines.

  1. The reported MRRs of CoKE on FB15k-237 and WN18RR are 0.475 and 0.361 in the original paper, while HittER's paper reports CoKE's results as 0.484 and 0.364, respectively. Did you rerun the CoKE code or adjust its predefined embedding dimension?
  2. CoKE's original implementation uses an embedding dimension of 256, and other baselines such as TuckER and SimplE use 200, whereas HittER uses 320. Is that a fair performance comparison? I cannot tell whether HittER's improvement comes from a better model or simply from the larger embedding size (see the back-of-the-envelope parameter count after this list). Did you evaluate HittER with the same embedding size or parameter count as the other baselines?
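
For concreteness, here is a rough count of the entity/relation lookup-table parameters at each of these dimensions (a back-of-the-envelope sketch using the standard published dataset statistics; it counts only the embedding tables, not each model's interaction or Transformer parameters):

```python
# Rough embedding-table parameter counts at the dimensions under discussion.
# Entity/relation counts are the standard published statistics for each
# benchmark; only the lookup tables are counted, so these are lower bounds
# on total model size.
DATASETS = {
    "FB15k-237": {"entities": 14_541, "relations": 237},
    "WN18RR": {"entities": 40_943, "relations": 11},
}

for name, stats in DATASETS.items():
    for dim in (200, 256, 320):  # TuckER/SimplE, CoKE, HittER
        params = (stats["entities"] + stats["relations"]) * dim
        print(f"{name} @ d={dim}: {params:,} embedding parameters")
```

At d=320 the tables alone are 60% larger than at d=200, which is why I am asking about a parameter-matched comparison.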

I hope you can clarify these points. Thank you!

sanxing-chen commented 2 years ago
  1. CoKE's results were cited directly from their paper: https://arxiv.org/pdf/1911.02168.pdf
  2. KGE methods are usually insensitive to dimensionality changes at this scale, and some studies have explored this comprehensively. We also experimented with different embedding sizes and did not observe significant differences; a sketch of such a sweep is below.
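
For illustration, a dimensionality sweep of this kind could look like the sketch below; `train_and_eval` is a hypothetical stand-in for a full training plus validation run at a given embedding size, not a function in this repo:

```python
# Minimal sketch of a dimensionality-sensitivity check, assuming a
# hypothetical train_and_eval(dim) helper that trains a model with the
# given embedding size and returns its validation MRR.
def sweep_dimensions(train_and_eval, dims=(200, 256, 320)):
    results = {dim: train_and_eval(dim) for dim in dims}
    base = results[dims[0]]
    for dim in dims:
        delta = results[dim] - base
        print(f"d={dim}: MRR={results[dim]:.3f} (vs d={dims[0]}: {delta:+.3f})")
    return results
```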