pykeen / benchmarking

📊 Results from the reproducibility and benchmarking studies presented in "Bringing Light Into the Dark: A Large-scale Evaluation of Knowledge Graph Embedding Models Under a Unified Framework" (http://arxiv.org/abs/2006.13365)

Add published results in machine-readable format #16

Closed by mberr 4 years ago

mberr commented 4 years ago

This is a step towards automatically generating the tables that compare published vs. obtained results.
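
For illustration only, here is a minimal sketch of how machine-readable published results could be stored and compared against re-run results; the file layout, keys, and numbers below are hypothetical and not the format used in this repository:

```python
import json

# Hypothetical collection of results as reported in the original publications.
# Each entry records the model, dataset, metric, and the published value.
published = json.loads("""
[
  {"model": "TransE", "dataset": "FB15k-237", "metric": "hits@10", "published": 0.465},
  {"model": "RotatE", "dataset": "WN18RR",    "metric": "MRR",     "published": 0.476}
]
""")

# Obtained results from re-running the models (placeholder values).
obtained = {
    ("TransE", "FB15k-237", "hits@10"): 0.44,
    ("RotatE", "WN18RR", "MRR"): 0.47,
}

# Build the rows of a published-vs-obtained comparison table.
for entry in published:
    key = (entry["model"], entry["dataset"], entry["metric"])
    print(entry["model"], entry["dataset"], entry["metric"],
          entry["published"], obtained.get(key, "n/a"))
```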

cthoyt commented 4 years ago

Can we include these in the configs inside PyKEEN itself? I already have a PR for that: https://github.com/mali-git/POEM_develop/pull/431

mberr commented 4 years ago

Maybe? :sweat_smile: I just finished typing in all these numbers from the paper, but double-checking against the publications seems to be necessary anyway.

mali-git commented 4 years ago

> Can we include these in the configs inside PyKEEN itself? I already have a PR for that: mali-git/POEM_develop#431

Seems like a good idea!

cthoyt commented 4 years ago

But the major question of whether the papers were reporting average, pessimistic, or optimistic rankings is still open, right?
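
For context on the distinction: the optimistic, pessimistic, and average (realistic) ranks differ only in how score ties with the true entity are counted. A minimal illustration of the three definitions (not PyKEEN's evaluation code):

```python
import numpy as np

def ranks(scores: np.ndarray, true_idx: int):
    """Return (optimistic, pessimistic, average) rank of the true entity.

    Higher scores are assumed to be better; ties are what make the variants differ.
    """
    true_score = scores[true_idx]
    better = int((scores > true_score).sum())
    ties = int((scores == true_score).sum())   # includes the true entity itself
    optimistic = better + 1                    # true entity wins all ties
    pessimistic = better + ties                # true entity loses all ties
    average = (optimistic + pessimistic) / 2   # expected rank under random tie-breaking
    return optimistic, pessimistic, average

# Example: two candidates tie with the true entity's score.
print(ranks(np.array([0.9, 0.5, 0.5, 0.5, 0.1]), true_idx=2))  # (2, 4, 3.0)
```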

mberr commented 4 years ago

They should be linked here: https://arxiv.org/abs/2002.06914

mberr commented 4 years ago

@mali-git @cthoyt I'm closing this one, since it was added to PyKEEN.