pykeen / benchmarking

📊 Results from the reproducibility and benchmarking studies presented in "Bringing Light Into the Dark: A Large-scale Evaluation of Knowledge Graph Embedding Models Under a Unified Framework" (http://arxiv.org/abs/2006.13365)
MIT License

Skyline Plot: Performance (e.g. H@1) vs. Model Size #11

Closed by mberr 4 years ago

mberr commented 4 years ago

The simplest way to get the number of parameters after the fact would most likely be to re-instantiate the model from its config and use https://github.com/mali-git/POEM_develop/blob/b208de2475865a008d608ab739a66b949912b0a3/src/pykeen/models/base.py#L586-L589
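A minimal sketch of that approach, assuming the current pykeen API; the `Nations` toy dataset and `TransE` below are placeholders for a model re-instantiated from one of the benchmarking configs, and the count itself is plain PyTorch, so it should agree with the linked base-class property:

```python
from pykeen.datasets import Nations
from pykeen.models import TransE

# Placeholder for re-instantiating a model from a benchmarking config
# (dataset and model choice here are hypothetical stand-ins).
dataset = Nations()
model = TransE(triples_factory=dataset.training, embedding_dim=50)

# Plain-PyTorch parameter count over all model parameters.
num_parameters = sum(p.numel() for p in model.parameters())
print(f"{model.__class__.__name__}: {num_parameters:,} parameters")
```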

mberr commented 4 years ago

Similar to Figure 1 of https://arxiv.org/pdf/1905.11946.pdf (EfficientNet)
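As a rough illustration of that kind of figure, a minimal matplotlib sketch of a skyline-style scatter plot; the model names, parameter counts, and Hits@1 values are made-up placeholders, not benchmarking results:

```python
import matplotlib.pyplot as plt

# Hypothetical placeholder data: (model name, #parameters, Hits@1).
# Real values would come from the benchmarking results in this repository.
results = [
    ("ModelA", 1.2e6, 0.21),
    ("ModelB", 4.5e6, 0.28),
    ("ModelC", 9.8e6, 0.31),
    ("ModelD", 2.3e7, 0.30),
]

fig, ax = plt.subplots(figsize=(6, 4))
for name, n_params, hits_at_1 in results:
    ax.scatter(n_params, hits_at_1)
    ax.annotate(name, (n_params, hits_at_1), textcoords="offset points", xytext=(5, 5))

# Log-scaled x-axis keeps the spread of parameter counts readable.
ax.set_xscale("log")
ax.set_xlabel("Number of parameters")
ax.set_ylabel("Hits@1")
ax.set_title("Performance vs. model size")
fig.tight_layout()
plt.show()
```

The same layout carries over to a training-time variant by swapping the x-axis values and label.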

mberr commented 4 years ago

The same would be interesting for training time vs. performance, cf. #7

mberr commented 4 years ago

@cthoyt Additional ways to improve readability of the plots (last seen: e3f294fb000486e32af8c1440fe32c0104eb60ef):

mberr commented 4 years ago

I suppose we can close this one, @cthoyt