tensorflow / neural-structured-learning

Training neural models with structured signals.
https://www.tensorflow.org/neural_structured_learning
Apache License 2.0

Reproduction Issue #93

Closed wayer96 closed 3 years ago

wayer96 commented 3 years ago

Hi, I'm quite interested in your work "Low-Dimensional Hyperbolic Knowledge Graph Embeddings" and tried to reproduce the experimental results strictly following the best-hyperparameters table posted in the paper. However, my results on YAGO3-10 are consistently about 5%-6% lower than those reported. I wonder if there are other hyperparameters that need to be adjusted (such as --drop_out or --reg)? Thanks a lot!

chunta-lu commented 3 years ago

@ines-chami could you help answer this question related to hyper-parameter setting? Thanks!

ines-chami commented 3 years ago

Hi,

The numbers reported in the paper for YAGO were obtained using the PyTorch implementation (https://github.com/HazyResearch/KGEmb). The best hyperparameters can be found in this folder: https://github.com/HazyResearch/KGEmb/tree/master/examples. We noticed that performance varied slightly from one implementation to another, which may explain the 5% difference with TensorFlow.

Thanks,
Ines
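For anyone following along, a training run with the PyTorch implementation is launched through its `run.py` entry point. The sketch below follows the usage pattern shown in the KGEmb README, but the flag values for RotH on YAGO3-10 here are illustrative placeholders, not the tuned settings; the authoritative configurations are the scripts in the repo's `examples/` folder.

```shell
# Hypothetical invocation, modeled on the KGEmb README's usage pattern.
# Flag values are placeholders -- consult examples/ for the tuned hyperparameters.
python run.py \
  --dataset YAGO3-10 \
  --model RotH \
  --rank 32 \
  --regularizer N3 --reg 0.0 \
  --optimizer Adam --learning_rate 0.001 \
  --max_epochs 500 --patience 15 --valid 5 \
  --batch_size 1000 --neg_sample_size 50 \
  --init_size 0.001 --bias learn \
  --dtype double --multi_c
```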

chunta-lu commented 3 years ago

Thanks for the response, Ines! Closing this issue for now, but it would be great if you could help run a hyperparameter sweep for the TensorFlow implementation.

wayer96 commented 3 years ago

Thanks! I have used the PyTorch implementation and found the best hyperparameter settings in the examples folder. However, for YAGO3-10 there is only a hyperparameter setting for RotH at 32 dimensions. Looking forward to a more complete set of parameter settings for YAGO3-10 in the future, thanks!