HazyResearch / KGEmb

Hyperbolic Knowledge Graph embeddings.

Can you provide the detailed hyperparameters for reproducing the results on FB237 and YAGO3-10? #6

Closed · cheungdaven closed this issue 4 years ago

cheungdaven commented 4 years ago

Hi Ines,

I ran the RotH model with the hyperparameters you provided for FB237, but the results are much worse than the ones reported in the paper.

Do you mind sharing the detailed hyperparameter settings for these two datasets as well?

Best, Shuai

ines-chami commented 4 years ago

Hi Shuai,

What commands are you running? Which dimension do you need the hyperparameters for?

All the best, Ines Chami

cheungdaven commented 4 years ago

For dimension 32, I used the following: learning rate = 0.1, optimizer = Adagrad, batch size = 100, negative samples = 50.

The other hyperparameters are the same as for WN18RR.
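
In run.py terms, that setup corresponds to something like the command below (the flags not listed here were left at their WN18RR values, so this may not match my run exactly):

python run.py --dataset FB237 --model RotH --rank 32 --optimizer Adagrad --learning_rate 0.1 --batch_size 100 --neg_sample_size 50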

I would appreciate it if you could share the hyperparameters for FB237 and YAGO3-10 at dimension 32.

ines-chami commented 4 years ago

For FB237, running the command below achieves the following scores (as reported in the paper): test MR: 182.06 | MRR: 0.314 | H@1: 0.223 | H@3: 0.344 | H@10: 0.496

python run.py --dataset FB237 --model RotH --rank 32 --regularizer N3 --reg 0.0 --optimizer Adagrad --max_epochs 300 --patience 10 --valid 5 --batch_size 500 --neg_sample_size 50 --init_size 0.001 --learning_rate 0.05 --gamma 0.0 --bias learn --dtype double --multi_c
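
(Note that this differs from the settings you listed above in both the learning rate, 0.05 instead of 0.1, and the batch size, 500 instead of 100, which may account for the gap you are seeing.)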

I do not have the logs saved for YAGO3-10 since those experiments were run on a different machine; I will need to rerun them before sending the detailed command (the hyperparameters used in the paper should work).

cheungdaven commented 4 years ago

Thank you very much for the info.

Best.

ines-chami commented 4 years ago

About YAGO3-10, running the command below achieves the following results (better than our original runs on all but one metric): test MR: 3097.84 | MRR: 0.400 | H@1: 0.324 | H@3: 0.436 | H@10: 0.546

python run.py --dataset YAGO3-10 --model RotH --rank 32 --regularizer N3 --reg 0.0 --optimizer Adam --max_epochs 500 --patience 20 --valid 5 --batch_size 1000 --neg_sample_size -1 --init_size 0.001 --learning_rate 0.0005 --gamma 0.0 --bias learn --dtype single --multi_c
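
(Compared to the FB237 command, this run uses Adam instead of Adagrad, single precision instead of double, and --neg_sample_size -1, i.e., no negative sampling, so the loss is computed over all entities.)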

I have added config files for these runs to the examples/ folder in case they are needed.
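
As a sketch of what such an example might look like as a shell wrapper around the FB237 command above (the actual file names and layout under examples/ may differ):

#!/bin/bash
# Hypothetical example script; the real files under examples/ may be organized differently.
python run.py \
    --dataset FB237 --model RotH --rank 32 \
    --regularizer N3 --reg 0.0 \
    --optimizer Adagrad --max_epochs 300 --patience 10 --valid 5 \
    --batch_size 500 --neg_sample_size 50 \
    --init_size 0.001 --learning_rate 0.05 --gamma 0.0 \
    --bias learn --dtype double --multi_c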

swt-user commented 3 years ago

Hi Ines,

Do you mind sharing the detailed hyperparameter settings on FB237 for dimension 500? Using the hyperparameters reported in the paper, I can't reproduce the same results.

All the best, wentao