fanglanting / RA-retrofit

A PyTorch implementation of "Knowledge-Enhanced Ensemble Learning for Word Embeddings" (WWW 2019)

Could you share your pre-processed data of pre-trained word embedding and cleaned knowledge graph? #1

Open Zziwei opened 5 years ago

Zziwei commented 5 years ago

Thanks for the interesting work and sharing your code. Could you also share your data of word embeddings you used and cleaned knowledge graph data?

fanglanting commented 5 years ago

Cleaned word embeddings and knowledge graphs we used: https://www.dropbox.com/s/75cccrsve5tnq8l/data.zip?dl=0

Pretrained RA-retrofit model: https://www.dropbox.com/s/tqj1k4dox1cir5e/RA-retrofit?dl=0

Zziwei commented 5 years ago

Thanks for the data and the pretrained model. I still have some questions:

1) How can I load the pretrained RA-retrofit model? I tried `model.load_state_dict(torch.load(PATH))`, but it reported an error.

2) What hyperparameter setting produces the best performance? I tried many combinations of gamma and learning rate, but the results are still far from the ones reported in the paper.

Zziwei commented 5 years ago

Also, how many epochs are needed to train a well-performing model?

Zziwei commented 5 years ago

I figured out the pretrained model problem... I didn't realize it is a word embeddings file, not a state dict. But I am still not sure about the hyperparameter setting that produces this result.
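For anyone hitting the same loading error: since the released file turned out to be word embeddings rather than a `state_dict`, a minimal loading sketch might look like the following. The file format (whitespace-separated lines of `word v1 v2 ...`) is an assumption based on this thread, not confirmed by the authors.

```python
def load_embeddings(path):
    """Parse an assumed whitespace-separated embedding file
    (one `word v1 v2 ...` line per token) into a dict of vectors.
    The format is a guess based on the issue discussion, not the
    repo's documented API."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split()
            if len(parts) < 2:
                continue  # skip blank lines or a possible header line
            word = parts[0]
            embeddings[word] = [float(x) for x in parts[1:]]
    return embeddings
```

If loading with `torch.load` fails with an unpickling error, inspecting the first few lines of the file as plain text is a quick way to confirm it is such a text-format embedding file.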

fanglanting commented 5 years ago

Please find the hyperparameter settings in our paper.