IBM / e2r

Code for our NeurIPS 2019 paper "Quantum Embedding of Knowledge for Reasoning" (authors: Dinesh Garg, Shajith Ikbal, Santosh K Srivastava, Harit Vishwakarma, Hima Karanam, L Venkata Subramaniam) and code for our NeurIPS 2020 paper "Inductive Quantum Embedding" (authors: Santosh Srivastava, Dinesh Khandelwal, Dhiraj Madan, Dinesh Garg, Hima Karanam, L Venkat Subramaniam).
Apache License 2.0

Cannot reproduce the reported results #3

Open jiag19 opened 4 years ago

jiag19 commented 4 years ago

Hi there,

I tried to run 'reasonE.train.py' on the wn18 dataset for fresh training, following the original parameter settings, which are:

embedDim = 100
lossMargin = 1.0
negSampleSizeRatio = 3
learningRate = 0.01
nIters = 3000
batchSize = 10103

But the results are very poor. Could you please share more details on how to reproduce your results? Thanks. @shajithikbal @dgarg77 @stevemar

shajithikbal commented 4 years ago

Hi, thank you for your interest in our work. I see from your comment that you are training only with reasonE.train.py, using a high learning rate throughout. Training in our setup is typically performed by starting with reasonE.train.py and then running reasonE.retrain.py multiple times: set the learning rate to an initial value such as 0.01 in reasonE.train.py for a smaller number of iterations, then gradually decrease the learning rate (e.g., 0.001, 0.0001) within reasonE.retrain.py over multiple retraining steps, each with a relatively small number of iterations.
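
For illustration only, a minimal sketch of that staged schedule is shown below. The train_stage helper, the iteration counts, and the exact learning-rate sequence are hypothetical placeholders, not the repository's actual interface; the sketch only encodes the order of runs and the decreasing learning rates described above.

```python
# Minimal sketch of the staged schedule described above.
# train_stage and the iteration counts are hypothetical placeholders,
# not the actual interface of reasonE.train.py / reasonE.retrain.py.

def train_stage(script, learning_rate, n_iters):
    """Placeholder for one run of the given script with the given settings."""
    print(f"{script}: learningRate={learning_rate}, nIters={n_iters}")

# Stage 1: fresh training with the initial (relatively high) learning rate.
# 500 iterations is an illustrative placeholder for "a smaller number of iterations".
train_stage("reasonE.train.py", learning_rate=0.01, n_iters=500)

# Stages 2+: repeated retraining from the saved model, gradually lowering
# the learning rate over a relatively small number of iterations each time.
for lr in (0.001, 0.0001):
    train_stage("reasonE.retrain.py", learning_rate=lr, n_iters=500)
```

The reply suggests running reasonE.retrain.py several times as the learning rate is lowered; the thread does not specify how many retraining steps or iterations per step were used for the reported numbers.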