Unfortunately, I did not record the exact number of epochs. The logs have some detail, but they are outdated since they predate the bugfix in #18. Still, they might give you a reasonable sense of how many epochs to run for a model. Note that some logs are split across multiple runs (e.g. for FB15k), where I resumed from the checkpoint and trained a bit more several times.
For the evaluation procedure, I would record the test score associated with the highest validation score. The number of epochs changes from dataset to dataset, but I usually run until I no longer see any reasonable increase in validation set performance. For some datasets this can be a lot of epochs; for example, I think FB15k ranged into about 600 epochs or so, since you still see tiny but steady improvements after many epochs. For FB15k-237 the validation score peaks at a much smaller number of epochs; I do not remember the exact number, but I think it was in the range of 40-60 epochs.
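The selection rule described above can be sketched in a few lines. This is a minimal illustration, not ConvE's actual training code; the log format and function name here are assumptions made for the example.

```python
# Sketch of "report the test score at the epoch with the best validation
# score". Assumes you logged, per evaluation epoch, a tuple of
# (epoch, validation_score, test_score) -- a hypothetical log format.

def select_model(epoch_logs):
    """Return (best_epoch, test_score_to_report) from the logged tuples."""
    # Pick the record with the highest validation score (index 1).
    best_epoch, _best_valid, reported_test = max(
        epoch_logs, key=lambda rec: rec[1]
    )
    return best_epoch, reported_test

# Illustrative numbers only: validation peaks at epoch 20, so we report
# the test score measured at epoch 20, even though the test score alone
# happens to be higher at epoch 30.
logs = [(10, 0.20, 0.19), (20, 0.25, 0.24), (30, 0.24, 0.26)]
print(select_model(logs))  # -> (20, 0.24)
```

In practice you would also save a checkpoint whenever the validation score improves, so the reported test score comes from a model you can actually restore.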
Let me know if you need more info in general or on any dataset.
Hi Tim, thank you for your prompt reply! At the moment I'm running a training on FB15k, so I'm going to check what happens around 600 epochs.
I'm going to close this issue (since there is not a real "issue" with the model). In case I need more info, can I contact you at the email address in the ConvE paper?
Thanks again for your kindness!
Hello, I am a PhD Student at Roma Tre University and I'm working on a comparative analysis among link prediction models.
I really appreciated your paper "Convolutional 2D Knowledge Graph Embeddings" and I would like to add ConvE to my experiments. I am trying to replicate your results, and I have started training with the configuration you describe in your README.md.
Unfortunately, I cannot find any details (either in the README or in the paper) on the termination condition you used in your training. Did you just stop after a certain number of epochs? If so, how many? Thanks in advance for your help!
Andrea