Closed: yeondukim closed this issue 2 years ago
Hi YKim,
Thanks for your interest in our work!
It is possible that training for more epochs leads to better results. In our work, we set the maximum number of epochs to 100 due to limited computational resources and for consistency with the supervised GCN/GIN baselines.
Hope this helps.
Best, Yuyang
Thank you for your sincere reply! :) That's very helpful.
YKim
Hi @yuyangw , thanks for sharing your nice work! :)
I've pre-trained the mix-aug GIN model from scratch and obtained fine-tuned results on the QM7 dataset.
In the provided yaml files, the maximum number of epochs is set to 100. However, when I checked the tensorboard logs, training did not seem to have converged. So I tried fine-tuning the models for 1k epochs on QM7 and got an MAE of 63.4±0.89 over 3 runs (c.f. 87.2±2.0 in the paper), selecting the checkpoint at which the validation metric reaches its minimum.
Have you tried fine-tuning the models for more epochs?
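For clarity, the selection rule I'm using (keep the checkpoint from the epoch with the lowest validation metric) can be sketched as below. This is a minimal illustrative snippet, not code from the MolCLR repo; the names `select_best_epoch` and `val_mae_per_epoch` are my own.

```python
def select_best_epoch(val_mae_per_epoch):
    """Return (epoch, mae) for the epoch with the minimal validation MAE.

    val_mae_per_epoch: list of validation MAE values, indexed by epoch.
    """
    best_epoch = min(range(len(val_mae_per_epoch)),
                     key=lambda e: val_mae_per_epoch[e])
    return best_epoch, val_mae_per_epoch[best_epoch]

# Illustrative (made-up) history: if validation MAE is still improving at the
# last epoch, a longer run would pick a later, better checkpoint.
history = [120.0, 95.0, 88.0, 86.5, 70.2, 63.4]
epoch, mae = select_best_epoch(history)
```

With a 100-epoch cap, the selected checkpoint is simply whichever of the first 100 epochs has the lowest validation MAE, which is why extending the run can only help under this rule.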
Here is the detailed configuration for the fine-tuned results (trained on a Tesla V100-SXM2-32GB):
Thanks in advance! :)
Sincerely, YKim