Thanks for pointing that out!
I am aware of the performance differences. However, I have tried my best to make the implementation match the original TF version.
If possible, please report the performance differences here and I will try to resolve them.
@ZhenyueQin In my experiments with your code (main_gan.py with default hyperparameters), the uniqueness score is very low (about 0.2~0.4), and the training process is unstable (NaN losses occur). Do you have any advice?
I have witnessed NaNs as well, though they don't consistently persist. I will investigate the low uniqueness. For now, maybe try different values of lambda_wgan.
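For reference, here is a rough sketch of the kind of NaN guard and gradient clipping one could try around the critic update of a WGAN-GP step. The names (`discriminator`, `real`, `fake`, `d_step`) are placeholders for illustration, not the identifiers used in this repo:

```python
# Sketch only: a NaN guard plus gradient clipping around a WGAN-GP critic
# step. All names here are placeholders, not this repo's actual code.
import torch
import torch.nn as nn

def d_step(discriminator, optimizer, real, fake, lambda_wgan=10.0):
    optimizer.zero_grad()
    # WGAN critic loss: minimize D(fake) - D(real).
    loss = discriminator(fake).mean() - discriminator(real).mean()

    # Gradient penalty on interpolated samples (the WGAN-GP term that
    # lambda_wgan scales).
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (eps * real + (1 - eps) * fake.detach()).requires_grad_(True)
    grad = torch.autograd.grad(discriminator(interp).sum(), interp,
                               create_graph=True)[0]
    gp = ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
    loss = loss + lambda_wgan * gp

    if torch.isnan(loss):
        # Skip this update instead of corrupting the weights.
        optimizer.zero_grad()
        return None
    loss.backward()
    # Clipping often tames the loss spikes that precede NaNs.
    nn.utils.clip_grad_norm_(discriminator.parameters(), max_norm=5.0)
    optimizer.step()
    return loss.item()
```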
Thank you. By the way, here is one of my results: valid: 100.00, unique: 0.40, novel: 100.00, NP: 1.00, QED: 0.54, Solute: 0.43, SA: 0.39, diverse: 0.98, drugcand: 0.60.
It seems that the Solute score is also low.
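For context, the uniqueness score is typically computed as the fraction of valid generated molecules that are distinct after canonicalization. A minimal RDKit sketch (`generated_smiles` is a placeholder list; the repo's evaluation code may differ in detail):

```python
# Sketch of the usual uniqueness metric for generated molecules.
from rdkit import Chem

def uniqueness(generated_smiles):
    # Keep only SMILES that RDKit can parse (the "valid" molecules),
    # mapped to their canonical form so duplicates compare equal.
    canonical = []
    for smi in generated_smiles:
        mol = Chem.MolFromSmiles(smi)
        if mol is not None:
            canonical.append(Chem.MolToSmiles(mol))
    if not canonical:
        return 0.0
    # Fraction of valid molecules that are distinct.
    return len(set(canonical)) / len(canonical)

# "CCO" and "OCC" canonicalize to the same molecule, so 4 valid / 2 unique:
print(uniqueness(["CCO", "CCO", "OCC", "C1CC1", "not_a_smiles"]))  # 0.5
```

On this definition, a unique score of 0.40 means only 40% of the valid samples are distinct molecules, which is consistent with mode collapse.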
@ZhenyueQin Sadly, I have found that it may be meaningless to discuss the performance at all. As the authors state in the paper: "Although the use of WGAN should prevent, to some extent, undesired behaviors like mode collapse, we notice that our models suffer from that problem. We leave addressing this issue for future work." I therefore consider the performance figures reported in the paper not credible.
Have you compared the performance against the official TF version?
I found that this implementation's performance seems a little lower than the TF version's.