Closed. BaruaBee closed this issue 3 months ago.
Thanks for your interest!
We used an A100 GPU (40 GB). Our latent space has a fixed dimensionality (i.e., the latents are just vectors), so a batch size of 512 is affordable. For epochs, we set 10,000 as the maximum and observed that the model converges after around a thousand epochs.
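To make the schedule concrete, here is a minimal stdlib-only sketch of a training loop with the settings described above (batch size 512, a 10,000-epoch cap, convergence well before the cap). The `patience`-based early-stopping rule and all names are hypothetical additions for illustration, not part of the authors' code.

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    batch_size: int = 512      # affordable on a 40 GB A100 for fixed-size latent vectors
    max_epochs: int = 10_000   # upper bound; convergence reported around ~1,000 epochs
    patience: int = 50         # hypothetical early-stopping patience (not from the paper)

def train(cfg: TrainConfig, epoch_losses):
    """Toy loop: stop once the loss has not improved for `patience` epochs.

    `epoch_losses` stands in for the per-epoch training loss a real run
    would produce; a real loop would compute it from model and data.
    """
    best, stale, stopped_at = float("inf"), 0, cfg.max_epochs
    for epoch, loss in enumerate(epoch_losses[: cfg.max_epochs], start=1):
        if loss < best - 1e-6:   # meaningful improvement resets the counter
            best, stale = loss, 0
        else:
            stale += 1
        if stale >= cfg.patience:
            stopped_at = epoch   # converged long before max_epochs
            break
    return stopped_at, best
```

With a simulated loss curve that plateaus after ~1,000 epochs, the loop stops shortly after the plateau rather than running out the full 10,000-epoch budget, which matches the behavior the authors describe.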
Thanks so much for your reply!
Hi Authors,
Congrats on the great work!
While reading your team's paper on 3DG, I wanted to reproduce your excellent work. However, I noticed that the paper does not specify which GPU was used to train the model. Additionally, I observed that the default batch size is 512 and the number of epochs is 10,000, which seem larger than those used in other models I have encountered. I couldn't find the exact numbers used to train the unconditional QM9 model in the paper, so I am wondering: which GPU did you use to train the unconditional QM9 model, and are the batch size and number of epochs really 512 and 10,000?
Thanks!