Open · 0x366 opened this issue 4 years ago
Thanks for your attention! We used 8 GPUs to train our model, and the batch size was set to 32. The model is trained for 50 epochs in both the first and second stages. The other hyperparameters are the same as those used for CelebA-HQ, which are given in the default configuration of the code.
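(For anyone else retraining on Places2, here is a compact summary of the setup described above as a Python dict. The field names are illustrative placeholders, not the repo's actual config keys; the real options live in the default CelebA-HQ configuration file.)

```python
# Hypothetical summary of the reported Places2 training setup.
# Key names are illustrative only, not the repo's real config fields.
places2_training = {
    "num_gpus": 8,
    "batch_size": 32,      # total batch size, as reported
    "epochs_stage1": 50,   # first training stage
    "epochs_stage2": 50,   # second training stage
    # all other hyperparameters: unchanged from the CelebA-HQ defaults
}
```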
Thanks!
May I ask you to also provide the multi-GPU training code? (I primarily use PyTorch, and it seems like the code in the repo uses only the CPU. I tried adding a `with tf.device(...)` block to the main for loop, but it didn't affect the training time.)
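In case it helps while the authors' multi-GPU code is not published: below is a minimal sketch of the standard TF 1.x multi-tower pattern (one graph replica per GPU, gradients averaged on the host). The `tower_loss`, batch sizes, and random inputs are dummy placeholders standing in for the repo's actual inpainting graph; this is not the authors' code, it only shows where the `with tf.device(...)` blocks and the gradient averaging go.

```python
import tensorflow as tf  # TF 1.x, matching the repo's tf.device-style graph code

NUM_GPUS = 8        # assumption: the 8-GPU setup reported above
BATCH_PER_GPU = 4   # assumption: 32 total / 8 GPUs

def tower_loss(images):
    """Stand-in for the repo's graph builder; replace with the real inpainting loss."""
    net = tf.layers.conv2d(images, 8, 3, padding="same", name="conv")
    return tf.reduce_mean(tf.square(net))

optimizer = tf.train.AdamOptimizer(1e-4)

# Build one tower per GPU; variables are created on the first tower and reused afterwards.
tower_grads = []
for i in range(NUM_GPUS):
    with tf.device("/gpu:%d" % i):
        with tf.variable_scope(tf.get_variable_scope(), reuse=(i > 0)):
            images = tf.random_normal([BATCH_PER_GPU, 256, 256, 3])  # dummy input batch
            tower_grads.append(optimizer.compute_gradients(tower_loss(images)))

# Average each variable's gradient across towers and apply a single update.
averaged = []
for grads_and_vars in zip(*tower_grads):
    grads = tf.stack([g for g, _ in grads_and_vars])
    averaged.append((tf.reduce_mean(grads, axis=0), grads_and_vars[0][1]))
train_op = optimizer.apply_gradients(averaged)

with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)  # one parallel training step
```

The key point is that all towers share the same variables (via scope reuse) and only one apply_gradients call runs per step; simply wrapping the whole training loop in a single tf.device won't parallelize it.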
Hello, nice paper! We are preparing to publish our own paper (on inpainting) and need to compare our model with yours. We use a different strategy for mask generation, so to make the comparison fair we need to retrain your model on our masks. Could you specify how many GPUs you used, which hyperparameters (argparse args), and what the training process was for Places2?
Would it also be possible for you to share the multi-GPU code?