Closed: seominseok0429 closed this issue 4 years ago
Yes, we use batch_size=1 during the training process. A larger batch size is possible if the image size is smaller.
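For anyone reproducing this, a minimal sketch in plain Python (the function name is illustrative, not part of the AdvEnt codebase) of how per-GPU batch size relates to the effective global batch size:

```python
def effective_batch_size(batch_per_gpu: int, num_gpus: int) -> int:
    """Effective (global) batch size when each GPU processes batch_per_gpu samples per step."""
    return batch_per_gpu * num_gpus

# The setting discussed above: batch_size=1 on a single GPU.
print(effective_batch_size(batch_per_gpu=1, num_gpus=1))  # 1
```

So with a single GPU and batch_size=1, the effective batch size is simply 1; scaling to more GPUs multiplies it accordingly.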
If so, do all hyperparameters keep the same configuration as in AdvEnt?
You are right; we use the same configuration as in AdvEnt.
Thank you for your kind answer. Your research has inspired me very much.
Your research has inspired me very much, so I am trying to reproduce this experiment. Is the batch size 1 per GPU?