feipanir / IntraDA

Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision (CVPR 2020 Oral)
https://arxiv.org/pdf/2004.07703.pdf
MIT License

Is there a batch size of 1 per GPU? #2

Closed seominseok0429 closed 4 years ago

seominseok0429 commented 4 years ago

Your research has inspired me very much, so I'm trying to reproduce this experiment. Is there a batch size of 1 per GPU?

feipanir commented 4 years ago

Yes, we use batch_size=1 during the training process. Larger batch sizes are possible if the image size is smaller.
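For readers wondering what this looks like in practice, here is a minimal sketch of a PyTorch data loader with one image per GPU. The dataset class, image resolution, and number of classes below are placeholders for illustration, not the repository's actual loaders.

```python
import torch
from torch.utils.data import DataLoader, Dataset


class DummySegDataset(Dataset):
    """Placeholder standing in for the repo's Cityscapes/GTA5 datasets."""

    def __init__(self, num_samples=100, size=(3, 512, 1024), num_classes=19):
        self.num_samples = num_samples
        self.size = size
        self.num_classes = num_classes

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        image = torch.randn(self.size)                           # fake image tensor
        label = torch.randint(0, self.num_classes, self.size[1:])  # fake label map
        return image, label


# Full-resolution images are memory-heavy, so one image per GPU is used;
# a smaller crop size would leave room for a larger batch.
loader = DataLoader(DummySegDataset(), batch_size=1, shuffle=True, num_workers=0)

image, label = next(iter(loader))
print(image.shape)  # torch.Size([1, 3, 512, 1024])
```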

seominseok0429 commented 4 years ago

If so, did all hyperparameters keep the same configuration as in ADVENT?

feipanir commented 4 years ago

You are right, we use the same configuration as in AdvEnt.
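As a rough illustration of what "the same configuration as AdvEnt" typically means: SGD with polynomial learning-rate decay for the segmentation network and Adam for the discriminator. The values in the sketch below are assumptions drawn from the AdvEnt paper, not copied from this repository; the config files shipped with ADVENT/IntraDA are the authoritative source.

```python
# Illustrative AdvEnt-style hyperparameters (assumed values, check the repo configs).
ASSUMED_CFG = {
    "batch_size": 1,          # one full-resolution image per GPU
    "learning_rate": 2.5e-4,  # segmentation network (SGD)
    "momentum": 0.9,
    "weight_decay": 5e-4,
    "power": 0.9,             # exponent of the polynomial LR decay
    "learning_rate_d": 1e-4,  # discriminator (Adam)
    "max_iters": 250000,      # assumed total training iterations
}


def lr_poly(base_lr, cur_iter, max_iter, power):
    """Polynomial learning-rate decay commonly used with DeepLab-style backbones."""
    return base_lr * ((1.0 - float(cur_iter) / max_iter) ** power)


# Example: the segmentation learning rate at a few points during training.
for it in (0, 50000, 200000):
    lr = lr_poly(ASSUMED_CFG["learning_rate"], it,
                 ASSUMED_CFG["max_iters"], ASSUMED_CFG["power"])
    print(f"iter {it}: lr = {lr:.2e}")
```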

seominseok0429 commented 4 years ago

Thank you for your kind answer, and thank you for your great research.