KaiyangZhou / mixstyle-release

Domain Generalization with MixStyle (ICLR'21)
MIT License

About training epochs #4

Closed Hzj199 closed 3 years ago

Hzj199 commented 3 years ago

Thanks for sharing the code. Does MixStyle need to increase the number of training epochs, like mixup?

KaiyangZhou commented 3 years ago

hmm, good question

in all our experiments, we kept the training parameters the same for all baselines (i.e., the same number of epochs)

we didn't tune the optimization parameters to evaluate their impact, but this might be worth exploring
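For context, MixStyle mixes instance-level feature statistics (per-channel mean and standard deviation) between samples in a batch, so it adds no trainable parameters, unlike approaches that change the loss or model capacity. A minimal NumPy sketch of the operation, with `alpha` and `eps` as assumed defaults (not the repo's actual implementation):

```python
import numpy as np

def mixstyle(x, alpha=0.1, eps=1e-6, rng=None):
    """Sketch of MixStyle: mix per-instance feature statistics across a batch.

    x: feature maps of shape (B, C, H, W).
    alpha: Beta-distribution concentration parameter (assumed default).
    """
    rng = np.random.default_rng() if rng is None else rng
    batch_size = x.shape[0]

    # Instance-level, per-channel statistics.
    mu = x.mean(axis=(2, 3), keepdims=True)            # (B, C, 1, 1)
    sigma = x.std(axis=(2, 3), keepdims=True) + eps    # (B, C, 1, 1)
    x_norm = (x - mu) / sigma

    # Mixing coefficients and a random pairing of instances.
    lam = rng.beta(alpha, alpha, size=(batch_size, 1, 1, 1))
    perm = rng.permutation(batch_size)

    # Interpolate statistics with those of the paired instances.
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sigma_mix = lam * sigma + (1 - lam) * sigma[perm]

    # Re-style the normalized features with the mixed statistics.
    return x_norm * sigma_mix + mu_mix
```

With a batch of one, the permutation is the identity and the mixed statistics collapse back to the original ones, so the input passes through unchanged; applied to a real batch during training only, it perturbs feature "style" while leaving content intact.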

thanks!