lixiaotong97 / DSU

[ICLR 2022] Official PyTorch implementation of "Uncertainty Modeling for Out-of-Distribution Generalization", International Conference on Learning Representations (ICLR) 2022.

Evaluation Protocol #4

Closed woojung-son closed 2 years ago

woojung-son commented 2 years ago

Thanks for your awesome research! I have some questions about the experimental settings.

Q1. Could you describe the evaluation protocol you used? Did you split the images from the training domains into train/val sets when searching hyper-parameters?

Q2. Did you also train on the validation-set data when reporting the final results?

lixiaotong97 commented 2 years ago

Hi, thanks for your question. As noted in previous DG papers, when the testing domain is divergent from the training domains, a validation set drawn from the training domains is not reliable for model selection. We therefore follow them and adopt the last-checkpoint evaluation protocol. Besides, we do not change the training data splits; we keep them the same as in each original repo.
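The last-checkpoint protocol described above can be sketched as follows. This is a minimal, hypothetical example (toy model and random data, not the DSU training code): no validation split is used for model selection; the checkpoint is simply overwritten every epoch, and the final one is evaluated on the unseen domain.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in model and optimizer (hypothetical; the real repo trains a DG backbone).
model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Toy data standing in for the training domains; real experiments keep
# each benchmark's original splits unchanged.
x_train = torch.randn(32, 4)
y_train = torch.randint(0, 2, (32,))

ckpt_path = "last.ckpt"
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()
    # No early stopping or best-val selection: just keep the latest weights.
    torch.save(model.state_dict(), ckpt_path)

# Final report: load the *last* checkpoint and test on the unseen domain.
model.load_state_dict(torch.load(ckpt_path))
x_test = torch.randn(8, 4)  # stand-in for the held-out test domain
with torch.no_grad():
    preds = model(x_test).argmax(dim=1)
```

The key point is that hyper-parameters are not tuned against a training-domain validation accuracy, since that accuracy need not track performance on the divergent test domain.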