tim-learn / SHOT

Code released for our ICML 2020 paper "Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation"
MIT License

Not able to reproduce Open Set numbers #24

Closed roysubhankar closed 3 years ago

roysubhankar commented 3 years ago

Hi,

Thanks for making the code public.

I tried to reproduce the numbers for the open-set setting of Office-Home, but the numbers I get are well below what you report in the paper. I have already tried several torch and torchvision environments, but every environment gives lower numbers.

Would it be possible for you to upload the source-only model checkpoints for the open-set setting? I would then try to reproduce the SHOT-IM and SHOT numbers starting from your source-only checkpoints (source_F.pt, source_B.pt, and source_C.pt). It would be very helpful.

Thanks in advance.

tim-learn commented 3 years ago

Hi Roy, so the source-only model you trained performed worse than the one in our paper? If so, I will send these models to your email.

Best

roysubhankar commented 3 years ago

Hi @tim-learn , yes, the source models performed worse than in your paper. If you could please send them to this email, subhankar.roy@unitn.it, that would be great. Thank you.

roysubhankar commented 3 years ago

Hi @tim-learn , a gentle reminder to please send the checkpoints to the above email address, as we discussed.

tim-learn commented 3 years ago

Hi Roy, sorry for the delay. I re-trained SHOT today, and the average accuracy for ODA on Office-Home is 73.0%. The associated models are uploaded to https://drive.google.com/drive/folders/14GIyQ-Dj7Mr8_FJdPl4EBhFMgxQ2LXnq. Please try again and let me know whether it works for you.

Best

roysubhankar commented 3 years ago

Hi @tim-learn , thank you for sending the checkpoints.

I used your source-trained checkpoints to compute the source-only numbers for ODA, and the numbers I get are very poor (almost random guessing). I am using the default run command for ODA, attached below. [Screenshot of the run command and output, 2021-10-26]

I am not sure why the numbers are so bad. When I trained my own models, the numbers were better (not as high as what you report, but decent). I wonder whether there is some difference in the dataset list .txt files or in the installed packages.

Would it be possible to share the dataset_list.txt for each domain of Office-Home that you used for the experiments, along with the list of packages (and their versions)? It would be very helpful. Thank you again.
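(A quick way to rule out the list-file hypothesis is to diff the two lists directly. The sketch below assumes SHOT-style list files with one "image_path label" pair per line; the function names are just illustrative, not part of the repo.)

```python
# Sanity check for mismatched dataset list files, assuming SHOT-style
# lists where each line is "image_path label". Helper names are
# hypothetical, not from the SHOT codebase.

def load_list(path):
    """Parse a list file into an {image_path: label} dict."""
    entries = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                entries[parts[0]] = int(parts[1])
    return entries

def compare_lists(path_a, path_b):
    """Report images missing from one list or relabeled between the two."""
    a, b = load_list(path_a), load_list(path_b)
    only_a = sorted(set(a) - set(b))
    only_b = sorted(set(b) - set(a))
    relabeled = sorted(k for k in set(a) & set(b) if a[k] != b[k])
    return only_a, only_b, relabeled
```

If any of the three returned lists is non-empty, the two environments are effectively evaluating on different open-set splits, which would explain near-random accuracy.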

tim-learn commented 3 years ago

How about the performance for the other settings, like closed-set UDA or partial-set UDA, with this code? If those results are okay, then both the library package versions and the data list files are probably fine.

roysubhankar commented 3 years ago

Hi @tim-learn , the numbers now match your paper when I re-run your code. The problem was actually in the file lists: we were using different file lists for the open-set setting, and that is why the numbers were different. Thanks for your help. Closing the issue.