tntek / source-free-domain-adaptation


Cannot reproduce the results of CLIP ResNet backbone #3

Closed pilsHan closed 5 months ago

pilsHan commented 5 months ago

Hello, thanks for your great work. I was able to reproduce the results for B-32, but I found it hard to reproduce the results for RN.

I changed ARCH: ViT-B/32 to RN50 in the config file for the RN experiment, and got 50.56% on the office-home A->C task (paper: 62.6%).

In particular, on target domains such as office31-amazon and office-home-clipart, the accuracy of the zero-shot CLIP model came out at only about 20%.

I wonder if I missed something? I appreciate your help.

HazelSu commented 5 months ago

The normalization values of the image transform for RN and B-32 are different. I have updated the code; you can re-clone this project to get the fix.
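For anyone hitting the same issue, below is a minimal sketch (not the repo's actual code) of what a per-backbone normalization switch can look like. The CLIP and ImageNet statistics shown are well-known constants, but which backbone should use which set is an assumption here, as is the helper name; the authoritative values are in the updated repository code.

```python
# Illustrative sketch only: selecting image-normalization statistics per CLIP backbone.
from torchvision import transforms

# Well-known normalization statistics (CLIP's own vs. ImageNet's).
CLIP_MEAN, CLIP_STD = (0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)
IMAGENET_MEAN, IMAGENET_STD = (0.485, 0.456, 0.406), (0.229, 0.224, 0.225)

def build_eval_transform(arch: str):
    """Return an eval-time transform whose normalization depends on the backbone.

    Which statistics each backbone uses is an assumption for illustration;
    check the updated repo code for the actual RN50 vs. ViT-B/32 values.
    """
    if arch.startswith("RN"):
        mean, std = CLIP_MEAN, CLIP_STD            # assumed for ResNet backbones
    else:
        mean, std = IMAGENET_MEAN, IMAGENET_STD    # assumed for ViT-B/32
    return transforms.Compose([
        transforms.Resize(224),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean, std),
    ])

# Usage: transform = build_eval_transform("RN50")
```

Using normalization statistics that do not match the backbone's pretraining can easily cost tens of points of zero-shot accuracy, which would explain the ~20% numbers reported above.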