thuml / Transfer-Learning-Library

Transfer Learning Library for Domain Adaptation, Task Adaptation, and Domain Generalization
http://transfer.thuml.ai
MIT License

Cannot reproduce the results when fine-tuning the unsupervised pre-trained model in task_adaptation/image_classification #156

Closed: ReloJeffrey closed this issue 2 years ago

ReloJeffrey commented 2 years ago

When fine-tuning the unsupervised pre-trained model, only the erm result can be reproduced; the co_tuning and bi_tuning accuracies are lower than erm.

thucbx99 commented 2 years ago

Can you give a more detailed description, such as the experimental dataset and the proportion of labeled data? Also, which version of PyTorch are you using? We suggest pytorch==1.7.1 and torchvision==0.8.2 to reproduce the benchmark results.
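
(A quick way to check which versions are actually installed, assuming a standard PyTorch setup:)

```python
import torch
import torchvision

print(torch.__version__)        # the recommendation expects 1.7.1
print(torchvision.__version__)  # the recommendation expects 0.8.2
```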

ReloJeffrey commented 2 years ago

> Can you give a more detailed description, such as the experimental dataset and the proportion of labeled data? Also, which version of PyTorch are you using? We suggest pytorch==1.7.1 and torchvision==0.8.2 to reproduce the benchmark results.

I have tried running erm and co_tuning both for fine-tuning the supervised pre-trained model and for MoCo (unsupervised pretraining) on CUB-200-2011, Stanford Cars, and Aircraft. I just ran the provided .sh scripts, and all the results for fine-tuning the supervised pre-trained model can be reproduced. But for MoCo (unsupervised pretraining), only erm can be reproduced: the co_tuning results are lower than erm on all three datasets and at every proportion of labeled data. I have tried both torch==1.7.0 with torchvision==0.8.0 and pytorch==1.7.1 with torchvision==0.8.2.

thucbx99 commented 2 years ago

Thanks for providing the details. I'm running these experiments again.

thucbx99 commented 2 years ago

I found that this is because an important command-line option, --finetune, is missing from the provided script. Specifying --finetune applies a 0.1x learning rate to the backbone, which allows us to reproduce the results.
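
For readers hitting the same problem: a 0.1x backbone learning rate is typically implemented with optimizer parameter groups. A minimal sketch of the idea follows; the `build_optimizer` helper, the backbone/head split, and the hyperparameters are illustrative, not the library's exact code.

```python
import torch.nn as nn
from torch.optim import SGD

def build_optimizer(backbone: nn.Module, head: nn.Module,
                    base_lr: float, finetune: bool) -> SGD:
    # When fine-tuning a pre-trained backbone, give it a 0.1x learning
    # rate so its weights change slowly, while the randomly initialized
    # classifier head keeps the full learning rate.
    param_groups = [
        {"params": backbone.parameters(),
         "lr": 0.1 * base_lr if finetune else base_lr},
        {"params": head.parameters(), "lr": base_lr},
    ]
    # momentum/weight_decay values here are common defaults, not the
    # library's exact settings.
    return SGD(param_groups, lr=base_lr, momentum=0.9, weight_decay=5e-4)
```

Without the flag, the backbone trains at the full learning rate and can drift away from the pre-trained features, which would explain co_tuning and bi_tuning scoring below erm.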