Thank you so much for releasing the code and sharing this interesting work!
It seems your method is comparable with Co-Tuning [1] (please correct me if not), but I cannot find comparative results.
Could you please let me know whether DR-Tune outperforms Co-Tuning, and intuitively explain how your method is better?
That will be very helpful to my understanding.
Thank you in advance for your time.
Looking forward to your reply.
[1] Kaichao You, Zhi Kou, Mingsheng Long, and Jianmin Wang. Co-tuning for transfer learning. In NeurIPS, 2020.
Hi, thanks for your attention!
Comparing DR-Tune with Co-Tuning:
From the performance perspective, we recommend replacing the self-supervised pre-trained ResNet-50 used in Table 1 with a supervised pre-trained ResNet-50 for a fair comparison with Co-Tuning. We may provide these results later, depending on our timetable.
From the generalizability perspective, it seems Co-Tuning cannot be applied to self-supervised pre-trained models, as it relies on the source-category predictions of a supervised pre-trained classifier head, which self-supervised checkpoints lack.