lhoyer / MIC

[CVPR23] Official Implementation of MIC: Masked Image Consistency for Context-Enhanced Domain Adaptation

Questions about performance on another domain adaptation dataset in object detection #45

Closed Victory8858 closed 1 year ago

Victory8858 commented 1 year ago

Thank you for your excellent work! In the paper, only the Cityscapes -> Foggy Cityscapes result is reported for detection, so I recently tried some other benchmarks, namely Sim10k -> Cityscapes and KITTI -> Cityscapes. The first one works well and clearly shows the benefit of the masking augmentation and the domain adaptation heads: car AP reaches about 60%, which is state of the art compared to other works. For KITTI -> Cityscapes, however, raw MIC only reaches about 46% car AP, and when I set the three DA_HEADS loss weights to 0.0, the car AP actually increases to about 48%. I don't know how to explain this phenomenon. Moreover, KITTI -> Cityscapes cannot reach state of the art like the other two tasks (Cityscapes -> Foggy Cityscapes and Sim10k -> Cityscapes). Looking forward to your reply, thank you!

Victory8858 commented 1 year ago

Thanks in advance for your reply. Could you help me with this?

krumo commented 1 year ago

Hi, sorry for the late reply. I didn't try training on the KITTI -> Cityscapes task, but when I experimented with other UDA tasks I also observed a similar performance drop. Our implementation of MIC for detection is based on the SADA method, which relies on several hyperparameters (e.g. MODEL.DA_HEADS.DA_IMG_GRL_WEIGHT, MODEL.DA_HEADS.DA_INS_GRL_WEIGHT) to balance task-specific feature learning and domain alignment. Different UDA tasks may require different hyperparameters, and the MIC branch may also shift the optimal values. This is a limitation of the underlying SADA method.
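As a minimal illustration (not taken from the repository), the two loss weights mentioned above are yacs-style config keys and can be overridden in code roughly as sketched below; the default values and any additional DA_HEADS keys are assumptions and should be checked against the actual detection config:

```python
from yacs.config import CfgNode as CN

# Minimal sketch of the DA head loss-weight keys discussed above.
# The default values shown here are placeholders, not the repository's values.
cfg = CN()
cfg.MODEL = CN()
cfg.MODEL.DA_HEADS = CN()
cfg.MODEL.DA_HEADS.DA_IMG_GRL_WEIGHT = 1.0  # image-level adversarial loss weight
cfg.MODEL.DA_HEADS.DA_INS_GRL_WEIGHT = 1.0  # instance-level adversarial loss weight

# Lowering the weights for a new UDA task (values are only examples):
cfg.merge_from_list([
    "MODEL.DA_HEADS.DA_IMG_GRL_WEIGHT", 0.5,
    "MODEL.DA_HEADS.DA_INS_GRL_WEIGHT", 0.1,
])
print(cfg.MODEL.DA_HEADS)
```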

Empirically, I find that the image-level DA head benefits adaptation, while the instance-level DA head may require a lower loss weight to be beneficial. I would therefore suggest trying different loss weights at lines 274-276 for your specific task. In essence, I believe the adversarial DA heads are helpful for domain alignment, but it is important to carefully tune their hyperparameters to prevent potential adverse effects on feature learning.
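A small sweep over candidate weights can be automated. The script name, config path, and override style below are assumptions based on a typical maskrcnn-benchmark-style training entry point, so adapt them to the actual MIC detection command you use:

```python
# Hypothetical grid search over the DA head loss weights for a new
# source->target pair. The entry point and config path are assumptions.
import itertools
import subprocess

IMG_WEIGHTS = [1.0, 0.5, 0.1]
INS_WEIGHTS = [0.1, 0.01, 0.0]  # the instance-level head often wants a lower weight

for img_w, ins_w in itertools.product(IMG_WEIGHTS, INS_WEIGHTS):
    # Trailing key/value pairs are merged into the yacs config by the trainer.
    subprocess.run(
        [
            "python", "tools/train_net.py",                       # assumed entry point
            "--config-file", "configs/kitti_to_cityscapes.yaml",  # hypothetical config
            "MODEL.DA_HEADS.DA_IMG_GRL_WEIGHT", str(img_w),
            "MODEL.DA_HEADS.DA_INS_GRL_WEIGHT", str(ins_w),
            "OUTPUT_DIR", f"output/da_img{img_w}_ins{ins_w}",
        ],
        check=True,
    )
```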