@chengchunhsu @wasidennis I am looking forward to your reply.
I am also confused about this. When I use the same model-selection strategy for the source-only method, the AP50 is usually higher than the source-only baseline reported in the paper, which makes the paper's improvement less pronounced.
@tmp12316 Can you explain how you trained the source-only model for Sim10k (or KITTI or Cityscapes)? What steps should I follow?
Thank you for your amazing work!!!
I have reproduced almost all of the GA and CA results in your paper. That said, it is really hard to select the best models: following your configs, I have to save a checkpoint every 250 iterations and test them one by one with some tricks. With ResNet-101 I can get 45 AP50 on KITTI, 40 AP50 on Cityscapes, and 51.5 AP50 on Sim10k. I am not sure whether you also select the model by testing a large number of candidates.
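Concretely, my selection sweep looks roughly like the sketch below (the output directory, checkpoint naming pattern, and the run_test() helper are placeholders for whatever paths and test script are actually used, not your exact code):

```python
import glob
import os

# Placeholder for the training output folder that holds model_*.pth checkpoints.
OUTPUT_DIR = "training_dir/sim10k_to_cityscapes"

def run_test(checkpoint_path):
    """Placeholder: run the evaluation script on one checkpoint and return its AP50."""
    raise NotImplementedError

# Evaluate every saved checkpoint and keep the one with the best AP50.
best_ap50, best_ckpt = -1.0, None
for ckpt in sorted(glob.glob(os.path.join(OUTPUT_DIR, "model_*.pth"))):
    ap50 = run_test(ckpt)
    print(f"{os.path.basename(ckpt)}: AP50 = {ap50:.1f}")
    if ap50 > best_ap50:
        best_ap50, best_ckpt = ap50, ckpt

print(f"best checkpoint: {best_ckpt} (AP50 = {best_ap50:.1f})")
```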
However, if I use the same configurations and the same model-selection strategy to train a source-only model on plain FCOS, I can easily get 44 AP50 on KITTI, 27 AP50 on Cityscapes, and about 47 AP50 on Sim10k. Only the Cityscapes result looks normal. We have tested this both in your codebase and in a plain FCOS environment and obtained similar results, which really confuses me.
Therefore, could you please tell me how you train your source-only models? Are the configurations the same as the ones used with DA? For example, is the LR set to 0.005, is the number of training iterations for KITTI reduced, and do you report the best result selected from checkpoints saved every 250/500 iterations with some hand-crafted tricks?
For example, if we pick the best result between 1k and 5k iterations for KITTI on FCOS, which is a similar iteration range to the DA runs, the AP50 can reach 44. Or do you train all the source-only models with the COCO dataset schedule and report the results of the final models? Thank you so much!
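For reference, this is roughly how I have been launching the source-only runs, assuming the maskrcnn-benchmark-style tools/train_net.py entry point and yacs config keys used in your codebase; the config path and iteration count below are only examples of my settings, not your official ones:

```python
import subprocess

# Same solver settings as the DA configs (LR 0.005, checkpoint every 250 iters),
# just without the adaptation losses; shorter schedule for KITTI.
cmd = [
    "python", "tools/train_net.py",
    "--config-file", "configs/source_only_kitti.yaml",  # placeholder config
    "SOLVER.BASE_LR", "0.005",
    "SOLVER.MAX_ITER", "5000",          # example of a reduced KITTI schedule
    "SOLVER.CHECKPOINT_PERIOD", "250",  # so I can sweep checkpoints afterwards
    "OUTPUT_DIR", "training_dir/kitti_source_only",
]
subprocess.run(cmd, check=True)
```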