Hi Zhiqiang,
I tried running your re-implementation of Domain Adaptive Faster R-CNN on the PASCAL VOC to Clipart adaptation, and it reached a much better result (about 30 mAP) than the value reported in the "Strong-Weak Distribution Alignment" paper (19.8 mAP).
May I ask your opinion on this? Why does your re-implementation perform so much better? Or did I perhaps overlook some details of the training script and misuse it? (I used your trainval_net_dfrcnn.py and did not use any rendered datasets.)
Thanks a lot in advance for any help!
Best, Anton