luoluo123123123123 opened 1 year ago
Actually, this is what the paper talks about. If your model is trained on one water type and evaluated on another, the domain gap will cause overfitting: a model trained on ori+type1 will overfit relative to type2–type8. To alleviate this problem, the paper proposes DG-YOLO.
Thanks, but the weird thing is that ori+type1 can improve the mAP on ori. You said ori+type1 causes overfitting on type2–type8, yet it still performs reasonably well on the ori validation set.
It is expected that ori+type1 improves the mAP on ori: training and evaluation are in the same domain (both the training data and the test data contain ori), and type1 increases the diversity of the training data, so the model performs better on ori_val.
Sorry, I tried URPC2019 as ori, with type1 = URPC2019 + WQT (output scale = 600), and I used Faster R-CNN in detectron2. It does improve the generalization ability on the other types as the paper says, but when I test on the URPC2019 test set (ori test), there is a 2 mAP drop. Did I get a setting wrong? Maybe the size should be 416?
Are the training size and the test size the same? If not, there will be a performance drop. Since your WQT output size is 600, you'd better not use a training size higher than 600.
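In detectron2 this amounts to keeping the input resize settings aligned with the 600-px WQT outputs. A minimal config sketch, assuming a standard Faster R-CNN setup (the exact values are illustrative, not from the DG-YOLO repo):

```python
# Sketch: keep detectron2 train/test resizing consistent with 600-px WQT images.
from detectron2.config import get_cfg

cfg = get_cfg()
# Shorter side used during training; must not exceed the WQT output scale (600),
# otherwise the transferred images get upsampled past their native resolution.
cfg.INPUT.MIN_SIZE_TRAIN = (600,)
cfg.INPUT.MAX_SIZE_TRAIN = 1000
# Use the same shorter side at test time to avoid a train/test scale mismatch.
cfg.INPUT.MIN_SIZE_TEST = 600
cfg.INPUT.MAX_SIZE_TEST = 1000
```

Mismatched `MIN_SIZE_TRAIN` and `MIN_SIZE_TEST` is a common cause of an otherwise unexplained mAP drop on the in-domain test set.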
Oh, I forgot that point, thanks a lot. May I ask two more questions about your paper?

1. As you said, ori+type7 can improve mAP by 2, and ori+type4 can also improve mAP by 2, but why does ori+type7+type4 not improve mAP by 4 on ori?
2. Full WQT can improve unseen type8 from 16 to 30 mAP. That means, for instance, that the detector sees a starfish in dark green, light green, and light blue from WQT, and it not only recognizes starfish in those styles but also generalizes to type8. Does that mean WQT already learns domain-invariant information from the dataset? If so, why do we still need DG-YOLO? You said DG-YOLO abandons domain-related information and tries to learn domain-invariant information. That implies WQT only learns domain-related information (we feed in green starfish, so the detector should only learn green starfish), yet WQT still generalizes to unseen domains without dropping on ori. Why can WQT learn domain-invariant information without abandoning the domain-related information?
I guess you are Chinese, so I'll reply in Chinese to be clearer.
You can take a look at my latest work; I hope it gives you some inspiration: https://github.com/mousecpn/DMC-Domain-Generalization-for-Underwater-Object-Detection
In Table 1, WQT is used to create the type1 data. Does ori+type1 mean that you combine ori (4000 images) and type1 (4000 images), 8000 images in total, for fully supervised learning? Does it cause overfitting?
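If both splits are stored as COCO-style annotation files, combining them is just a matter of concatenating the image and annotation lists with non-clashing IDs. A minimal sketch (the helper name and dict layout are illustrative, not from the DG-YOLO repo):

```python
# Sketch: merge ori (4000 images) with its WQT-transferred copy (type1) into
# one 8000-image training set. Image/annotation IDs from type1 are offset so
# they stay unique in the merged COCO-style dict.
import copy

def merge_coco(ori, type1):
    """Return a single COCO-style dict containing both datasets."""
    merged = copy.deepcopy(ori)
    img_offset = max(img["id"] for img in ori["images"]) + 1
    ann_offset = max(ann["id"] for ann in ori["annotations"]) + 1
    for img in type1["images"]:
        merged["images"].append(dict(img, id=img["id"] + img_offset))
    for ann in type1["annotations"]:
        merged["annotations"].append(dict(
            ann,
            id=ann["id"] + ann_offset,
            image_id=ann["image_id"] + img_offset,
        ))
    return merged
```

Since type1 is a pixel-level restyling of ori, every box annotation carries over unchanged; only the IDs need remapping.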