Closed 1224wxwx closed 2 years ago
Hi, could you try these settings?
| Splitting | 1% | 2% | 5% | 10% |
|---|---|---|---|---|
| Burn-in epochs | 400 | 100 | 40 | 20 |
| Total epochs | 1000 | 800 | 400 | 200 |
Thank you for your reply! It seems the epoch counts are much higher than the original settings in Unbiased Teacher. Does this mean that Deformable DETR is much harder to train than anchor-based models?
For Unbiased Teacher, training runs for 180,000 iterations with a batch size of 32 for both labeled and unlabeled data. On the 10% setting, computing epochs from the perspective of the labeled samples gives roughly (180000 × 32) / (118287 × 0.1) ≈ 487 epochs; from the perspective of the unlabeled samples, it is roughly (180000 × 32) / (118287 × 0.9) ≈ 54 epochs. Our Deformable DETR-based method needs 200 epochs, so it is hard to say which schedule is longer.
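The epoch comparison above can be sketched as a few lines of arithmetic. The figures (180,000 iterations, batch size 32, 118,287 images in COCO train2017, 10% labeled split) come from the discussion; the variable names are just for illustration:

```python
# Effective epochs of Unbiased Teacher's schedule, seen from the labeled
# and unlabeled streams separately. One "epoch" = one full pass over the
# corresponding subset of COCO train2017.
ITERS = 180_000       # total training iterations
BATCH = 32            # images per step, for both labeled and unlabeled streams
COCO_TRAIN = 118_287  # images in COCO train2017
SPLIT = 0.10          # fraction of images that are labeled (10% setting)

labeled_epochs = ITERS * BATCH / (COCO_TRAIN * SPLIT)
unlabeled_epochs = ITERS * BATCH / (COCO_TRAIN * (1 - SPLIT))

print(f"labeled view:   ~{labeled_epochs:.0f} epochs")    # ~487
print(f"unlabeled view: ~{unlabeled_epochs:.0f} epochs")  # ~54
```

So whether 200 epochs is "longer" depends on which stream you count against: it is well under the 487 labeled-view epochs but well over the 54 unlabeled-view epochs.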
OK, thanks
Hi thanks for sharing your great project!
If I want to reproduce the SSOD results (in Table 3), could you please tell me how to set the epochs? Will it still be 150, as written in r50_ut_detr_omni_coco.sh?
I’m looking forward to hearing from you.