austinmw opened this issue 1 year ago
rpn_pseudo_thr=0.5
Thanks, will try this now and close thread if performance improves.
I can't find more info on this; it seems the Microsoft repo also sets `rpn_pseudo_thr=0.9`. Is this in the paper? And is the default of 0.9 meant for one of the other experiments?
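For reference, a minimal sketch of how the suggested threshold could be overridden in an MMDetection-style config that inherits from the Soft-Teacher base config. The exact key path (`semi_train_cfg` / `rpn_pseudo_thr`) is an assumption based on the config structure and may differ across MMDetection versions:

```python
# Hypothetical override config; verify the exact key path for
# rpn_pseudo_thr in your MMDetection version before using.
_base_ = './soft-teacher_faster-rcnn_r50-caffe_fpn_180k_semi-0.01-coco.py'

model = dict(
    semi_train_cfg=dict(
        # Lowered from the repo default of 0.9, per the suggestion above.
        rpn_pseudo_thr=0.5,
    ))
```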
@Czm369 Hi, I tried your suggestion, but the performance still seems to drop after ~30K iters. After running test.py on the best checkpoint, the score I got for fold 1 is only 17.2 mAP.
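For context, the evaluation above was done with MMDetection's test script; a typical invocation looks like the following (the checkpoint path and work dir are hypothetical placeholders):

```shell
# Evaluate a trained checkpoint with MMDetection's test script.
# work_dirs/.../best_*.pth is a placeholder; substitute your own checkpoint.
python tools/test.py \
    configs/soft_teacher/soft-teacher_faster-rcnn_r50-caffe_fpn_180k_semi-0.01-coco.py \
    work_dirs/soft_teacher_fold1/best_coco_bbox_mAP.pth
```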
Hi @austinmw, did you investigate this any more, or call it a day?
`lr=0.01` is meant for training with 8 GPUs; `auto_scale_lr = dict(enable=True, base_batch_size=40)` may bring new problems (for MeanTeacherHook).
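To make the interaction concrete, here is a minimal sketch of the linear scaling rule that `auto_scale_lr` applies, assuming MMEngine's standard behavior of scaling the learning rate by the ratio of the actual total batch size to `base_batch_size` (the exact hook internals are not reproduced here):

```python
def scale_lr(base_lr: float, base_batch_size: int, actual_batch_size: int) -> float:
    """Linear scaling rule: lr is multiplied by actual_batch / base_batch.

    This mirrors what MMEngine's auto_scale_lr option does when enabled;
    the function itself is an illustrative sketch, not library code.
    """
    return base_lr * actual_batch_size / base_batch_size


# Example: base lr 0.01 tuned for a total batch of 40 (e.g. 8 GPUs x 5),
# run on a single GPU with batch 5 -> lr is scaled down by 8x.
single_gpu_lr = scale_lr(0.01, base_batch_size=40, actual_batch_size=5)
```

Because the teacher's EMA momentum in MeanTeacherHook is not rescaled alongside the learning rate, changing the effective batch size can shift the student/teacher dynamics even when the lr is scaled correctly, which may be the "new problems" alluded to above.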
Hi, I'm attempting to replicate the Soft-Teacher performance of 20.46 mAP reported in the paper for 1% of labeled COCO data; however, I'm getting lower performance. Any pointers on whether I misunderstood or misconfigured something, or am evaluating the wrong way, are greatly appreciated.
I used the provided download and data-split scripts to prepare my COCO data into 5 folds, and have so far trained on 4 of the 5 folds. I'm using the config `configs/soft_teacher/soft-teacher_faster-rcnn_r50-caffe_fpn_180k_semi-0.01-coco.py`. Here's the performance I'm currently seeing: the val mAP starts going down at around 30K iters, and I stopped the runs at around 100K of the 180K total iters.
Here's my full config file: