Closed HMCCMH closed 3 years ago
Hi, I trained the models for more iterations. That's a possible reason for the better numbers.
But I see that OICR (caffe) sets 70000 iters, which matches PCL's OICR setting.
The step size is different, though. There may be other details I missed. Sorry, I haven't worked on WSOD for a long time.
All right, thanks.
Thank you very much for your work! Recently I wanted to test my own method on your model, so I first tried to train a baseline with the PyTorch 1.6.0 version of PCL by removing, one by one, the tricks you added later. First, I used vgg16_voc2007.yaml with WITH_FRCNN set to False and ran the downloaded code as-is, which gave mAP = 0.5071. Next, I removed the following from the get_proposal_clusters function in pcl.py:
```python
ig_inds = np.where(max_overlaps < cfg.TRAIN.BG_THRESH)[0]
cls_loss_weights[ig_inds] = 0.0
```
(lines 242 and 243), which gave mAP = 0.4659. Then I removed lines 158 and 159 of model_builder.py, i.e. the trick that weights the first refinement branch's loss 3x, and got mAP = 0.4638. Compared with the Updates section of README.md and the OICR-VGG16 mAP = 0.4120 reported in your OICR paper, there is still a gap of more than 5 points. Where in the code does that improvement come from?
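For reference, the two tricks being removed above can be sketched as follows. This is a minimal NumPy sketch, not the repository's exact code; the function names, the `bg_thresh` default, and the per-branch loss list are assumptions for illustration.

```python
import numpy as np

def ignore_low_overlap(cls_loss_weights, max_overlaps, bg_thresh=0.1):
    """Trick 1 (pcl.py, get_proposal_clusters): zero the loss weight of
    proposals whose best overlap with any cluster center falls below
    bg_thresh, so they count as neither foreground nor background.
    bg_thresh stands in for cfg.TRAIN.BG_THRESH; its value here is assumed."""
    w = cls_loss_weights.copy()
    ig_inds = np.where(max_overlaps < bg_thresh)[0]
    w[ig_inds] = 0.0
    return w

def total_refine_loss(branch_losses, first_branch_scale=3.0):
    """Trick 2 (model_builder.py): scale the first refinement branch's
    loss by 3x before summing over all refinement branches."""
    scales = [first_branch_scale] + [1.0] * (len(branch_losses) - 1)
    return sum(s * l for s, l in zip(scales, branch_losses))
```

Deleting trick 1 corresponds to leaving `cls_loss_weights` untouched; deleting trick 2 corresponds to summing the branch losses with equal weight.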