Open JaringAu opened 4 years ago
Hi @JaringAu, please check FAQs.
Thanks for your kind help. @ZhouYanzhao
We achieve the reported performance with the reference model, but fail to get comparable results with our own model (also trained on trainaug).
So we wonder if you could kindly share the training strategy or key hyper-parameter settings in your experiments?
Thanks.
Hi @JaringAu , how do you get the MCG proposals (w/ COB signal)?
Or do you already have any proposals you could share with me?
Thanks a lot.
Hello, @JaringAu
Sorry to bother you. When I tried to reproduce the results of PRM (CVPR 2018), I got 20.8 mAP on the VOC 2012 val set with ResNet-50 and 17.1 mAP with VGG16, both a little lower than the paper. Did you manage to reproduce the reported results?
Thank you very much!
Hi,
This is an interesting work, but we can't achieve the performance reported in PRM (26.8 mAP50 with MCG proposals). We only get 11.5 mAP50 with the MCG proposals downloaded from https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/mcg/ and 21.5 mAP50 with the COB proposals downloaded from http://www.vision.ee.ethz.ch/~cvlsegmentation/cob/code.html.
We use the default parameters of PRM (https://github.com/ZhouYanzhao/PRM/blob/pytorch/demo/config.yml) to train the classification network (changing train_splits from trainval to trainaug, of course). But we notice that both the quality of the peaks and the instance masks are worse than those reported in the paper.
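For reference, the only edit we made to the default config is the training split. A minimal sketch of the assumed change (the train_splits key is the one from the PRM demo/config.yml; every other key is left at its default value):

```yaml
# demo/config.yml (PRM defaults), with only the data split changed:
# original value was trainval; we train on the SBD-augmented split instead.
train_splits: trainaug
```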
So we wonder if you use other hyper-parameter settings in your experiments?
Besides, according to our observations, the MCG proposals from https://data.vision.ee.ethz.ch/jpont/mcg/MCG-Pascal-Segmentation_trainvaltest_2012-proposals.tgz are much worse than those shown in the paper and the supplementary material. So do we need to retrain MCG on the PASCAL train set to generate better proposals?
Could you point out the differences between our experiments and yours that may result in the gap, or give us some advice to boost the performance?
Thanks a lot.