yanxp / MetaR-CNN

Meta R-CNN : Towards General Solver for Instance-level Low-shot Learning
https://yanxp.github.io/metarcnn.html

Paper’s results are not reproducible #31

Open mandal4 opened 4 years ago

mandal4 commented 4 years ago

As many other issues here have noted, the results in the paper are not reproducible. Performance shows high variance depending on the randomly sampled k-shot images in the 2nd training phase, so I think the model is somewhat unstable for this task. It would be better if the author released the list of sampled k-shot images used to produce the paper's results, as the author of another ICCV 2019 FSOD paper did (Few-shot Object Detection via Feature Reweighting).

yanxp commented 4 years ago

Since there is variance in the novel samples in phase 2, we evaluated the model five times and averaged the results for the paper, because there was no standard novel-sample setting before.
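A minimal sketch of this protocol (the image-id pool and the fine-tune/evaluate helpers below are placeholders, not code from this repository):

```python
import random
import statistics

# Hypothetical pool of image ids; in practice these would be the
# novel-class images of the benchmark (e.g. VOC).
IMAGE_IDS = [f"{i:06d}" for i in range(1000)]

def sample_kshot(image_ids, k, seed):
    """Draw one random k-shot support set. Placeholder for the real
    per-class sampling done in the phase-2 fine-tuning."""
    return random.Random(seed).sample(image_ids, k)

def finetune_and_eval(support):
    """Placeholder for phase-2 fine-tuning plus test-set evaluation.
    Returns a fake mAP so the sketch runs end to end; swap in the
    real training/evaluation here."""
    return 50.0 + random.Random(sum(map(int, support))).uniform(-5.0, 5.0)

scores = [finetune_and_eval(sample_kshot(IMAGE_IDS, k=10, seed=s))
          for s in range(5)]  # five independent random draws

print(f"mAP over 5 runs: {statistics.mean(scores):.1f} "
      f"+/- {statistics.stdev(scores):.1f}")
```

Reporting the standard deviation alongside the mean, as the sketch does, is what makes the variance across draws visible in the first place.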

Hxx2048 commented 4 years ago

@mandal4 Hello, I found the two papers are similar (FSOD and this paper). Have you reproduced the results of FSOD? I haven't tried FSOD because its code is based on Python 2.7.

mandal4 commented 4 years ago

> @mandal4 Hello, I found the two papers are similar (FSOD and this paper). Have you reproduced the results of FSOD? I haven't tried FSOD because its code is based on Python 2.7.

Yes, I reproduced the results of that paper with a very marginal gap. The author said that evaluating with the MATLAB code reproduces the results. As I mentioned in this issue, that paper uses fixed fine-tuning samples, so the results were stable.
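The stabilizing trick is simply to freeze one sampled split and share it. A rough sketch of writing out such a fixed k-shot list (the id pool and file format here are made up, not the actual FSOD split format):

```python
import random

NOVEL_CLASSES = ["bird", "bus", "cow", "motorbike", "sofa"]  # VOC novel split 1
IMAGE_IDS = [f"{i:06d}" for i in range(1000)]  # hypothetical id pool
K, SEED = 10, 0

rng = random.Random(SEED)
with open(f"novel_{K}shot_seed{SEED}.txt", "w") as f:
    for cls in NOVEL_CLASSES:
        # One line per (class, image) pair; every later fine-tuning run
        # reads this file instead of re-sampling.
        for image_id in rng.sample(IMAGE_IDS, K):
            f.write(f"{cls} {image_id}\n")
```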

Hxx2048 commented 4 years ago

> @mandal4 Hello, I found the two papers are similar (FSOD and this paper). Have you reproduced the results of FSOD? I haven't tried FSOD because its code is based on Python 2.7.
>
> Yes, I reproduced the results of that paper with a very marginal gap. The author said that evaluating with the MATLAB code reproduces the results. As I mentioned in this issue, that paper uses fixed fine-tuning samples, so the results were stable.

I have always felt that the idea of the two papers is the same (reweighting meta features), just with different detectors (YOLOv2 and Faster R-CNN). I will read the code in detail again, thank you.

Hxx2048 commented 4 years ago

@mandal4 In addition, did you use this version of the FSOD code? https://github.com/bingykang/Fewshot_Detection

mandal4 commented 4 years ago

@yanxp

> Since there is variance in the novel samples in phase 2, we evaluated the model five times and averaged the results for the paper, because there was no standard novel-sample setting before.

I think we could still fix the subset of samples to average over. It's a shame that this process is not described in your paper, even if no standard procedure existed at the time.

mandal4 commented 4 years ago

> @mandal4 In addition, did you use this version of the FSOD code? https://github.com/bingykang/Fewshot_Detection

Yes, I used that version of the code.

XiongweiWu commented 4 years ago

@mandal4 In my view, reporting the average score over multiple randomly generated training sets is more reasonable, so I am not sure whether the randomness also matters that much in the Meta-YOLO paper.

NHW2017 commented 3 years ago

> Since there is variance in the novel samples in phase 2, we evaluated the model five times and averaged the results for the paper, because there was no standard novel-sample setting before.

Sorry to interrupt. I would like to ask whether you have tried adding focal loss (or a loss function with a similar purpose) to the code? How did it affect the results?
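For reference, a generic multi-class focal loss (Lin et al., 2017) drop-in for the RoI classification head would look roughly like this; it is a sketch, not code from this repo, and the scalar `alpha` is a simplification of the usual per-class weighting:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Multi-class focal loss on raw classifier logits.

    gamma down-weights easy examples; the scalar alpha here is a
    simplification of the per-class weighting in the original paper.
    """
    ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t
    pt = torch.exp(-ce)                                      # p_t
    return (alpha * (1.0 - pt) ** gamma * ce).mean()

# Dummy RoI scores: 8 boxes, 20 VOC classes + background.
logits = torch.randn(8, 21)
targets = torch.randint(0, 21, (8,))
print(focal_loss(logits, targets))
```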