jiaxi-wu / MPSR

Multi-scale Positive Sample Refinement for Few-shot Object Detection, ECCV2020
MIT License

Perf averaged over multiple experimental runs #10

Closed YoungXIAO13 closed 4 years ago

YoungXIAO13 commented 4 years ago

Hi,

Thanks for sharing the code. I have just one question about the evaluation protocol for few-shot object detection.

As proposed in the recent TFA paper, where performance is computed as an average over multiple experimental runs with random support images, have you also done this in your work and reported the averaged results somewhere?
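
(For context, that protocol amounts to repeating the whole fine-tune/evaluate cycle over several random support sets and reporting the mean rather than a single run. A minimal sketch of the loop, where `run_experiment` is a hypothetical stand-in for one full training/evaluation run, and the returned score here is simulated just to make the snippet self-contained:)

```python
import random
import statistics

def run_experiment(seed: int) -> float:
    """Hypothetical stand-in for one full cycle: sample a support set
    with `seed`, fine-tune, and return novel-class AP50.
    Here it only simulates the run-to-run variance TFA reports."""
    rng = random.Random(seed)
    return 45.0 + rng.gauss(0, 3)  # placeholder score, not a real result

# Repeat with different random support sets and report mean and spread.
aps = [run_experiment(seed) for seed in range(10)]
print(f"AP50: {statistics.mean(aps):.1f} +/- {statistics.stdev(aps):.1f} "
      f"over {len(aps)} runs")
```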

jiaxi-wu commented 4 years ago

Hi @YoungXIAO13, we only evaluate our method on the data splits provided by FSRW (also used in TFA's Section 4.1) and with our natural sampling strategy (you can find them at tools/fewshot_exp/datasets/voc_sample_series.py).
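
(Conceptually, building a K-shot split just means picking K support images per novel class under some random seed. A minimal illustrative sketch of that idea, not the actual contents of voc_sample_series.py; `sample_k_shot_split`, its inputs, and the toy data are made up for the example:)

```python
import random
from collections import defaultdict

def sample_k_shot_split(annotations, novel_classes, k, seed=0):
    """Illustrative K-shot split builder (not the repo's script).
    `annotations` maps image_id -> set of class names present in it;
    keeps up to k randomly chosen images per novel class."""
    rng = random.Random(seed)
    per_class = defaultdict(list)
    for image_id, classes in annotations.items():
        for cls in classes & set(novel_classes):
            per_class[cls].append(image_id)
    return {cls: rng.sample(ids, min(k, len(ids)))
            for cls, ids in per_class.items()}

# Toy usage with made-up annotations:
anns = {"img1": {"bird", "person"}, "img2": {"bird"}, "img3": {"bus"}}
print(sample_k_shot_split(anns, ["bird", "bus"], k=1))
```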

The generalized few-shot object detection benchmark is a new contribution of TFA, which reveals the large sample variance that had previously been ignored. However, their paper appeared on arXiv after our paper submission, so we did not add this evaluation to our work.

If you are curious about how well our method works on this generalized benchmark, please evaluate it yourself, since we do not currently have the resources for this. Thanks, and looking forward to your reply.

YoungXIAO13 commented 4 years ago

Thanks for your reply. Another question I'm wondering about: do you exclude all training samples containing novel classes in the base training stage, or do you treat novel objects as background, as mentioned here?

jiaxi-wu commented 4 years ago

I exclude images containing novel-class objects in the base training stage (see tools/fewshot_exp/datasets/voc_create_base.py).
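
(A minimal sketch of that filtering, assuming VOC-style XML annotations and FSRW's split-1 novel classes; this is an illustration, not the actual voc_create_base.py, and the VOC2007/Annotations path is just an example:)

```python
import xml.etree.ElementTree as ET
from pathlib import Path

NOVEL_CLASSES = {"bird", "bus", "cow", "motorbike", "sofa"}  # FSRW VOC split 1

def keep_for_base_training(xml_path: Path) -> bool:
    """Return True only if the VOC annotation contains no novel-class
    object, so novel instances never appear, even as background."""
    root = ET.parse(xml_path).getroot()
    names = {obj.findtext("name") for obj in root.iter("object")}
    return names.isdisjoint(NOVEL_CLASSES)

# Example path; adjust to your VOC layout.
base_ids = [p.stem for p in Path("VOC2007/Annotations").glob("*.xml")
            if keep_for_base_training(p)]
```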