Hi,
The results in this paper are obtained by running experiments with randomly selected support images for each experiment. The support images used in the few-shot fine-tuning stage are generated here and then filtered here to make sure there are only K annotated objects for each novel class.
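For illustration, a minimal sketch of that filtering idea is below; the function and field names (`build_k_shot_support`, `annotations`, `category`) are hypothetical and do not come from the actual scripts linked above.

```python
# Hypothetical sketch: keep exactly K annotated objects per novel class
# from a pool of candidate annotations. Names are illustrative only.
import random
from collections import defaultdict

def build_k_shot_support(annotations, novel_classes, k, seed=0):
    """annotations: list of dicts like {"image_id": ..., "category": ..., "bbox": ...}
    novel_classes: iterable of novel class ids/names
    k: number of shots to keep per novel class
    """
    rng = random.Random(seed)
    per_class = defaultdict(list)
    for ann in annotations:
        if ann["category"] in novel_classes:
            per_class[ann["category"]].append(ann)

    support = []
    for cls, anns in per_class.items():
        rng.shuffle(anns)
        support.extend(anns[:k])  # keep only K instances for this class
    return support
```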
Briefly, we run the few-shot fine-tuning stage 10 times with different support images and then average the results over these runs (similar to what is done in the recent ICML paper TFA). Thus we do not use the same samples as MetaYOLO; their results are based on a single experimental run.
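A rough sketch of that multi-run protocol might look like the following; `run_finetuning` is a hypothetical placeholder for the repo's actual fine-tuning entry point, not its real API.

```python
# Illustrative sketch: fine-tune once per seed on a different random support
# set, then report the mean (and spread) of the resulting novel-class AP.
from statistics import mean, stdev

def evaluate_over_runs(run_finetuning, num_runs=10):
    ap_values = []
    for seed in range(num_runs):
        ap = run_finetuning(seed=seed)  # assumed to return novel-class AP for this run
        ap_values.append(ap)
    return mean(ap_values), stdev(ap_values)
```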
Hi,
I noticed that the reported results for the ICML paper (TFA) in your article are quite different from what they reported. It seems that the results in both papers were calculated by averaging over multiple runs of the fine-tuning stage.
Could you please explain how you produced the results for TFA?
Best
Hi,
I actually report the same results as TFA (cf. Table 7 and Table 8 in their paper). The results in Table 1 of TFA are NOT calculated by averaging multiple runs; they simply follow the FSRW paper and use a fixed support set. For TFA's results over multiple runs, you can refer to Section 4.2 (Generalized few-shot object detection benchmark) of their paper for more details.
Thanks for a nice paper! I have two questions.
In the paper, you mention 'Results averaged over multiple random runs'. I'd like to ask which samples you used to calculate the averaged results.
Is there an experimental result with the same samples used by MetaYOLO (ICCV 2019)?