bobokeley opened this issue 4 years ago
Hi @bobokeley ,
I actually randomly select the support data for each experimental run and report results averaged over multiple runs, which are directly comparable to Tables 7 & 8 in TFA (the results of FSRW are also reported there).
Otherwise, you can find the image list chosen in FSRW and modify this function to obtain the same support data as theirs. A simple solution could be to replace the randomly shuffled image ids with the specific image ids chosen in FSRW, as sketched below. Once this prndata is successfully generated, you're good to train the model and compare the results with them.
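For example, something along these lines (just a sketch; the function name, arguments, and `fsrw_image_ids` are placeholders, not the actual code in this repo):

```python
# Minimal sketch: fall back to FSRW's fixed image list instead of a random
# draw when building the support (prn) data. Names here are hypothetical.
import random

def select_support_ids(all_image_ids, shots, fsrw_image_ids=None, seed=None):
    """Return the image ids used to build the support data."""
    if fsrw_image_ids is not None:
        # Use the exact image list released by FSRW so the generated
        # support data matches their setting.
        return list(fsrw_image_ids)[:shots]
    rng = random.Random(seed)
    ids = list(all_image_ids)
    rng.shuffle(ids)  # original behaviour: a fresh random draw per run
    return ids[:shots]
```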
Thanks for your reply! I'll try it right now!
In your provided test scripts, you set meta-test=True. Is this really directly comparable to TFA? Correct me if I'm wrong.
Hi @bsun0802 ,
This "meta_test" simply means that class-level feature vectors is used in testing, which are extracted from the support data. And the name "meta_test" is copied from Meta RCNN.
Extracting class features from the support data and using them in testing is a common setting in few-shot object detection (FSRW, Meta RCNN); see the sketch below. These papers are also compared against in TFA, so I do not see a specific reason why that would not be comparable to TFA.
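Roughly, the protocol looks like this (a generic sketch of the idea, not this repo's implementation; the function and tensor names are assumptions):

```python
# Sketch: build one class-level feature vector per class by averaging the
# features extracted from that class's support images; these vectors are then
# reused for every query image at test time.
import torch

def build_class_vectors(support_feats, support_labels, num_classes):
    """support_feats: (N, D) features of support crops; support_labels: (N,)."""
    dim = support_feats.size(1)
    class_vectors = torch.zeros(num_classes, dim)
    for c in range(num_classes):
        mask = support_labels == c
        if mask.any():
            # Per-class average of the support features.
            class_vectors[c] = support_feats[mask].mean(dim=0)
    return class_vectors
```

At inference, the detector is conditioned on these fixed class vectors rather than re-extracting them per query, which is the same evaluation protocol used by FSRW and Meta RCNN.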
Great, thanks for your reply.
The meta-test naming is a bit confusing.
And your work is indeed the current SOTA 👍
Hi, I am wondering if the VOC and COCO finetuning data is the same as in FSRW, and if not, how you would suggest modifying the code to adapt to their data settings?
Hello, I have the same question. Have you run an evaluation with the same data samples as FSRW? If so, could you share the results?