Ze-Yang opened this issue 4 years ago
The cfg files for Baseline-FPN are the same as for MPSR. You may need to modify the detection head for class-agnostic regression if you are working on the original maskrcnn-benchmark repo. Refer to Section 3.1 of our paper for more details. Hope this helps.
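To illustrate what a class-agnostic regression head means in practice, here is a minimal sketch of an R-CNN box predictor. This is not the MPSR or maskrcnn-benchmark implementation; the class `BoxPredictor` and its attribute names are illustrative assumptions. The key point is that the regression branch outputs a single set of 4 box deltas per RoI instead of 4 per class, so the regressor learned on base classes can be reused for novel classes:

```python
import torch
import torch.nn as nn

class BoxPredictor(nn.Module):
    """Minimal R-CNN box-head predictor (illustrative, not the repo's code).

    With class-agnostic regression the box branch predicts one set of
    4 deltas per RoI rather than 4 per class.
    """

    def __init__(self, in_channels, num_classes, cls_agnostic_bbox_reg=True):
        super().__init__()
        self.cls_score = nn.Linear(in_channels, num_classes)
        num_bbox_reg_classes = 1 if cls_agnostic_bbox_reg else num_classes
        self.bbox_pred = nn.Linear(in_channels, num_bbox_reg_classes * 4)

    def forward(self, x):
        return self.cls_score(x), self.bbox_pred(x)

# 21 = 20 VOC classes + background; 1024-d RoI features are an assumption.
head = BoxPredictor(in_channels=1024, num_classes=21)
feats = torch.randn(8, 1024)
scores, deltas = head(feats)
print(scores.shape, deltas.shape)  # torch.Size([8, 21]) torch.Size([8, 4])
```

With `cls_agnostic_bbox_reg=False` the regression output would instead be `num_classes * 4` wide, which is the part that has to change when adapting the original repo.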
Noted with thanks. What about the second question, regarding the same data samples giving different results? Thanks.
I recommend asking the authors of Frustratingly Simple Few-Shot Object Detection why their split-2 10-shot performance is so low. You can compare these results with their earlier work FSRW: on split-2, their results exceed FSRW at every shot number except 10-shot. I'm curious about this, too. Thanks.
Okay, thanks for the advice.
Hi, following your instruction, I set the regressor to be class-agnostic and trained it on VOC_split1_base. After that, I removed the last layer of the classifier and fine-tuned with a randomly initialized one on the FSRW 5-shot samples. The results are shown in the picture below. There is still a gap compared with yours. Note that I used the same training strategy as your config file. Is there anything else I need to pay attention to? Thanks.
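The fine-tuning step described above (keep the base-trained weights, but discard the base classifier before fitting on the few-shot set) can be sketched as follows. The attribute name `cls_score` and the feature dimension are hypothetical; adapt them to the actual head in your codebase:

```python
import torch.nn as nn

def reset_classifier(head, feat_dim, num_classes):
    """Replace the final classification layer with a freshly initialized one.

    Keeps the backbone, RPN, and (class-agnostic) box regressor as trained
    on the base classes; only the classifier is re-initialized for the
    few-shot fine-tuning stage. `cls_score` is an assumed attribute name.
    """
    head.cls_score = nn.Linear(feat_dim, num_classes)
    nn.init.normal_(head.cls_score.weight, std=0.01)
    nn.init.constant_(head.cls_score.bias, 0)
    return head

# Usage: a dummy head with a 16-way base classifier (15 base classes +
# background), re-initialized for 21 outputs (20 VOC classes + background).
class DummyHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.cls_score = nn.Linear(1024, 16)

head = reset_classifier(DummyHead(), feat_dim=1024, num_classes=21)
print(head.cls_score.out_features)  # 21
```

When loading the base checkpoint, the old `cls_score.*` tensors simply do not match the new layer's shape, so they must be dropped from the state dict before `load_state_dict` (or loaded with `strict=False`).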
It seems the performance on the base classes is also low. I guess something may have gone wrong during base training or fine-tuning. Can you show more info, such as:
FYI, you can inspect the training curves with

tensorboard --logdir xxx

There are some different settings:
I recommend a shorter training schedule temporarily.
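For reference, a shortened maskrcnn-benchmark-style solver schedule might look like the fragment below. The iteration counts and learning rate are illustrative assumptions for few-shot fine-tuning, not the repo's actual values:

```yaml
SOLVER:
  BASE_LR: 0.001     # often lowered for few-shot fine-tuning
  STEPS: (3000,)     # decay the LR once, earlier than in base training
  MAX_ITER: 4000     # stop well before the base-training schedule
  IMS_PER_BATCH: 4
```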
Following your instruction, I got base mAP 70.89 and novel mAP 44.77 for split-1 10-shot. The log file can be accessed here. Could you please provide the code to reproduce Baseline-FPN? Generally there may be differing implementation details across codebases, making the results difficult to reproduce.
Hi~ @Ze-Yang I will try to modify this repo for the Baseline-FPN experiments, which may take about a week.
I would like to reproduce the Baseline-FPN results. Are there any hyperparameters that need adjusting, or should I just use the same ones? Also, I notice that your Baseline-FPN results differ somewhat from those of Frustratingly Simple Few-Shot Object Detection under the same few-shot training samples, e.g. 43.8 (yours) vs. 39.0 (theirs) on the split-2 10-shot setting. Is this due to implementation? Thanks.