wangchen1801 / FPD

Official code of the paper "Fine-Grained Prototypes Distillation for Few-Shot Object Detection (AAAI 2024)"
https://arxiv.org/pdf/2401.07629.pdf

How to reproduce the "average results over multiple runs"? #5

Closed: FrankLeeCode closed this issue 4 months ago

FrankLeeCode commented 4 months ago

To my understanding, the "Average results over multiple runs" are derived from 30 distinct sample seeds, as demonstrated in the data split. However, upon examining the re-organized data split, I noticed that annotations for only one seed are provided (as indicated here). Consequently, I can only obtain results from a single run. How can I replicate the process to achieve the "Average results over multiple runs"?

wangchen1801 commented 4 months ago

Thank you for your interest in our work! Currently, mmfewshot only supports one fixed few-shot dataset. To evaluate the samples generated from different seeds, we can manually replace the .txt files in benchmark_[K]shot and modify the image paths, for example:

datasets/VOC2012/JPEGImages/2008_006761.jpg -> VOC2012/JPEGImages/2008_006761.jpg 
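
For reference, this manual step could be scripted roughly as below. This is only a sketch: it assumes one seed's split files follow the box_{K}shot_{class}_train.txt naming used in the benchmark_[K]shot folder, and it simply strips the leading datasets/ prefix while copying them into the folder that mmfewshot reads; the seed directory in the usage comment is hypothetical.

    # Sketch only: copy one seed's few-shot split files into the fixed
    # benchmark_{K}shot directory read by mmfewshot, stripping the leading
    # 'datasets/' from every image path (as in the example above).
    import glob
    import os

    def install_seed(seed_dir, benchmark_dir):
        os.makedirs(benchmark_dir, exist_ok=True)
        for src in glob.glob(os.path.join(seed_dir, 'box_*shot_*_train.txt')):
            with open(src) as f:
                lines = [line.replace('datasets/', '', 1) for line in f]
            dst = os.path.join(benchmark_dir, os.path.basename(src))
            with open(dst, 'w') as f:
                f.writelines(lines)

    # Hypothetical usage, assuming one seed's files were unpacked to a 'seed2' folder:
    # install_seed('data/few_shot_ann/voc/seed2/benchmark_10shot',
    #              'data/few_shot_ann/voc/benchmark_10shot')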

We can also register the different samples into FewShotVOCDataset in mmfewshot/mmfewshot/detection/datasets/voc.py, as shown in the following code:

    # Map each 'SPLIT{split}_{shot}SHOT' setting to its per-class annotation
    # files under data/few_shot_ann/voc/benchmark_{shot}shot/.
    voc_benchmark = {
        f'SPLIT{split}_{shot}SHOT': [
            dict(
                type='ann_file',
                ann_file=f'data/few_shot_ann/voc/benchmark_{shot}shot/'
                f'box_{shot}shot_{class_name}_train.txt',
                ann_classes=[class_name])
            for class_name in VOC_SPLIT[f'ALL_CLASSES_SPLIT{split}']
        ]
        for shot in [1, 2, 3, 5, 10] for split in [1, 2, 3]
    }
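
To keep several seeds side by side instead of overwriting the default files, each seed's samples could also get its own key. The snippet below is only an assumed extension of the dict above: the SEED2 suffix and the data/few_shot_ann/voc/seed2/ directory are illustrative, and the dataset config would then have to select the new setting key (in mmfewshot this is normally done through the ann_cfg setting of FewShotVOCDefaultDataset; check the repo's configs for the exact form).

    # Assumed layout (illustrative): a second seed's .txt files live under
    # data/few_shot_ann/voc/seed2/benchmark_{shot}shot/ and are registered
    # under separate '..._SEED2' keys.
    voc_benchmark_seed2 = {
        f'SPLIT{split}_{shot}SHOT_SEED2': [
            dict(
                type='ann_file',
                ann_file=f'data/few_shot_ann/voc/seed2/benchmark_{shot}shot/'
                f'box_{shot}shot_{class_name}_train.txt',
                ann_classes=[class_name])
            for class_name in VOC_SPLIT[f'ALL_CLASSES_SPLIT{split}']
        ]
        for shot in [1, 2, 3, 5, 10] for split in [1, 2, 3]
    }

Each run would then be fine-tuned and evaluated against its own setting, and the reported numbers averaged over the runs.
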
FrankLeeCode commented 4 months ago

Thank you for your prompt response! I truly appreciate it. Your explanation has resolved my query perfectly.