Closed: Nathan-Li123 closed this issue 1 month ago
Hi! Thanks for the question.
If I understand correctly, you are referring as to the "class names (aka, vocabulary)". You do not need to split Base/Novel for evaluation. For the datasets used in this project, iNat and FSOD contain only novel classes (since models are trained on LVIS, and ImageNet or COCO-Caption, they've never seen iNat and FSOD).
If you want to customize the project for your own dataset, you do not have to input the split either. The evaluation output contains both per-class AP and overall mAP. So, if you want the performance on the base or novel split specifically, you can simply post-process by summing that split's per-class APs and averaging them: mAP_novel = (AP_1 + ... + AP_N) / N, where AP_n is the per-class AP of the n-th class in the novel (or base) split, and N is the number of classes in that split.
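As a minimal sketch, the post-processing above is just an average of per-class APs over whichever split you define. The function and class names below are hypothetical placeholders; substitute your evaluator's actual per-class AP output and your own split definition:

```python
def split_map(per_class_ap, split_classes):
    """Average per-class AP over a split: mAP = (AP_1 + ... + AP_N) / N.

    per_class_ap: dict mapping class name -> AP (from the evaluation output).
    split_classes: list of class names in the base or novel split (user-defined).
    """
    aps = [per_class_ap[c] for c in split_classes]
    return sum(aps) / len(aps)


# Made-up example values; real APs come from the per-class evaluation results.
per_class_ap = {"cat": 0.50, "dog": 0.25, "axolotl": 0.50, "quokka": 0.25}
novel_classes = ["axolotl", "quokka"]  # hypothetical novel split
print(split_map(per_class_ap, novel_classes))  # 0.375 (mAP_novel)
```

The same call with the base class list gives mAP_base.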
Thanks.
During evaluation, do we need to input all the novel classes from the validation set into the model at once for inference, or should we input the novel classes for each sequence in the validation set separately? Looking forward to your response.