lorebianchi98 / FG-OVD

[CVPR2024 Highlight] Official repository of the paper "The devil is in the fine-grained details: Evaluating open-vocabulary object detectors for fine-grained understanding."
https://lorebianchi98.github.io/FG-OVD/

About Parameter Setting for Inference #6

Closed ysysys666 closed 1 month ago

ysysys666 commented 2 months ago

Hello author, thank you for your excellent work. I am trying to reproduce the evaluation results of the models in your paper, and I have two questions I hope you can answer. When I run evaluate_map.py, what values should simplify_errors, disable_nms, remove_pacco, and evaluate_all_vocabulary be set to?

When I leave them all at their default values, the evaluation result for CORA is higher than the one reported in the paper. Did I do something wrong? I set n_hardnegatives to 5 and 2, respectively, under the two settings.

lorebianchi98 commented 2 months ago

Hi! Thank you for your interest in our work!

When running evaluate_map.py, it is correct to leave these parameters (simplify_errors, disable_nms, remove_pacco, and evaluate_all_vocabulary) at their default values.

Given these settings, your results for CORA should closely match those reported in the paper (using n_hardnegatives=5 for Difficulty-based benchmarks and n_hardnegatives=2 for Attribute-based benchmarks). If your results differ slightly but are still in close range, this could be due to minor numerical instabilities and shouldn't be a cause for concern.
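For concreteness, here is a minimal sketch of the two runs, assuming evaluate_map.py is driven from the command line and that the n_hardnegatives parameter is exposed as a flag of the same name; the remaining required arguments (e.g., the paths to predictions and annotations) depend on the repository's CLI and are left as placeholders.

```bash
# Hypothetical sketch: simplify_errors, disable_nms, remove_pacco and
# evaluate_all_vocabulary are left unset so they keep their default values.
# <other required arguments> stands in for the script's remaining inputs.

# Difficulty-based benchmarks: 5 hard negatives
python evaluate_map.py <other required arguments> --n_hardnegatives 5

# Attribute-based benchmarks: 2 hard negatives
python evaluate_map.py <other required arguments> --n_hardnegatives 2
```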

I hope this helps!