open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

Box Predictions - Confidence Thresholding during Evaluation #12016

Closed: laurenzheidrich closed this issue 4 weeks ago

laurenzheidrich commented 4 weeks ago

I am trying to compute evaluation metrics on my finetuned MM-Grounding-Dino Model.

I am doing this by running

python tools/test.py $config_file $weight_file --work-dir ts_result --out ts_result/inference.pkl

This works fine and gives me AP / AR results:

[screenshot: AP / AR results]

When inspecting the .pkl file, though, I can see many instance predictions with very low confidence scores:

[screenshot: predicted instances with decreasing confidence scores]
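For anyone who wants to look at the dumped predictions themselves, a minimal sketch of filtering the raw results by score is below. It assumes the pickle stores a list of per-image dicts whose `pred_instances` entry holds parallel `scores`, `bboxes` and `labels` arrays (the layout mmdet 3.x typically dumps; adjust the keys if your file differs). The 0.5 threshold is just an example value.

```python
import pickle
import numpy as np

def filter_predictions(results, score_thr=0.5):
    """Keep only predictions whose confidence meets the threshold.

    Assumes each entry is a dict whose 'pred_instances' holds parallel
    'scores', 'bboxes' and 'labels' arrays; adjust if your pickle differs.
    """
    filtered = []
    for res in results:
        inst = res['pred_instances']
        scores = np.asarray(inst['scores'])
        keep = scores >= score_thr  # boolean mask over all instances
        filtered.append({
            **res,
            'pred_instances': {
                'scores': scores[keep],
                'bboxes': np.asarray(inst['bboxes'])[keep],
                'labels': np.asarray(inst['labels'])[keep],
            },
        })
    return filtered

# With a real run you would load the dump instead:
# results = pickle.load(open('ts_result/inference.pkl', 'rb'))
# Toy stand-in data for illustration:
results = [{
    'img_id': 0,
    'pred_instances': {
        'scores': [0.91, 0.42, 0.07],
        'bboxes': [[0, 0, 10, 10], [5, 5, 20, 20], [1, 1, 2, 2]],
        'labels': [0, 1, 0],
    },
}]
kept = filter_predictions(results, score_thr=0.5)
print(len(kept[0]['pred_instances']['scores']))  # 1 box survives
```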

However, when I visualise the predicted boxes using test.py, I see that only bounding boxes with a confidence > 0.3 are drawn into the image:

[screenshot: visualised predictions]

So my question is: for the calculated AP / AR results, how and where can the confidence threshold be set? For example, I may want to keep only predictions with a confidence of at least 0.5, but I have no idea where I could change that. The same goes for the visualizations: where can I set which bounding boxes are drawn?
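On the visualization side, the 0.3 cutoff you observe matches the default `score_thr` of mmdet 3.x's `DetVisualizationHook`, which decides which boxes get drawn. A sketch of raising it in the config, assuming a recent mmdet 3.x config layout:

```python
# Sketch, assuming a recent mmdet 3.x config: the visualization hook's
# score_thr (default 0.3) controls which predicted boxes are drawn.
default_hooks = dict(
    visualization=dict(
        type='DetVisualizationHook',
        draw=True,        # actually draw/save predictions during testing
        score_thr=0.5))   # raise from the 0.3 default
```

The same override can likely be passed without editing the config, e.g. `--cfg-options default_hooks.visualization.score_thr=0.5` on the tools/test.py command line. Note that the drawn-box threshold does not change AP / AR: COCO-style mAP is computed by sweeping over the full range of confidence scores, so evaluation deliberately keeps low-confidence predictions rather than applying one fixed cutoff.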

I would greatly appreciate some help.

georgeblu1 commented 2 weeks ago

Hey, did you find an answer to this? In particular: how and where can the confidence threshold be set?