Closed habibian closed 3 years ago
Hi,
I have two hypotheses as to why that might be. First, you have to make sure to ignore detections that fall inside the ignore regions of each sequence; that will definitely have an impact. Second, you have to run the evaluation at the full image resolution (the "--keep_res" parameter). I ran the evaluation in a VM using Octave; maybe you can set up something similar? Hope that helps!
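For reference, the ignore-region filtering could look roughly like this — a minimal, self-contained sketch assuming ignore regions are given as axis-aligned boxes `(x1, y1, x2, y2)` and a detection is dropped when most of its area lies inside an ignore region. The helper names and the 0.5 coverage threshold are illustrative, not the official UA-DETRAC rule:

```python
def box_area(b):
    # b = (x1, y1, x2, y2)
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def intersection_area(a, b):
    # Area of the overlap between two axis-aligned boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def filter_ignored(detections, ignore_regions, coverage_thresh=0.5):
    """Drop detections whose area is mostly covered by any ignore region.

    Illustrative sketch: 'coverage_thresh' is an assumed cutoff, not the
    benchmark's official definition.
    """
    kept = []
    for det in detections:
        area = box_area(det)
        if area == 0.0:
            continue
        covered = max(
            (intersection_area(det, ig) for ig in ignore_regions),
            default=0.0,
        )
        if covered / area < coverage_thresh:
            kept.append(det)
    return kept
```

The same check would be applied to the ground-truth boxes, so that neither side of the matching contributes inside an ignore region.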
Thanks for the prompt response :)
The keep_res flag is set and the predictions are made over the whole image (as in your implementation), and I am evaluating over all of the predicted and ground-truth boxes. So I agree that the difference probably comes from the ignore regions.
Very helpful! thanks.
Hey,
I have a question about the difference between the APs computed by the COCO API and those computed by the official MATLAB evaluation tool:
Using the COCO API to compute AP@IoU=0.7, I get ~78 on the test sequences (test_b.json), which is lower than what has been reported using the MATLAB evaluation tool.
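For context, this is roughly what per-class AP at a fixed IoU threshold boils down to — a toy, non-interpolated sketch for a single image and class, with greedy highest-score matching. The function and variable names are mine; pycocotools additionally interpolates the precision envelope and averages over images, classes, and (by default) several IoU thresholds, so its numbers will differ from this simplification:

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, ground_truths, iou_thresh=0.7):
    """Non-interpolated AP for one image/class (illustrative sketch).

    detections: list of (score, box); ground_truths: list of boxes.
    """
    dets = sorted(detections, key=lambda d: -d[0])  # highest score first
    matched = set()
    tps = []
    for _score, box in dets:
        # Greedily match each detection to its best unmatched ground truth.
        best_iou, best_idx = 0.0, -1
        for i, gt in enumerate(ground_truths):
            if i in matched:
                continue
            o = iou(box, gt)
            if o > best_iou:
                best_iou, best_idx = o, i
        if best_iou >= iou_thresh:
            matched.add(best_idx)
            tps.append(1)
        else:
            tps.append(0)
    # Accumulate precision at each true positive, weighted by the recall step.
    ap, tp_cum, prev_recall = 0.0, 0, 0.0
    n_gt = len(ground_truths)
    for rank, tp in enumerate(tps, start=1):
        tp_cum += tp
        if tp:
            recall = tp_cum / n_gt
            precision = tp_cum / rank
            ap += (recall - prev_recall) * precision
            prev_recall = recall
    return ap
```

With pycocotools itself, restricting the evaluation to IoU=0.7 is a matter of setting `cocoEval.params.iouThrs = np.array([0.7])` before calling `evaluate()`.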
I am evaluating your released model: 'ua-detrac_model_best.pth' so no training involved.
I cannot run the MATLAB tool as it is not compatible with my OS.
Thanks :)