Closed: tispratik closed this issue 4 years ago
Hi @tispratik, were you able to solve your issue? I am facing the same one.
The framework uses pycocotools. For now I have taken the pycocotools code and changed the values to what I need. The values are based on the original image height and width. Search for `class Params` in cocoeval.py.
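For reference, here is a minimal sketch of overriding the area ranges on a `COCOeval` instance at runtime instead of editing cocoeval.py itself. The 64/192-pixel thresholds, the helper name, and the file paths are hypothetical, not from this thread:

```python
# Sketch: build custom pycocotools-style area ranges (areas are in
# pixels^2, measured on the ORIGINAL image) and apply them to a
# COCOeval instance. The 64/192-pixel thresholds are illustrative.

def custom_area_rngs(small_max=64, medium_max=192, upper=int(1e5) ** 2):
    """Return (areaRng, areaRngLbl) in the layout COCOeval expects."""
    rngs = [
        [0, upper],                         # all
        [0, small_max ** 2],                # small
        [small_max ** 2, medium_max ** 2],  # medium
        [medium_max ** 2, upper],           # large
    ]
    labels = ["all", "small", "medium", "large"]
    return rngs, labels

# Applying them (requires real GT/detection files, shown as comments only):
#
#   from pycocotools.coco import COCO
#   from pycocotools.cocoeval import COCOeval
#   coco_gt = COCO("instances_val.json")          # hypothetical path
#   coco_dt = coco_gt.loadRes("detections.json")  # hypothetical path
#   ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
#   ev.params.areaRng, ev.params.areaRngLbl = custom_area_rngs()
#   ev.evaluate(); ev.accumulate(); ev.summarize()
```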
Thanks @tispratik. Since my image size is 1820x940 and most of the annotated objects [Car, Bus, Van, Truck, Pedestrian] fall under large, I could simply remove small and medium from areaRng and check only the mAP for large. Is my thinking right?
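If you only want the large bucket, the same runtime-override idea applies. A sketch, where the `large_only_rngs` helper is hypothetical and 96² px² is COCO's default medium/large boundary:

```python
def large_only_rngs(large_min=96 ** 2, upper=int(1e5) ** 2):
    """Return (areaRng, areaRngLbl) keeping only 'all' and 'large'.

    Areas are in pixels^2 on the original image, as pycocotools expects.
    """
    return [[0, upper], [large_min, upper]], ["all", "large"]

# Applied to a COCOeval instance `ev` (created as usual):
#   ev.params.areaRng, ev.params.areaRngLbl = large_only_rngs()
```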
The thought is right, but if some sizes of objects are not training well, you won't know it without a metrics breakdown by the different sizes.
That's right, this requires modifying pycocotools. So you can modify your own pycocotools repo to fit your use cases.
@tispratik, thanks for sharing this. Currently I'm training an object detection model, and I'm wondering how to visualise the mAP value during training and evaluation. Right now I see only the training losses in TensorBoard, not the mAP. During evaluation I can see a single point for mAP all/small/medium/large. How can I visualise the mAP similar to the image you posted above? Could you please point me to any article to replicate the same? It would be really helpful.
Hi @IamExperimenting, it could mean that none of your bboxes meet the size criteria for mAP small/medium/large — check your bbox sizes.
We're training our object detection model on 1280x720 images which are being resized to 300x300. Most of our bounding boxes have an area greater than 96^2 (9216 px^2), which is the maximum size for medium boxes. So almost all of our boxes land in the large bucket.
Is there a way I can set custom sizes for these definitions so that we can have more useful mAP, precision, recall values on tensorboard?
Currently, we get no values for small and medium sized boxes.
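As a sanity check on why the small/medium buckets stay empty: with COCO's default thresholds (32² and 96² px², computed on the original image, not the resized 300x300 input), a hypothetical helper like `size_bucket` shows where a box lands. Note that pycocotools actually uses the annotation's `area` field, which for ground truth may be segmentation area rather than width × height — the sketch below assumes plain box area:

```python
def size_bucket(w, h):
    """Classify a box by COCO's default area thresholds (pixels^2).

    Assumes area = w * h of the box on the ORIGINAL image.
    """
    area = w * h
    if area < 32 ** 2:    # below 1024 px^2
        return "small"
    if area < 96 ** 2:    # below 9216 px^2
        return "medium"
    return "large"

# A typical box on a 1280x720 image, e.g. 200x150 px (area 30000),
# exceeds 96^2 = 9216, so it falls in the large bucket.
```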