I have a dataset with huge discrepancies in the number of elements across categories, and sometimes the evaluation set contains no representatives of some classes.
What I found is that the evaluate function in coco_metric.py loops only over the classes that are actually present in the evaluation set. As a result, the mAP values are not assigned to the proper classes, so some very similar classes end up with very different mAP scores.
Am I doing something wrong or is it a bug?
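To illustrate what I mean, here is a minimal sketch (hypothetical names, not the actual coco_metric.py code) of how per-class results shift when they are stored by loop position over the present classes instead of by the class actually evaluated:

```python
# Hypothetical sketch of the suspected misalignment; class names and AP
# values are made up for illustration.

all_classes = ["cat", "dog", "horse", "zebra"]   # full label set
present = ["cat", "horse", "zebra"]              # "dog" absent from the eval set
ap_values = [0.80, 0.65, 0.40]                   # per-class APs in loop order

# Buggy assignment: results are written by enumeration index into the
# full class list, so every class after the missing one receives its
# neighbour's score.
buggy = {all_classes[i]: ap for i, ap in enumerate(ap_values)}
# "dog" gets the AP computed for "horse", and "zebra" gets none at all.

# Correct assignment: key the results by the class that was evaluated.
correct = dict(zip(present, ap_values))

print(buggy)    # {'cat': 0.8, 'dog': 0.65, 'horse': 0.4}
print(correct)  # {'cat': 0.8, 'horse': 0.65, 'zebra': 0.4}
```

With the buggy indexing, every class listed after a missing one inherits the score of a different class, which would explain why similar classes show very different mAP values.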