Closed DaHaiHuha closed 4 years ago
The metric in the repo calculates only a subset of the Kaggle metric.
On Kaggle you have: [1] an average over classes, and [2] an average over IoU thresholds [0.5:0.95:0.05], similar to the COCO metric.
But the per-class, per-IoU AP computation should be the same.
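For reference, the two-level averaging described above could be sketched like this. This is only an illustration of the COCO-style scheme (names like `kaggle_map` and the input format are assumptions, not the competition's actual code):

```python
import numpy as np

# 10 IoU thresholds: 0.50, 0.55, ..., 0.95 (the [0.5:0.95:0.05] range)
IOU_THRESHOLDS = np.arange(0.5, 1.0, 0.05)

def kaggle_map(ap_per_class):
    """Hypothetical sketch: `ap_per_class` maps class name -> list of AP
    values, one per IoU threshold in IOU_THRESHOLDS.

    Averages first over IoU thresholds, then over classes, which is the
    COCO-style scheme the Kaggle metric follows.
    """
    per_class_map = []
    for cls, aps in ap_per_class.items():
        assert len(aps) == len(IOU_THRESHOLDS)
        per_class_map.append(np.mean(aps))   # average over IoU thresholds
    return float(np.mean(per_class_map))     # average over classes
```

The repo's metric, as noted, covers only the inner per-class, per-IoU part of this.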
Hi, I'd love to know whether the 3D mAP calculation file is consistent with the evaluation mechanism in Kaggle's competition. As far as I know, all categories are counted, but I don't know the distance range used for evaluation. Does anybody have any ideas? Thanks!