lyft / nuscenes-devkit

Devkit for the public 2019 Lyft Level 5 AV Dataset (fork of https://github.com/nutonomy/nuscenes-devkit)

evaluate range #58

Closed DaHaiHuha closed 4 years ago

DaHaiHuha commented 4 years ago

Hi, I'd like to know whether the 3D mAP calculation file is consistent with the evaluation mechanism used in the Kaggle competition. As far as I know, all categories are counted, but I don't know the distance range used for evaluation. Does anybody have any ideas? Thanks!

ternaus commented 4 years ago

The metric in the repo calculates only a subset of the Kaggle metric.

On Kaggle you have:

1. an average over classes, and
2. an average over IoU thresholds [0.5 : 0.95 : 0.05], similar to the COCO metric.

But the part about mAP per class per IoU threshold should be the same.
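
For anyone trying to reproduce the leaderboard number from the repo's output, here is a minimal sketch of the aggregation described above: compute per-class AP once per IoU threshold, then average over thresholds and over classes. The function name and AP values are made up for illustration; this is not the actual Kaggle implementation:

```python
import numpy as np

# IoU thresholds the Kaggle metric sweeps over: 0.50, 0.55, ..., 0.95.
IOU_THRESHOLDS = np.linspace(0.5, 0.95, 10)

def kaggle_style_score(ap_per_class_per_iou):
    """Average per-class AP first over IoU thresholds, then over classes.

    ap_per_class_per_iou: dict mapping class name to a list of AP values,
    one per threshold in IOU_THRESHOLDS.
    """
    per_class_means = [np.mean(aps) for aps in ap_per_class_per_iou.values()]
    return float(np.mean(per_class_means))

# Made-up AP values purely for illustration (AP typically drops as the
# IoU threshold gets stricter).
example = {
    "car":        [0.72, 0.68, 0.61, 0.52, 0.40, 0.28, 0.17, 0.08, 0.03, 0.01],
    "pedestrian": [0.35, 0.30, 0.24, 0.18, 0.12, 0.07, 0.03, 0.01, 0.00, 0.00],
}
print(kaggle_style_score(example))  # single scalar, like the leaderboard score
```

Since all classes and thresholds are weighted equally, averaging over thresholds first and classes second gives the same result as the reverse order.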