nutonomy / nuscenes-devkit

The devkit of the nuScenes dataset.
https://www.nuScenes.org

Evaluation in KITTI format #551

Closed deepmeng closed 3 years ago

deepmeng commented 3 years ago

I converted the nuScenes annotations to the KITTI label format with `export_kitti.py`. To test the KITTI eval kit and the converted labels, I evaluated the ground truth against the ground truth itself with the KITTI eval kit, and it returned the result below. Since ground truth evaluated against itself should score close to 100% AP, these numbers look wrong. Do you have any ideas?

car_detection AP: 0.027176 0.025802 0.025802
pedestrian_detection AP: 0.022784 0.022150 0.022150
bicycle_detection AP: 0.000000 0.000000 0.000000
car_detection_ground AP: 0.024155 0.022471 0.022471
pedestrian_detection_ground AP: 0.000000 0.000000 0.000000
bicycle_detection_ground AP: 0.000000 0.000000 0.000000
Eval 3D bounding boxes
car_detection_3d AP: 0.006540 0.005825 0.005825
pedestrian_detection_3d AP: 0.000000 0.000000 0.000000
bicycle_detection_3d AP: 0.000000 0.000000 0.000000
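
For context, a quick way to investigate such near-zero AP when scoring ground truth against itself is to inspect the exported KITTI label files, since the KITTI eval kit only scores boxes that pass its 2D-height, truncation, and occlusion thresholds. The sketch below is not part of the devkit and makes assumptions: the label directory path is a hypothetical example of where `export_kitti.py` output might live, and it simply parses standard 15-field KITTI label lines.

```python
# Hedged sanity-check sketch (not devkit code): parse exported KITTI label files
# and report 2D box heights, truncation and occlusion values. The KITTI eval kit
# ignores ground-truth boxes below its minimum 2D height (25/40 px depending on
# difficulty) or above its truncation/occlusion limits, which can drive AP toward
# zero even when evaluating ground truth against itself.
import glob
import os

label_dir = os.path.expanduser("~/nusc_kitti/mini_val/label_2")  # hypothetical output path

heights, truncations, occlusions = [], [], []
for path in glob.glob(os.path.join(label_dir, "*.txt")):
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields[0] == "DontCare":
                continue
            # KITTI label format: type, truncated, occluded, alpha,
            # bbox (left, top, right, bottom), dimensions, location, rotation_y
            truncations.append(float(fields[1]))
            occlusions.append(int(float(fields[2])))
            heights.append(float(fields[7]) - float(fields[5]))

if heights:
    print(f"{len(heights)} boxes | "
          f"2D height min/max: {min(heights):.1f}/{max(heights):.1f} px | "
          f"max truncation: {max(truncations):.2f} | "
          f"max occlusion: {max(occlusions)}")
else:
    print("No labels found - check the conversion output directory.")
```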
deepmeng commented 3 years ago

Solved.

deeptibhegde commented 3 years ago

Hi, how did you solve this?

ZhangYu1ing commented 3 years ago

Hi, may I ask how you solved this? Where should I check? Thanks~

holger-motional commented 3 years ago

This does not seem to be an issue with nuScenes, but rather with the user's code.