nutonomy / nuscenes-devkit

The devkit of the nuScenes dataset.
https://www.nuScenes.org

Tracking benchmark test set error: Length of names must match number of levels in MultiIndex. #338

Closed: gdlg closed this issue 4 years ago

gdlg commented 4 years ago

Hello,

I have tried to upload my results to the tracking benchmark on EvalAI; however, I get an error in stdout (and no stderr): “Length of names must match number of levels in MultiIndex.” On the other hand, it works on the validation set on my machine.

Starting Evaluation...
Submission related metadata:
Unpacking dataset...
Unpacking user submission...
Evaluating for test phase
======
Loading NuScenes tables for version v1.0-private-test...
23 category,
8 attribute,
4 visibility,
11997 instance,
12 sensor,
1800 calibrated_sensor,
462901 ego_pose,
15 log,
150 scene,
6008 sample,
462901 sample_data,
201130 sample_annotation,
4 map,
Done loading in 7.6 seconds.
======
Reverse indexing ...
Done reverse indexing in 1.9 seconds.
======
Initializing nuScenes tracking evaluation
Loaded results from /tmp/tmpaeaobejj/submission/data.json. Found detections for 6008 samples.
Loading annotations for test split from nuScenes version: v1.0-private-test
Loaded ground truth annotations for 6008 samples.
Filtering tracks
=> Original number of boxes: 342283
=> After distance based filtering: 332718
=> After LIDAR points based filtering: 332718
=> After bike rack filtering: 332701
Filtering ground truth tracks
=> Original number of boxes: 159753
=> After distance based filtering: 120995
=> After LIDAR points based filtering: 108973
=> After bike rack filtering: 108973
Accumulating metric data...
Computing metrics for class bicycle...

Length of names must match number of levels in MultiIndex.

I am not really sure whether it’s a problem with my data; however, it sounds related to #299.
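For context, this message is raised by pandas when a list of index names whose length does not match the number of index levels is assigned to a MultiIndex; a version mismatch between motmetrics and pandas on the evaluation server is one plausible trigger (an assumption here, not confirmed by the traceback). A minimal illustration of the pandas error itself, unrelated to the devkit's own code:

```python
import pandas as pd

# A two-level MultiIndex, e.g. (class, frame) as motmetrics-style keys.
idx = pd.MultiIndex.from_tuples([("car", 1), ("bicycle", 2)])

# Assigning a single name to a two-level index raises the error seen above.
try:
    idx.set_names(["frame"])
except ValueError as err:
    print(err)  # Length of names must match number of levels in MultiIndex.
```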

Thank you very much.

holger-motional commented 4 years ago

Hi. Can you send me your version number as output by pip show motmetrics?
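Besides `pip show motmetrics`, the version can also be checked programmatically with the standard library; a minimal sketch (the package name is the only assumption):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str) -> str:
    """Return the installed version of `pkg`, or a placeholder if absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return "not installed"

print("motmetrics:", installed_version("motmetrics"))
```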

gdlg commented 4 years ago

The problem is encountered on EvalAI (https://evalai.cloudcv.org/web/challenges/challenge-page/476/overview), not on my own machine.

holger-motional commented 4 years ago

Thanks. The admins of EvalAI are looking into this. It seems their system uses the wrong version of the motmetrics package.

holger-motional commented 4 years ago

Hi. After patching up the other challenges, the EvalAI admins will now look into the tracking challenge. We'll update you as soon as we know more.

gdlg commented 4 years ago

Hi, thank you very much for looking into this.

holger-motional commented 4 years ago

Hi @gdlg. The server issues have been resolved and all submissions re-evaluated. Please let us know if you have any questions.

gdlg commented 4 years ago

Hi @holger-nutonomy, thank you very much :-). However, I noticed that the re-evaluation drained my number of allowed submissions (“You have exhausted maximum submission limit!”) because I uploaded quite a few failed tests while trying to fix the issue. Now that the failed submissions are passing, they have been deducted from my quota. Is it possible to fix that? Or is the limit going to be reset after a month?

holger-motional commented 4 years ago

Unfortunately we don't have a way to reset this. As your previous submissions were below the baselines, you are allowed to open a new account. Ideally, use the same user name with a "2" at the end to make it clear.