eddyhkchiu / mahalanobis_3d_multi_object_tracking

[NeurIPS Workshop 2019] Official code of the paper "Probabilistic 3D Multi-Object Tracking for Autonomous Driving." First Place of the First NuScenes Tracking Challenge in the AI Driving Olympics Workshop of NeurIPS.

Evaluation problem: cannot find metric pred_frequencies #7

Closed sergiev closed 4 years ago

sergiev commented 4 years ago

Hello everyone out there! Hope you're OK.

So, I've run these two commands as shown in the README:

python main.py val 2 m 11 greedy true nuscenes results/000008;
python evaluate_nuscenes.py --output_dir results/000008 results/000008/val/results_val_probabilistic_tracking.json > results/000008/output.txt

The output of the first one looks fine:

track nuscenes
======
Loading NuScenes tables for version v1.0-trainval...
23 category,
8 attribute,
4 visibility,
64386 instance,
12 sensor,
10200 calibrated_sensor,
2631083 ego_pose,
68 log,
850 scene,
34149 sample,
2631083 sample_data,
1166187 sample_annotation,
4 map,
Done loading in 22.8 seconds.
======
Reverse indexing ...
Done reverse indexing in 6.3 seconds.
======
meta:  {'use_camera': False, 'use_lidar': True, 'use_radar': False, 'use_map': False, 'use_external': False}
Loaded results from /media/sergiev/semiex/megvii/megvii_val.json. Found detections for 6019 samples.
100%|██████████████████████████████████████████████████████| 6019/6019 [06:10<00:00, 16.26it/s]
Total Tracking took: 359.712 for 6019 frames or 16.7 FPS

But the evaluation stage fails right at the start:

python evaluate_nuscenes.py --verbose 1 --output_dir results/000008 results/000008/val/results_val_probabilistic_tracking.json > results/000008/output.txt
100%|██████████████████████████████████████████████████████████| 6019/6019 [00:05<00:00, 1027.84it/s]
Traceback (most recent call last):                                                                   
  File "evaluate_nuscenes.py", line 57, in <module>
    nusc_eval.main(render_curves=render_curves_)
  File "/home/sergiev/misc/tracking/nuscenes-devkit/python-sdk/nuscenes/eval/tracking/evaluate.py", line 206, in main
    metrics, metric_data_list = self.evaluate()
  File "/home/sergiev/misc/tracking/nuscenes-devkit/python-sdk/nuscenes/eval/tracking/evaluate.py", line 136, in evaluate
    accumulate_class(class_name)
  File "/home/sergiev/misc/tracking/nuscenes-devkit/python-sdk/nuscenes/eval/tracking/evaluate.py", line 131, in accumulate_class
    curr_md = curr_ev.accumulate()
  File "/home/sergiev/misc/tracking/nuscenes-devkit/python-sdk/nuscenes/eval/tracking/algo.py", line 140, in accumulate
    thresh_summary = mh.compute(acc, metrics=MOT_METRIC_MAP.keys(), name=thresh_name)
  File "/home/sergiev/anaconda3/envs/probabilistic_tracking/lib/python3.6/site-packages/motmetrics/metrics.py", line 185, in compute
    cache[mname] = self._compute(df_map, mname, cache, options, parent='summarize')
  File "/home/sergiev/anaconda3/envs/probabilistic_tracking/lib/python3.6/site-packages/motmetrics/metrics.py", line 311, in _compute
    v = cache[depname] = self._compute(df_map, depname, cache, options, parent=name)
  File "/home/sergiev/anaconda3/envs/probabilistic_tracking/lib/python3.6/site-packages/motmetrics/metrics.py", line 302, in _compute
    assert name in self.metrics, 'Cannot find metric {} required by {}.'.format(name, parent)
AssertionError: Cannot find metric pred_frequencies required by num_predictions.

Could you please explain, or share your thoughts on what the reason might be? Thanks in advance.

sergiev commented 4 years ago

Simple solution: pull all your repos every day. The motmetrics version was not pinned in the version of the nuscenes-devkit repo I downloaded a couple of months ago.
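
In case it helps anyone else, here is a quick sanity check (a one-off snippet of mine, not part of either repo) to confirm whether the installed motmetrics release actually ships the pred_frequencies metric that the devkit evaluation depends on:

import motmetrics as mm

# Build the default metrics host and list which metrics this motmetrics
# release registers out of the box; the devkit evaluation expects
# 'pred_frequencies' to be among them.
mh = mm.metrics.create()
print('motmetrics version:', getattr(mm, '__version__', 'unknown'))
print('pred_frequencies registered:', 'pred_frequencies' in mh.names)

If the second print shows False, the installed motmetrics is too old for the current nuscenes-devkit.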

eddyhkchiu commented 4 years ago

Thanks for the investigation and the solution! It is related to this pull request in nuscenes-devkit: https://github.com/nutonomy/nuscenes-devkit/pull/300
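
For anyone curious about the mechanics, here is a minimal sketch (against motmetrics directly, not the devkit code) of why the assertion fires: motmetrics resolves metric dependencies by name at compute time, so a registered metric whose dependency name was never defined in the installed release raises exactly the error above.

import motmetrics as mm

# Start from an empty host so no default metrics are registered,
# mimicking an outdated motmetrics install that lacks 'pred_frequencies'.
mh = mm.metrics.MetricsHost()

def num_predictions(df, pred_frequencies):
    # Declares a dependency on a metric named 'pred_frequencies',
    # which we deliberately never register.
    return pred_frequencies.sum()

mh.register(num_predictions, deps=['pred_frequencies'])

acc = mm.MOTAccumulator(auto_id=True)
mh.compute(acc, metrics=['num_predictions'])
# AssertionError: Cannot find metric pred_frequencies required by num_predictions.

Pulling a nuscenes-devkit checkout that pins a compatible motmetrics version (or upgrading motmetrics itself) makes the dependency resolvable.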