nutonomy / nuscenes-devkit

The devkit of the nuScenes dataset.
https://www.nuScenes.org

Different iteration over class names for TP metrics vs. AP computation #904

Closed feliyur closed 1 year ago

feliyur commented 1 year ago

In the class `DetectionMetrics` (`eval/detection/data_classes.py`), the `mean_dist_aps` property iterates over `self._label_aps.items()` to compute the AP, and therefore averages only over valid values: https://github.com/nutonomy/nuscenes-devkit/blob/9bc2f9e74d8a3dd27ef6ce83336508dd8926f867/python-sdk/nuscenes/eval/detection/data_classes.py#L223

`tp_errors`, on the other hand, iterates over all class names from the config, and can therefore read and average over default 0 values from `self._label_tp_errors`. This happens whenever no metrics were added for a configured class name, and it then produces incorrect values: https://github.com/nutonomy/nuscenes-devkit/blob/9bc2f9e74d8a3dd27ef6ce83336508dd8926f867/python-sdk/nuscenes/eval/detection/data_classes.py#L236-L237

Is there a reason for this difference? It can be worked around by correcting the configuration, but the default behavior can silently compute wrong metrics because of the `defaultdict` usage. (This is for a dataset different from nuScenes.) Why not iterate over `self._label_tp_errors.keys()` instead?

    for detection_name in self._label_tp_errors.keys():
        class_errors.append(self.get_label_tp(detection_name, metric_name))
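To make the failure mode concrete, here is a minimal, self-contained sketch (not the devkit's actual classes; the dictionary and class names are made up for illustration) of how averaging over configured class names pulls a silent default 0.0 out of a `defaultdict` when a class never received metrics, while averaging over the keys actually present does not:

```python
from collections import defaultdict

# Hypothetical stand-in for self._label_tp_errors: a defaultdict that
# returns 0.0 for any class name that never had metrics added.
label_tp_errors = defaultdict(float)
label_tp_errors['car'] = 0.4
label_tp_errors['pedestrian'] = 0.6

# 'bicycle' is configured but no metrics were ever added for it.
config_class_names = ['car', 'pedestrian', 'bicycle']

# Averaging over the keys that actually hold values (the proposed fix).
# Computed first, because merely reading a missing key from a defaultdict
# inserts it, which would otherwise change len(label_tp_errors).
over_keys = sum(label_tp_errors[k] for k in label_tp_errors) / len(label_tp_errors)

# Averaging over the config class names: the missing 'bicycle' silently
# contributes a default 0.0 and dilutes the mean.
over_config = sum(label_tp_errors[name] for name in config_class_names) / len(config_class_names)

print(over_keys)    # 0.5
print(over_config)  # 0.333... - diluted by the class that has no metrics
```

Note the side effect in the last computation: simply evaluating `label_tp_errors['bicycle']` inserts the key with value 0.0, which is why the wrong average appears without any error or warning.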
whyekit-motional commented 1 year ago

@feliyur your explanation seems to make sense - feel free to open a PR, and we can take a closer look at this.