open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

CrowdHumanMetric buggy VOC matching #11813

Open mdaehl opened 5 days ago

mdaehl commented 5 days ago

Describe the bug When using the "CrowdHumanMetric" with compare_matching_method="VOC", the code crashes (see traceback below). The matching assumes that the dtboxes have a "score" attribute and the gtboxes an "ign" attribute (lines 740 & 741 in crowdhuman_metric.py), and furthermore that an individual dt has an "iou" method (line 749). However, both the gtboxes and dtboxes are plain numpy arrays (coming from the "load_det_boxes" and "load_gt_boxes" functions), which have neither these attributes nor an "iou" method.
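The failure can be reproduced in isolation: numpy's ndarray.sort has no key parameter, which is exactly the TypeError in the traceback. The box values and column layout below are placeholders for illustration, not the actual data produced by "load_det_boxes":

```python
import numpy as np

# dtboxes as returned by load_det_boxes: a plain (N, 5) numpy array.
# Rows here are illustrative, e.g. [x, y, w, h, score].
dtboxes = np.array([[0.0, 0.0, 10.0, 10.0, 0.9],
                    [5.0, 5.0, 10.0, 10.0, 0.8]])

try:
    # What compare_voc does in line 742 -- ndarray.sort does not
    # accept a key function (or reverse), so this raises TypeError.
    dtboxes.sort(key=lambda x: x.score, reverse=True)
except TypeError as e:
    print(e)  # sort() got an unexpected keyword argument 'key'
```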

Reproduction This error is independent of the specific model, so any (object detection) model encounters this issue and allows reproduction.

Error traceback

Traceback (most recent call last):
  File "/lhome/mdaehli/mm/mmdetection/tools/train.py", line 121, in <module>
    main()
  File "/lhome/mdaehli/mm/mmdetection/tools/train.py", line 117, in main
    runner.train()
  File "/lhome/mdaehli/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1777, in train
    model = self.train_loop.run()  # type: ignore
  File "/lhome/mdaehli/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 102, in run
    self.runner.val_loop.run()
  File "/lhome/mdaehli/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 374, in run
    metrics = self.evaluator.evaluate(len(self.dataloader.dataset))
  File "/lhome/mdaehli/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/evaluator/evaluator.py", line 79, in evaluate
    _results = metric.evaluate(size)
  File "/lhome/mdaehli/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/evaluator/metric.py", line 133, in evaluate
    _metrics = self.compute_metrics(results)  # type: ignore
  File "/lhome/mdaehli/mm/mmdetection/mmdet/evaluation/metrics/crowdhuman_metric.py", line 200, in compute_metrics
    score_list = self.compare(eval_samples)
  File "/lhome/mdaehli/mm/mmdetection/mmdet/evaluation/metrics/crowdhuman_metric.py", line 262, in compare
    result = samples[id].compare_voc(self.iou_thres)
  File "/lhome/mdaehli/mm/mmdetection/mmdet/evaluation/metrics/crowdhuman_metric.py", line 742, in compare_voc
    dtboxes.sort(key=lambda x: x.score, reverse=True)
TypeError: sort() got an unexpected keyword argument 'key'

Bug fix To fix this, either the gtboxes and dtboxes would need to be converted into a new class that provides the required attributes and method, or the matching calculation itself would have to be rewritten, which probably requires more effort.
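The first option could look roughly like the sketch below: wrap each array row in a small box class that exposes "score"/"ign" and an "iou" method, so compare_voc's sort and matching calls work unchanged. The class name, the xywh column layout, and the wrap_boxes helper are all assumptions for illustration, not the actual mmdetection internals:

```python
import numpy as np

class Box:
    """Hypothetical wrapper giving a raw box row the interface that
    compare_voc expects: .score / .ign attributes and an .iou method."""

    def __init__(self, x, y, w, h, tag=0.0, is_gt=False):
        self.x, self.y, self.w, self.h = x, y, w, h
        if is_gt:
            self.ign = tag    # ground truths carry an ignore flag
            self.score = 1.0
        else:
            self.score = tag  # detections carry a confidence score
            self.ign = 0

    def iou(self, other):
        # standard intersection-over-union, assuming xywh boxes
        x1 = max(self.x, other.x)
        y1 = max(self.y, other.y)
        x2 = min(self.x + self.w, other.x + other.w)
        y2 = min(self.y + self.h, other.y + other.h)
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = self.w * self.h + other.w * other.h - inter
        return inter / union if union > 0 else 0.0

def wrap_boxes(arr, is_gt=False):
    """Convert an (N, 5) array of [x, y, w, h, score_or_ign] rows."""
    return [Box(*row[:4], tag=row[4], is_gt=is_gt) for row in arr]

dtboxes = wrap_boxes(np.array([[0.0, 0.0, 10.0, 10.0, 0.9],
                               [5.0, 5.0, 10.0, 10.0, 0.8]]))
# The failing call from line 742 now works on a list of Box objects:
dtboxes.sort(key=lambda x: x.score, reverse=True)
print(dtboxes[0].score)               # 0.9
print(dtboxes[0].iou(dtboxes[1]))     # 25 / 175 ≈ 0.1429
```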