Hello, I used the COCO dataset for model fine-tuning. Following the MMEngine user manual, I set `metric=['bbox', 'segm']` in `val_evaluator`. The bbox metrics were computed during evaluation, but then this error was raised:
```
  File "/home/work/miniforge3/envs/yoloworld/lib/python3.9/site-packages/mmengine/runner/runner.py", line 1777, in train
    model = self.train_loop.run()  # type: ignore
  File "/home/work/miniforge3/envs/yoloworld/lib/python3.9/site-packages/mmengine/runner/loops.py", line 102, in run
    self.runner.val_loop.run()
  File "/home/work/miniforge3/envs/yoloworld/lib/python3.9/site-packages/mmengine/runner/loops.py", line 374, in run
    metrics = self.evaluator.evaluate(len(self.dataloader.dataset))
  File "/home/work/miniforge3/envs/yoloworld/lib/python3.9/site-packages/mmengine/evaluator/evaluator.py", line 79, in evaluate
    _results = metric.evaluate(size)
  File "/home/work/miniforge3/envs/yoloworld/lib/python3.9/site-packages/mmengine/evaluator/metric.py", line 133, in evaluate
    _metrics = self.compute_metrics(results)  # type: ignore
  File "/home/work/miniforge3/envs/yoloworld/lib/python3.9/site-packages/mmdet/evaluation/metrics/coco_metric.py", line 446, in compute_metrics
    raise KeyError(f'{metric} is not in results')
KeyError: 'segm is not in results'
```
How can I modify the code so that the evaluation reports the recognition accuracy of each category, as shown in the picture? Thank you!
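For context, my current `val_evaluator` looks roughly like the sketch below (the annotation path is a placeholder, not my real path). `'bbox'` evaluates fine; adding `'segm'` triggers the KeyError above:

```python
# Sketch of the val_evaluator config in question (path is a placeholder).
# 'bbox' metrics are produced during evaluation; 'segm' raises
# KeyError: 'segm is not in results' in CocoMetric.compute_metrics.
val_evaluator = dict(
    type='mmdet.CocoMetric',
    ann_file='data/coco/annotations/instances_val2017.json',  # placeholder
    metric=['bbox', 'segm'],
)
```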