open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

Need help with coco metrics. #11784

Closed · Warcry25 closed this issue 3 months ago

Warcry25 commented 3 months ago

I want this evaluation output for my custom-trained model, but I'm unsure how to make it appear. I'm using CocoMetric. Can someone help me, please?

```
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.055
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.112
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.046
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.071
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.226
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.077
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.271
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.319
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.250
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.264
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.397
```

Warcry25 commented 3 months ago

Is it possible to obtain these results after training has completed?

MiXaiLL76 commented 3 months ago

There may be too many objects in your dataset. You can save the prediction results and recalculate the metrics offline in exactly the same way:

- https://nbviewer.org/github/MiXaiLL76/faster_coco_eval/blob/main/examples/eval_example.ipynb
- https://nbviewer.org/github/MiXaiLL76/faster_coco_eval/blob/main/examples/curve_example.ipynb
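For reference, here is a minimal sketch of that offline recalculation using plain pycocotools (faster_coco_eval is advertised as a drop-in replacement for it). It assumes the detections were already dumped to a COCO-format results JSON, for example by setting `format_only=True` and an `outfile_prefix` in CocoMetric; the result-file path below is a placeholder, not something from this thread:

```python
# Offline COCO evaluation sketch with pycocotools.
# Assumes a COCO-format detections file already exists on disk
# (e.g. "<outfile_prefix>.bbox.json" written by CocoMetric).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

ann_file = 'data/coco/valid/_annotations.coco.json'  # ground-truth annotations
res_file = 'work_dirs/results.bbox.json'             # dumped detections (placeholder path)

coco_gt = COCO(ann_file)
coco_dt = coco_gt.loadRes(res_file)

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()    # match detections to ground truth per image and category
coco_eval.accumulate()  # aggregate precision/recall over IoU thresholds, areas, maxDets
coco_eval.summarize()   # prints the 12-line AP/AR table shown above
```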

Warcry25 commented 3 months ago

I finally figured it out, thank you anyway. I just added this to the config.py:

```python
vis_backends = [
    dict(type='LocalVisBackend'),
    dict(type='WandbVisBackend'),
    dict(type='TensorboardVisBackend'),
]

visualizer = dict(
    name='visualizer',
    type='DetLocalVisualizer',
    vis_backends=vis_backends)

val_evaluator = dict(
    type='CocoMetric',
    ann_file='data/coco/valid/_annotations.coco.json',
    metric='bbox',
    classwise=True,
    format_only=False,
    backend_args=None)
```
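With this evaluator configured, the AP/AR table is printed at the end of each validation run. To reproduce it after training has finished, the standard MMDetection test script can be pointed at the same config and a saved checkpoint; the config and checkpoint paths below are placeholders:

```bash
# Re-run evaluation on the validation set with a trained checkpoint;
# CocoMetric prints the COCO AP/AR summary when evaluation completes.
python tools/test.py \
    configs/my_config.py \
    work_dirs/my_config/epoch_12.pth
```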