open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

analyze_results.py doesn't accept format of test.py's results.pkl #8231

Open Penaplion opened 2 years ago

Penaplion commented 2 years ago

Hello, I'm trying to figure out what I'm doing wrong. I would like to analyze some Mask R-CNN model predictions created by `test.py` with the `analyze_results.py` script, but I'm getting an error.

Call of `test.py`:

```
!python tools/test.py configs/container/r50_x4split_fold1.py work_dirs/r50_x4split_fold1/latest.pth --out work_dirs/r50_x4split_fold1/results.pkl --eval bbox segm
```

Output:

```
/content/mmdetection/mmdet/utils/setup_env.py:39: UserWarning: Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
  f'Setting OMP_NUM_THREADS environment variable for each process '
/content/mmdetection/mmdet/utils/setup_env.py:49: UserWarning: Setting MKL_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
  f'Setting MKL_NUM_THREADS environment variable for each process '
loading annotations into memory...
Done (t=1.58s)
creating index...
index created!
load checkpoint from local path: work_dirs/r50_x4split_fold1/latest.pth
[>>] 100/100, 1.8 task/s, elapsed: 56s, ETA: 0s
writing results to work_dirs/r50_x4split_fold1/results.pkl
Evaluating bbox...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.30s).
Accumulating evaluation results...
DONE (t=0.03s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100  ] = 0.198
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.348
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.198
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.053
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.197
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.287
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100  ] = 0.316
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300  ] = 0.316
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.316
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.065
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.265
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.437
Evaluating segm...
/content/mmdetection/mmdet/datasets/coco.py:474: UserWarning: The key "bbox" is deleted for more accurate mask AP of small/medium/large instances since v2.12.0. This does not change the overall mAP calculation.
  UserWarning)
Loading and preparing results...
DONE (t=0.03s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=0.41s).
Accumulating evaluation results...
/usr/local/lib/python3.7/dist-packages/pycocotools/cocoeval.py:378: DeprecationWarning: np.float is a deprecated alias for the builtin float. To silence this warning, use float by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.float64 here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  tp_sum = np.cumsum(tps, axis=1).astype(dtype=np.float)
DONE (t=0.03s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100  ] = 0.191
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.316
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.190
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.032
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.201
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.273
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100  ] = 0.312
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300  ] = 0.312
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.312
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.061
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.278
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.420
OrderedDict([('bbox_mAP', 0.198), ('bbox_mAP_50', 0.348), ('bbox_mAP_75', 0.198), ('bbox_mAP_s', 0.053), ('bbox_mAP_m', 0.197), ('bbox_mAP_l', 0.287), ('bbox_mAP_copypaste', '0.198 0.348 0.198 0.053 0.197 0.287'), ('segm_mAP', 0.191), ('segm_mAP_50', 0.316), ('segm_mAP_75', 0.19), ('segm_mAP_s', 0.032), ('segm_mAP_m', 0.201), ('segm_mAP_l', 0.273), ('segm_mAP_copypaste', '0.191 0.316 0.190 0.032 0.201 0.273')])
```
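For what it's worth, the pickle that `test.py` writes can be inspected directly before feeding it to other tools. A minimal sketch using only the standard library; the path is the one from the commands above, and the per-image tuple layout described in the comments is the usual mmdet 2.x convention for models with masks, stated here as an assumption rather than read from this run:

```python
import pickle

# Path as used in the test.py command above.
with open('work_dirs/r50_x4split_fold1/results.pkl', 'rb') as f:
    results = pickle.load(f)

print(type(results), len(results))  # a list with one entry per test image
first = results[0]
print(type(first))                  # for Mask R-CNN: a (bbox_results, segm_results) tuple

bbox_results, segm_results = first
# bbox_results: one (N, 5) array per class -> x1, y1, x2, y2, score
print(len(bbox_results), bbox_results[0].shape)
```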

Call of `analyze_results.py`:

```
!python tools/analysis_tools/analyze_results.py /content/mmdetection/configs/container/r50_x4split_fold1.py work_dirs/r50_x4split_fold1/results.pkl /content/mmdetection/work_dirs/r50_x4split_fold1
```

Output:

```
loading annotations into memory...
Done (t=0.01s)
creating index...
index created!
Traceback (most recent call last):
  File "tools/analysis_tools/analyze_results.py", line 365, in <module>
    main()
  File "tools/analysis_tools/analyze_results.py", line 361, in main
    dataset, outputs, topk=args.topk, show_dir=args.show_dir)
  File "tools/analysis_tools/analyze_results.py", line 171, in evaluate_and_show
    raise 'The format of result is not supported yet. ' \
TypeError: exceptions must derive from BaseException
```
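Note that the final TypeError hides the intended message: at line 171 the script raises a plain string, and Python 3 only allows raising BaseException subclasses, so the rejection of the result format itself crashes. A minimal sketch of the pattern and its idiomatic fix (not the repository's actual code, just the two raise forms):

```python
# Minimal reproduction of the pattern: raising a plain string is itself
# a TypeError in Python 3, which hides the intended error message.
try:
    raise 'The format of result is not supported yet. '
except TypeError as err:
    print(err)  # -> exceptions must derive from BaseException

# Idiomatic form: wrap the message in an exception class, so the intended
# "format not supported" message is what the user actually sees.
try:
    raise TypeError('The format of result is not supported yet.')
except TypeError as err:
    print(err)  # -> The format of result is not supported yet.
```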

Further information: I'm running mmdetection in Google Colab on a custom dataset in COCO format. The config used for training is as follows: (config screenshots attached)

Scripts like `analyze_logs.py` and `eval_metric.py` work fine. `confusion_matrix.py` also works, but it gives me a strange output (maybe because I only have one class to detect?). Shouldn't there be axis labels? (confusion matrix screenshot attached)

I would be very grateful for any tips. I need to inspect/visualize the model predictions.
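On the axis-labeling question above: the matrix can always be re-plotted with explicit labels from the saved values. A minimal matplotlib sketch, using a made-up 2x2 matrix and placeholder class names (neither comes from this issue), just to show the labeling calls:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical 2x2 matrix for a one-class detector:
# rows = ground truth (class, background), columns = prediction.
labels = ['object', 'background']        # placeholder names
cm = np.array([[80.0, 20.0],
               [15.0,  0.0]])            # made-up counts, illustration only

fig, ax = plt.subplots()
im = ax.imshow(cm, cmap='Blues')
ax.set_xticks(range(len(labels)))
ax.set_xticklabels(labels)
ax.set_yticks(range(len(labels)))
ax.set_yticklabels(labels)
ax.set_xlabel('Prediction')
ax.set_ylabel('Ground truth')
# Print the count in each cell so the plot is readable without the colorbar.
for i in range(len(labels)):
    for j in range(len(labels)):
        ax.text(j, i, f'{cm[i, j]:.0f}', ha='center', va='center')
fig.colorbar(im)
fig.savefig('confusion_matrix_labeled.png')
```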

earthlovebpt commented 2 years ago

I have 1 class and found this error too.

zhichengf commented 2 years ago

I also ran into this error.

hhaAndroid commented 2 years ago

@Penaplion Hi, we have updated the dev-3.x branch. Can you please give it a try?

enemni commented 2 years ago

When I try it, I get this error:

File "dev-3.x/mmdetection/tools/analysis_tools/analyze_results.py", line 7, in <module> from mmengine.config import Config, DictAction ModuleNotFoundError: No module named 'mmengine'

Using the script on the main branch, I get this error:

```
  File "mmdetection/tools/analysis_tools/analyze_results.py", line 368, in <module>
    main()
  File "mmdetection/tools/analysis_tools/analyze_results.py", line 363, in main
    result_visualizer.evaluate_and_show(
  File "mmdetection/tools/analysis_tools/analyze_results.py", line 172, in evaluate_and_show
    good_samples, bad_samples = self.detection_evaluate(
  File "mmdetection/tools/analysis_tools/analyze_results.py", line 213, in detection_evaluate
    data_info = dataset.prepare_train_img(i)
  File "/mmdetection/mmdet/datasets/custom.py", line 243, in prepare_train_img
    return self.pipeline(results)
  File "/mmdetection/mmdet/datasets/pipelines/compose.py", line 41, in __call__
    data = t(data)
  File "/mmdetection/mmdet/datasets/pipelines/loading.py", line 400, in __call__
    results = self._load_semantic_seg(results)
  File "/mmdetection/mmdet/datasets/pipelines/loading.py", line 375, in _load_semantic_seg
    results['gt_semantic_seg'] = mmcv.imfrombytes(
IndexError: too many indices for array: array is 2-dimensional, but 3 were indexed
```
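Judging from the failing frame, this run enters `_load_semantic_seg`, which is only reached when the `LoadAnnotations` step has `with_seg=True`; the decoded map then comes back 2-dimensional and the 3-D indexing fails. If the dataset has no semantic segmentation maps, one possible workaround is to disable that flag in whichever pipeline `analyze_results.py` ends up using. A hedged sketch of the relevant config fragment (field names follow the standard mmdet 2.x schema; the rest of the pipeline is assumed):

```python
train_pipeline = [
    dict(type='LoadImageFromFile'),
    # with_seg=True routes execution into _load_semantic_seg (the failing frame
    # above); for a dataset without semantic segmentation maps keep it False.
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True, with_seg=False),
    # ... remaining transforms unchanged
]
```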

r3yohei commented 1 year ago

I still get a similar error in version 3.x. When I perform a 2-class object detection task, the resulting confusion matrix looks like the one below, even though the inference results on the images look perfect. (screenshot attached)

When I try 30-class object detection, this problem doesn't occur. Is there any workaround to get a correct confusion matrix for few-class object detection?

Holsonn commented 10 months ago

> @Penaplion Hi, we have updated the dev-3.x branch. Can you please give it a try?

I tried `confusion_matrix.py` on the dev-3.x branch, but I still get the same problem.