open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

The testing results of the whole dataset is empty. OrderedDict() when running test.py for convnext #8766

Open · Neil-untitled opened this issue 1 year ago

Neil-untitled commented 1 year ago

Dear Open-mmlab,

I had a problem while testing ConvNeXt on my GTX 1660 Ti. I was running this command:

python tools/test.py configs/convnext/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco.py checkpoints/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco_20220510_201004-3d24f5a4.pth --eval bbox

The output looks like this:

load checkpoint from local path: checkpoints/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco_20220510_201004-3d24f5a4.pth
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 5000/5000, 1.0 task/s, elapsed: 4996s, ETA: 0s
Evaluating bbox...
Loading and preparing results...
The testing results of the whole dataset is empty.
OrderedDict()

I ran several other models and they were OK, but ConvNeXt does not work. Can somebody help me with this problem? Any information would help.

All the best Neil

ZwwWayne commented 1 year ago

Hi @Neil-untitled, thanks for your report. @wanghonglie will take a look and check the issue.

wanghonglie commented 1 year ago

Hi @Neil-untitled, I used the same command and got the correct result. Please make sure you have installed the latest versions of mmdetection, mmclassification, and mmcv from the master branch.

Neil-untitled commented 1 year ago

Thank you very much for the reply!

It sounds a bit weird if it is due to a version lag, because I just installed mmdetection last week following the instructions in the docs. I double-checked the versions on my laptop. Here is the version info: mmcv 1.6.1, mmdet 2.25.1, mmcls 0.23.2.

Best wishes Neil

wanghonglie commented 1 year ago

The versions are correct. Have you modified the code or the configuration file at all? Alternatively, you can run demo/image_demo.py to check whether the result is normal.

Neil-untitled commented 1 year ago

I ran image_demo.py and it runs smoothly without errors. The result picture is shown below.

[image]

The command log looks like this:

python demo/image_demo.py demo/demo.jpg configs/convnext/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco.py checkpoints/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco_20220510_201004-3d24f5a4.pth
load checkpoint from local path: checkpoints/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco_20220510_201004-3d24f5a4.pth
/home/neoxparker/mmdetection/mmdet/datasets/utils.py:70: UserWarning: "ImageToTensor" pipeline is replaced by "DefaultFormatBundle" for batch inference. It is recommended to manually replace it in the test data pipeline in your config file.
'data pipeline in your config file.', UserWarning)
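For reference, the change that this UserWarning suggests looks roughly like the following in an mmdet 2.x style test pipeline. This is only a sketch of a typical config, not the actual convnext config file; img_norm_cfg is assumed to be defined elsewhere in the config.

```python
# Sketch of a typical mmdet 2.x test pipeline with ImageToTensor replaced by
# DefaultFormatBundle, as the warning recommends for batch inference.
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            # dict(type='ImageToTensor', keys=['img']),  # old single-image form
            dict(type='DefaultFormatBundle'),            # recommended replacement
            dict(type='Collect', keys=['img']),
        ])
]
```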

It is also interesting that the same python tools/test.py command for ConvNeXt runs successfully on a Xavier NX but fails in WSL (Ubuntu 18.04) on my laptop. I guess I may need to reinstall everything to fix this problem (facepalm).

Neil-untitled commented 1 year ago

I deleted the conda environment and reinstalled everything again, but it didn't seem to help. And the output is somehow split into three parts:

python tools/test.py configs/convnext/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco.py checkpoints/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco_20220510201004-3d24f5a4.pth --eval bbox
/home/neoxparker/mmdetection/mmdet/utils/setup_env.py:38: UserWarning: Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. warnings.warn(
/home/neoxparker/mmdetection/mmdet/utils/setup_env.py:48: UserWarning: Setting MKL_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. warnings.warn(
loading annotations into memory...
Done (t=0.41s)
creating index...
index created!
load checkpoint from local path: checkpoints/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco_20220510_201004-3d24f5a4.pth
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 5000/5000, 0.5 task/s, elapsed: 10632s, ETA: 0s
Evaluating bbox...
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
(mmlab) neoxparker@Haoran-Nexus:~/mmdetection$ DONE (t=6.42s).
Accumulating evaluation results...
DONE (t=1.48s).

Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100  ] = 0.005
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.006
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.005
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.008
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100  ] = 0.003
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300  ] = 0.003
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.003
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.000
(mmlab) neoxparker@Haoran-Nexus:~/mmdetection$ Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.007

OrderedDict([('bbox_mAP', 0.005), ('bbox_mAP_50', 0.006), ('bbox_mAP_75', 0.005), ('bbox_mAP_s', 0.0), ('bbox_mAP_m', 0.001), ('bbox_mAP_l', 0.008), ('bbox_mAP_copypaste', '0.005 0.006 0.005 0.000 0.001 0.008')])

wanghonglie commented 1 year ago

Are you sure you haven't made any changes to the code, given that image_demo.py runs without errors? You could also print the inputs of the COCO evaluator and check whether they look normal.
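A minimal sketch of such a check (this assumes `outputs` is the per-image result list returned by single_gpu_test in tools/test.py, and that for a mask model such as Cascade Mask R-CNN each entry is a (bbox_results, segm_results) tuple; adjust for your setup):

```python
import numpy as np

def count_images_with_dets(outputs, score_thr=0.3):
    """Count how many images have at least one detection above score_thr.

    `outputs` is assumed to be the list that tools/test.py passes on to
    dataset.evaluate(); bbox_results is a per-class list of Nx5 arrays whose
    last column is the confidence score.
    """
    n_with_dets = 0
    for result in outputs:
        bbox_results = result[0] if isinstance(result, tuple) else result
        if len(bbox_results) == 0:
            continue
        scores = np.concatenate([cls_bboxes[:, -1] for cls_bboxes in bbox_results])
        if (scores > score_thr).any():
            n_with_dets += 1
    print(f'{n_with_dets}/{len(outputs)} images have detections above {score_thr}')
```

If this prints something close to 0/5000 right before evaluation, the problem is in inference rather than in the evaluator.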

I deleted the conda environment and reinstalled everything again but it didn't seem to work. And the output is somehow split into three parts:

_python tools/test.py configs/convnext/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco.py checkpoints/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco_20220510201004-3d24f5a4.pth --eval bbox /home/neoxparker/mmdetection/mmdet/utils/setup_env.py:38: UserWarning: Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. warnings.warn( /home/neoxparker/mmdetection/mmdet/utils/setup_env.py:48: UserWarning: Setting MKL_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. warnings.warn( loading annotations into memory... Done (t=0.41s) creating index... index created! load checkpoint from local path: checkpoints/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco_20220510_201004-3d24f5a4.pth [>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 5000/5000, 0.5 task/s, elapsed: 10632s, ETA: 0s Evaluating bbox... Loading and preparing results... DONE (t=0.00s) creating index... index created! Running per image evaluation... (mmlab) neoxparker@Haoran-Nexus:~/mmdetection$ DONE (t=6.42s). Accumulating evaluation results... DONE (t=1.48s).

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.005 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 0.006 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = 0.005 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.001 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.008 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.003 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.003 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.003 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.000 (mmlab) neoxparker@Haoran-Nexus:~/mmdetection$ Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.007

OrderedDict([('bbox_mAP', 0.005), ('bbox_mAP_50', 0.006), ('bbox_mAP_75', 0.005), ('bbox_mAP_s', 0.0), ('bbox_mAP_m', 0.001), ('bbox_mAP_l', 0.008), ('bbox_mAP_copypaste', '0.005 0.006 0.005 0.000 0.001 0.008')])

Are you sure you haven't made any changes to the code? Since the image_demo.py can run without errors. Or you can print the inputs of the coco evaluator and check whether it is normal.

Neil-untitled commented 1 year ago

Dear @wanghonglie

Bro, I am a hundred percent sure I did not change anything (facepalm). Let me clarify the situation:

  1. test.py runs successfully on other models such as SSD and Faster R-CNN, and I get normal results and mAP.
  2. test.py just does not work for ConvNeXt, for some reason I don't know. I tried the --show-dir option, and in the result images I can see there are no detected objects; almost all images look the same as the originals. This should explain why the mAP is around 0.005, which I posted just a few hours ago. I guess the ConvNeXt detector is not even working properly.
  3. I guess the problem could be the PyTorch version, because I installed torch via conda and the version is 1.12, which may not be compatible with mmcv or mmdet. (I checked this today with different versions of torch; they all failed to run test.py with ConvNeXt. A quick way to dump the environment info for comparison is sketched at the end of this comment.)
  4. I tried image_demo.py with ConvNeXt and it worked fine, but test.py with ConvNeXt does not work at all (it cannot detect anything in the COCO val dataset).

Best wishes Neil
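A minimal sketch of the environment dump mentioned in point 3 (assuming the collect_env helper that mmdet 2.x exposes in mmdet.utils; run it in both the working Xavier NX environment and the failing WSL environment and diff the output):

```python
# Hedged sketch: print the environment info (PyTorch, CUDA, mmcv, mmdet
# versions, GPU, compiler, ...) so two setups can be compared line by line.
from mmdet.utils import collect_env

for name, value in collect_env().items():
    print(f'{name}: {value}')
```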

Neil-untitled commented 1 year ago

Hi @wanghonglie, may I ask which CUDA version you are using for mmcv?

wanghonglie commented 1 year ago

Hi @Neil-untitled, my CUDA version is 11.3:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_May__3_19:15:13_PDT_2021
Cuda compilation tools, release 11.3, V11.3.109
Build cuda_11.3.r11.3/compiler.29920130_0

wanghonglie commented 1 year ago

Hi, @Neil-untitled. Maybe you can try pytorch 1.0 version.

You can also try feeding the same input to test.py and image_demo.py to see whether their outputs differ, and if so, try to locate where the difference comes from.
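A rough sketch of that comparison (this uses the high-level API from mmdet.apis that image_demo.py is built on; the image path below is just a placeholder for any COCO val image):

```python
# Hedged sketch: run one val image through inference_detector (the same path
# image_demo.py uses) and print the top detection score, so it can be compared
# with what test.py produced for the same image.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/convnext/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco.py'
checkpoint_file = 'checkpoints/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco_20220510_201004-3d24f5a4.pth'

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'data/coco/val2017/000000000139.jpg')  # placeholder image
bbox_result = result[0] if isinstance(result, tuple) else result  # mask models return a tuple
top_score = max((b[:, -1].max() for b in bbox_result if len(b)), default=0.0)
print('top detection score:', top_score)
```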

shanglala commented 1 year ago

Hi. Have you solved your problem? I have the same problem. If you solved it, can you tell me how you did it?