hding2455 / DSC

Released code for CVPR 2021: Deeply Shape-guided Cascade for Instance Segmentation

evaluating model on Cityscapes #14

Open iuliaalda opened 2 years ago

iuliaalda commented 2 years ago

Hello, I started training the model on the Cityscapes dataset (I have also converted the annotations to COCO style) and then wanted to run the test.py file. I have used both cityscapes and bbox & segm as eval args, together and separately, but I keep obtaining the following results. With the cityscapes arg:

###############################
what        : AP      AP_50%
###############################
person      : nan     nan
rider       : nan     nan
car         : nan     nan
truck       : nan     nan
bus         : nan     nan
train       : nan     nan
motorcycle  : nan     nan
bicycle     : nan     nan
average     : nan     nan

With bbox (and segm) args:

Evaluating bbox...
Loading and preparing results...
DONE (t=1.14s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=2.55s).
Accumulating evaluation results...
DONE (t=0.68s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
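For context on the output above: a wall of -1.000 values is what pycocotools produces when every evaluation cell has no ground truth to match against. A minimal sketch of the relevant averaging step (simplified from COCOeval's summarize logic; `summarize_ap` is an illustrative name, not the actual API):

```python
import numpy as np

def summarize_ap(precision_cells):
    # pycocotools marks every (IoU, category, area) cell that has no
    # ground truth with -1, averages only over the valid cells, and
    # reports -1 when no valid cell exists at all -- which matches the
    # all--1.000 output above. (Simplified sketch, not the exact
    # COCOeval implementation.)
    valid = precision_cells[precision_cells > -1]
    return float(np.mean(valid)) if valid.size else -1.0

print(summarize_ap(np.array([0.5, 0.7, -1.0])))  # 0.6: some cells have GT
print(summarize_ap(np.full(3, -1.0)))            # -1.0: no GT anywhere
```

So a uniform -1.000 usually means the annotation file being evaluated against is effectively empty, rather than the predictions being bad.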

However, while training, at the end of every epoch after the eval runs, I get normal-looking results, e.g. 0.390 AP. This is the command that I run:

python tools/test.py configs/dsc/dsc_r50_fpn_1x_cityscapes.py work_dirs/dsc_r50_fpn_1x_cityscapes/epoch_8.pth --out results_city_ep8.pkl --eval cityscapes bbox segm

Are there other arguments that I should add to the command? Thanks!

hding2455 commented 2 years ago

You can trust the results from training; let me check what's wrong with the test script.

hding2455 commented 2 years ago

Oh I see, you used --eval cityscapes bbox segm. I don't think you need cityscapes here; just use bbox and segm and see whether that fixes your issue.

iuliaalda commented 2 years ago

I ran the script with just bbox and segm, but all the values are still -1, as above.

hding2455 commented 2 years ago

Have you tried it on COCO? Does the same thing happen? I actually got the -1s when I ran it on the COCO test set, where no annotations are provided.
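One quick way to test this hypothesis is to check whether the converted COCO-style JSON that the test config points at actually contains annotations. A minimal sketch (the toy dict and file name in the comment are illustrative, not from this repo):

```python
import json
from collections import Counter

def check_coco_annotations(ann):
    """Return (num_images, num_annotations, image ids with no annotations)."""
    img_ids = {img["id"] for img in ann.get("images", [])}
    per_image = Counter(a["image_id"] for a in ann.get("annotations", []))
    missing = sorted(i for i in img_ids if i not in per_image)
    return len(img_ids), sum(per_image.values()), missing

# Toy in-memory example; for the real check, load the converted file
# that your config's ann_file points at, e.g.
#   ann = json.load(open("path/to/your_converted_val.json"))
# (the path is a placeholder, not a known file in this repo).
toy = {
    "images": [{"id": 1}, {"id": 2}],
    "annotations": [{"id": 10, "image_id": 1, "category_id": 24}],
}
print(check_coco_annotations(toy))  # (2, 1, [2]): image 2 has no annotations
```

If the file the test config uses reports zero annotations, the -1.000 output is expected, and the fix is to point data.test at an annotated split.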

iuliaalda commented 2 years ago

Yes, I did try it on COCO and I didn't encounter this issue.

hding2455 commented 2 years ago

And you did get results when running train.py on the same test dataset?

iuliaalda commented 2 years ago

Yes, it works when training. For example, these are the results I got after epoch 10 finished:

[>>] 500/500, 0.6 task/s, elapsed: 776s, ETA: 0s
loading annotations into memory...
Done (t=0.07s)
creating index...
index created!
loading annotations into memory...
Done (t=0.26s)
creating index...
index created!
2022-02-21 03:38:06,037 - mmdet - INFO - Evaluating bbox...
Loading and preparing results...
DONE (t=0.36s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=7.89s).
Accumulating evaluation results...
DONE (t=0.61s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.455
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.694
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.476
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.219
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.463
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.644
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.289
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.520
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.560
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.309
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.578
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.746
2022-02-21 03:38:14,935 - mmdet - INFO - Evaluating segm...
Loading and preparing results...
DONE (t=0.49s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type segm
DONE (t=8.30s).
Accumulating evaluation results...
DONE (t=0.61s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.399
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.665
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.397
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.156
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.374
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.595
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.274
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.464
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.493
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.222
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.480
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.705
2022-02-21 03:38:24,477 - mmdet - INFO - Epoch [10][23720/23720] lr: 1.500e-03, bbox_mAP: 0.4550, bbox_mAP_50: 0.6940, bbox_mAP_75: 0.4760, bbox_mAP_s: 0.2190, bbox_mAP_m: 0.4630, bbox_mAP_l: 0.6440, bbox_mAP_copypaste: 0.455 0.694 0.476 0.219 0.463 0.644, segm_mAP: 0.3990, segm_mAP_50: 0.6650, segm_mAP_75: 0.3970, segm_mAP_s: 0.1560, segm_mAP_m: 0.3740, segm_mAP_l: 0.5950, segm_mAP_copypaste: 0.399 0.665 0.397 0.156 0.374 0.595

And this is the command, if it helps:

python tools/train.py configs/dsc/dsc_r50_fpn_1x_cityscapes.py --resume-from /content/drive/MyDrive/DSC/work_dirs/dsc_r50_fpn_1x_cityscapes/epoch_9.pth

hding2455 commented 2 years ago

That is so strange; I don't really know what's wrong. There is another way to check: take your dataset and run it with another model (maybe Mask R-CNN) in the latest mmdet repo and see whether the same issue appears, because the train and test code here are basically taken from some version of mmdet.