open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

Problems with cityscapes evaluation #4037

Open zzq96 opened 4 years ago

zzq96 commented 4 years ago

Hello, I have a problem with Cityscapes evaluation. I downloaded the Cityscapes dataset and prepared it with:

pip install cityscapesscripts
python tools/convert_datasets/cityscapes.py ./data/cityscapes --nproc 8 --out_dir ./data/cityscapes/annotations
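
For reference, the converter writes COCO-style JSON files into --out_dir, so the data root should end up looking roughly like this (only the directories and files referenced by the commands and configs in this thread are shown; other script versions may also emit a test JSON):

./data/cityscapes/
├── annotations/
│   ├── instancesonly_filtered_gtFine_train.json
│   └── instancesonly_filtered_gtFine_val.json
├── gtFine/          # original Cityscapes annotations, read by the converter
└── leftImg8bit/
    ├── train/
    └── val/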

I used the pretrained Faster R-CNN model from here. Then I changed the test config in cityscapes_detection.py to use the val split, like this:

test=dict(
    type=dataset_type,
    ann_file=data_root +
    'annotations/instancesonly_filtered_gtFine_val.json',
    img_prefix=data_root + 'leftImg8bit/val/',
    pipeline=test_pipeline))

When I try to evaluate with this command:

python tools/test.py configs/cityscapes/faster_rcnn_r50_fpn_1x_cityscapes.py checkpoints/faster_rcnn_r50_fpn_1x_cityscapes_20200502-829424c0.pth --eval bbox

I get these results:

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.403
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 0.653
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.172
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.409
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.614
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.462
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.462
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.462
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.209
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.465
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.692

The mAP_75 equals -1. What should I do to fix it?

ZwwWayne commented 4 years ago

This issue is not related to the Cityscapes dataset but to the numpy version. You can upgrade your numpy version and try again.
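
In a pip-managed environment that would be something like:

pip install --upgrade numpy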

zzq96 commented 4 years ago

Oh, thank you!

zzq96 commented 4 years ago

Hi, I have upgraded my numpy and it is 1.19.2 now, the latest version, but the mAP_75 still equals -1. For COCO dataset evaluation, however, the mAP_75 is fine. Any suggestions?

ZwwWayne commented 4 years ago

@xvjiarui please have a look at that.

xvjiarui commented 3 years ago

Hi @zzq96, I am not sure of the reason. Did you run COCO and Cityscapes in the same environment? You may also try the latest mmpycocotools.
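
Switching is usually just a matter of replacing the plain pycocotools package (a sketch, assuming a pip environment; uninstalling first avoids the two packages conflicting):

pip uninstall -y pycocotools
pip install --upgrade mmpycocotools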

BestVimmerJP commented 3 years ago

Did you solve this problem? I have the exact same problem in the provided Docker environment.

ruiningTang commented 3 years ago

Hi, I guess this problem may come from dataset_type = 'CityscapesDataset' in configs/_base_/datasets/cityscapes_detection.py. Using dataset_type = 'CocoDataset' instead solves the issue. The full cityscapes_detection.py is as follows:

# dataset_type = 'CityscapesDataset'
dataset_type = 'CocoDataset'
data_root = './data/cityscapes/'
CLASSES = ('person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle',
            'bicycle')

img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='Resize', img_scale=[(2048, 800), (2048, 1024)], keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 1024),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=1,
    workers_per_gpu=2,
    train=dict(
        type='RepeatDataset',
        times=8,
        dataset=dict(
            type=dataset_type,
            ann_file=data_root +
            'annotations/instancesonly_filtered_gtFine_train.json',
            img_prefix=data_root + 'leftImg8bit/train/',
            pipeline=train_pipeline,
            classes=CLASSES)),
    val=dict(
        type=dataset_type,
        ann_file=data_root +
        'annotations/instancesonly_filtered_gtFine_val.json',
        img_prefix=data_root + 'leftImg8bit/val/',
        pipeline=test_pipeline,
        classes=CLASSES),
    test=dict(
        type=dataset_type,
        ann_file=data_root +
        'annotations/instancesonly_filtered_gtFine_val.json',
        img_prefix=data_root + 'leftImg8bit/val/',
        pipeline=test_pipeline,
        classes=CLASSES))
evaluation = dict(interval=1, metric='bbox')
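
With this config in place, the evaluation from the original report can be rerun unchanged, e.g.:

python tools/test.py configs/cityscapes/faster_rcnn_r50_fpn_1x_cityscapes.py checkpoints/faster_rcnn_r50_fpn_1x_cityscapes_20200502-829424c0.pth --eval bbox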

abred commented 2 years ago

Your output looks like the COCO output if I am not mistaken; I had a similar issue there. For me it was this line: https://github.com/cocodataset/cocoapi/blob/8c9bcc3cf640524c4c20a9c40e89cb6a2f2fa0e9/PythonAPI/pycocotools/cocoeval.py#L442 I am not sure why, but sometimes there are floating point issues. Changing it to np.isclose(iouThr, p.iouThrs) helped.
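
To see the failure mode in isolation, here is a minimal sketch (assuming IoU thresholds built with np.linspace the way pycocotools' Params.setDetParams does; the exact expression differs between versions):

import numpy as np

# pycocotools generates its ten IoU thresholds 0.50, 0.55, ..., 0.95 with
# np.linspace; depending on the numpy version, the entry meant to be 0.75
# can carry a tiny rounding error.
iou_thrs = np.linspace(0.5, 0.95, int(np.round((0.95 - 0.5) / 0.05)) + 1, endpoint=True)

# The summary step selects one threshold by exact equality, as in the line
# linked above; an empty match is what ends up being reported as -1.
print(np.where(0.75 == iou_thrs)[0])            # may be empty on affected setups

# Comparing with a tolerance always finds the index of 0.75.
print(np.where(np.isclose(0.75, iou_thrs))[0])  # [5]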

kv1830 commented 1 year ago

It really helped!