open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

ValueError: need at least one array to concatenate #210

Closed pandigreat closed 5 years ago

pandigreat commented 5 years ago

I hit this error at np.concatenate(indices). I am training on my own dataset in COCO format.

python tools/train.py myconfigs/faster_rcnn_r50_fpn_1x.py --work_dir work_dir
2018-12-27 20:50:21,367 - INFO - Distributed training: False
2018-12-27 20:50:21,639 - INFO - load model from: modelzoo://resnet50
2018-12-27 20:50:21,782 - WARNING - unexpected key in source state_dict: fc.weight, fc.bias

missing keys in source state_dict: layer3.0.bn1.num_batches_tracked, layer2.1.bn1.num_batches_tracked, layer4.1.bn3.num_batches_tracked, layer2.2.bn2.num_batches_tracked, layer1.2.bn3.num_batches_tracked, layer3.2.bn3.num_batches_tracked, layer4.0.bn2.num_batches_tracked, layer4.2.bn2.num_batches_tracked, layer3.1.bn2.num_batches_tracked, layer1.2.bn2.num_batches_tracked, layer1.0.downsample.1.num_batches_tracked, layer3.2.bn2.num_batches_tracked, layer2.3.bn2.num_batches_tracked, layer4.0.downsample.1.num_batches_tracked, layer3.2.bn1.num_batches_tracked, layer1.0.bn2.num_batches_tracked, layer2.1.bn3.num_batches_tracked, layer2.2.bn3.num_batches_tracked, layer3.0.downsample.1.num_batches_tracked, layer3.1.bn1.num_batches_tracked, layer4.2.bn3.num_batches_tracked, layer1.0.bn1.num_batches_tracked, layer2.3.bn3.num_batches_tracked, layer1.1.bn2.num_batches_tracked, layer2.0.bn1.num_batches_tracked, bn1.num_batches_tracked, layer3.4.bn1.num_batches_tracked, layer3.5.bn1.num_batches_tracked, layer2.0.downsample.1.num_batches_tracked, layer3.5.bn2.num_batches_tracked, layer1.0.bn3.num_batches_tracked, layer4.0.bn1.num_batches_tracked, layer1.2.bn1.num_batches_tracked, layer3.5.bn3.num_batches_tracked, layer3.4.bn3.num_batches_tracked, layer2.0.bn3.num_batches_tracked, layer3.0.bn2.num_batches_tracked, layer3.0.bn3.num_batches_tracked, layer3.3.bn1.num_batches_tracked, layer2.3.bn1.num_batches_tracked, layer3.4.bn2.num_batches_tracked, layer1.1.bn3.num_batches_tracked, layer4.1.bn1.num_batches_tracked, layer3.3.bn3.num_batches_tracked, layer2.0.bn2.num_batches_tracked, layer2.2.bn1.num_batches_tracked, layer4.2.bn1.num_batches_tracked, layer4.1.bn2.num_batches_tracked, layer1.1.bn1.num_batches_tracked, layer3.1.bn3.num_batches_tracked, layer4.0.bn3.num_batches_tracked, layer3.3.bn2.num_batches_tracked, layer2.1.bn2.num_batches_tracked

loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
2018-12-27 20:50:23,237 - INFO - Start running, host: ed@PKU, work_dir: /data/code/mmdetection/work_dir
2018-12-27 20:50:23,237 - INFO - workflow: [('train', 1)], max: 12 epochs
Traceback (most recent call last):
  File "tools/train.py", line 88, in <module>
    main()
  File "tools/train.py", line 84, in main
    logger=logger)
  File "/home/ed/anaconda2/envs/python36/lib/python3.6/site-packages/mmdet-0.5.5+c5d8f00-py3.6.egg/mmdet/apis/train.py", line 59, in train_detector
    _non_dist_train(model, dataset, cfg, validate=validate)
  File "/home/ed/anaconda2/envs/python36/lib/python3.6/site-packages/mmdet-0.5.5+c5d8f00-py3.6.egg/mmdet/apis/train.py", line 121, in _non_dist_train
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/home/ed/anaconda2/envs/python36/lib/python3.6/site-packages/mmcv/runner/runner.py", line 349, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/ed/anaconda2/envs/python36/lib/python3.6/site-packages/mmcv/runner/runner.py", line 251, in train
    for i, data_batch in enumerate(data_loader):
  File "/home/ed/anaconda2/envs/python36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 501, in __iter__
    return _DataLoaderIter(self)
  File "/home/ed/anaconda2/envs/python36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 297, in __init__
    self._put_indices()
  File "/home/ed/anaconda2/envs/python36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in _put_indices
    indices = next(self.sample_iter, None)
  File "/home/ed/anaconda2/envs/python36/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 138, in __iter__
    for idx in self.sampler:
  File "/home/ed/anaconda2/envs/python36/lib/python3.6/site-packages/mmdet-0.5.5+c5d8f00-py3.6.egg/mmdet/datasets/loader/sampler.py", line 36, in __iter__
    indices = np.concatenate(indices)
ValueError: need at least one array to concatenate
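
For context, the failing line lives in mmdet's GroupSampler, which buckets images by an aspect-ratio flag and concatenates the per-group index arrays. A simplified sketch of the failure mode (not the actual mmdet source):

import numpy as np

# Each dataset image gets a flag (0 or 1) based on aspect ratio; the sampler
# collects the shuffled indices of each non-empty group and concatenates them.
flag = np.array([], dtype=np.int64)  # empty when every image was filtered out

indices = []
for group in range(2):
    group_indices = np.where(flag == group)[0]
    if len(group_indices) > 0:
        indices.append(group_indices)

# With zero usable images both groups are empty, `indices` stays [], and this
# raises "ValueError: need at least one array to concatenate".
indices = np.concatenate(indices)

So this error usually means the dataset ended up with zero usable images, for example because every image was filtered out for having no valid annotations.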

hellock commented 5 years ago

Could you show your config file?

pandigreat commented 5 years ago

Here is my config file:

# model settings
model = dict(
    type='FasterRCNN',
    pretrained='modelzoo://resnet50',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        style='pytorch'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_scales=[8],
        anchor_ratios=[0.5, 1.0, 2.0],
        anchor_strides=[4, 8, 16, 32, 64],
        target_means=[.0, .0, .0, .0],
        target_stds=[1.0, 1.0, 1.0, 1.0],
        use_sigmoid_cls=True),
    bbox_roi_extractor=dict(
        type='SingleRoIExtractor',
        roi_layer=dict(type='RoIAlign', out_size=7, sample_num=2),
        out_channels=256,
        featmap_strides=[4, 8, 16, 32]),
    bbox_head=dict(
        type='SharedFCBBoxHead',
        num_fcs=2,
        in_channels=256,
        fc_out_channels=1024,
        roi_feat_size=7,
        num_classes=81,
        target_means=[0., 0., 0., 0.],
        target_stds=[0.1, 0.1, 0.2, 0.2],
        reg_class_agnostic=False))
# model training and testing settings
train_cfg = dict(
    rpn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.7,
            neg_iou_thr=0.3,
            min_pos_iou=0.3,
            ignore_iof_thr=-1),
        sampler=dict(
            type='RandomSampler',
            num=256,
            pos_fraction=0.5,
            neg_pos_ub=-1,
            add_gt_as_proposals=False),
        allowed_border=0,
        pos_weight=-1,
        smoothl1_beta=1 / 9.0,
        debug=False),
    rcnn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.5,
            neg_iou_thr=0.5,
            min_pos_iou=0.5,
            ignore_iof_thr=-1),
        sampler=dict(
            type='RandomSampler',
            num=512,
            pos_fraction=0.25,
            neg_pos_ub=-1,
            add_gt_as_proposals=True),
        pos_weight=-1,
        debug=False))
test_cfg = dict(
    rpn=dict(
        nms_across_levels=False,
        nms_pre=2000,
        nms_post=2000,
        max_num=2000,
        nms_thr=0.7,
        min_bbox_size=0),
    rcnn=dict(
        score_thr=0.05, nms=dict(type='nms', iou_thr=0.5), max_per_img=100)
    # soft-nms is also supported for rcnn testing
    # e.g., nms=dict(type='soft_nms', iou_thr=0.5, min_score=0.05)
)
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
data = dict(
    imgs_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/train2017.json',
        img_prefix=data_root + 'train2017/',
        img_scale=(1333, 800),
        img_norm_cfg=img_norm_cfg,
        size_divisor=32,
        flip_ratio=0.5,
        with_mask=False,
        with_crowd=True,
        with_label=True),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/val2017.json',
        img_prefix=data_root + 'val2017/',
        img_scale=(1333, 800),
        img_norm_cfg=img_norm_cfg,
        size_divisor=32,
        flip_ratio=0,
        with_mask=False,
        with_crowd=True,
        with_label=True),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/val2017.json',
        img_prefix=data_root + 'val2017/',
        img_scale=(1333, 800),
        img_norm_cfg=img_norm_cfg,
        size_divisor=32,
        flip_ratio=0,
        with_mask=False,
        with_label=False,
        test_mode=True))
# optimizer
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=1.0 / 3,
    step=[8, 11])
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        # dict(type='TensorboardLoggerHook')
    ])
# yapf:enable
# runtime settings
total_epochs = 12
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/faster_rcnn_r50_fpn_1x'
load_from = None
resume_from = None
workflow = [('train', 1)]
hellock commented 5 years ago

I just tried this config and it works well.

pandigreat commented 5 years ago

Maybe there is something wrong with my dataset. It is a custom dataset in COCO format; I want to use the model to run detection on jpg images.

Here is my directory tree:

data
└── coco
    ├── annotations
    │   ├── val2017.json
    │   └── train2017.json
    ├── train2017
    └── val2017

Here is my val2017.json

{"info": {"description": "This is dataset.", "version": "1.0", "year": 2018, "contributor": "Mmm2333", "date_created": "2018-01-27 09:11:52.357475"}, "categories": [{"supercategory": "Pi c", "id": 1, "name": "pic"}], "image": [{"license": 1, "file_name": "page0012_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 1}, {"license": 1, "file_name": "page 0017_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 2}, {"license": 1, "file_name": "page0018_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 3}, {"license": 1, "file_name": "page0019_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 4}, {"license": 1, "file_name": "page0020_4.jpg", "heigh t": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 5}, {"license": 1, "file_name": "page0021_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 6} , {"license": 1, "file_name": "page0022_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 7}, {"license": 1, "file_name": "page0023_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 8}, {"license": 1, "file_name": "page0030_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 9}, {"license": 1, "f ile_name": "page0031_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 10}, {"license": 1, "file_name": "page0032_4.jpg", "height": 2050, "width": 1444, "date_captur ed": "2014-12-21 12:23:23", "id": 11}, {"license": 1, "file_name": "page0033_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 12}, {"license": 1, "file_name": "page 0034_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 13}, {"license": 1, "file_name": "page0035_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 14}, {"license": 1, "file_name": "page0036_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 15}, {"license": 1, "file_name": "page0037_4.jpg", "he ight": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 16}, {"license": 1, "file_name": "page0038_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id" : 17}, {"license": 1, "file_name": "page0039_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 18}, {"license": 1, "file_name": "page0040_4.jpg", "height": 2050, "wi dth": 1444, "date_captured": "2014-12-21 12:23:23", "id": 19}, {"license": 1, "file_name": "page0041_4.jpg", "height": 2050, "width": 1444, "date_captured": "2014-12-21 12:23:23", "id": 20}], "license s": [{"url": "http://www.usa.gov/copyright.shtml", "id": 1, "name": "Mobvoi"}], "annotations": [{"id": 1, "category_id": 18, "image_id": 12, "bbox": [71, 181, 1270, 866], "segmentation": [[71, 181, 13 41, 181, 1341, 1047, 71, 1047]], "area": 1099820, "iscrowd": 0}, {"id": 2, "category_id": 18, "image_id": 12, "bbox": [65, 1055, 1272, 862], "segmentation": [[65, 1055, 1337, 1055, 1337, 1917, 65, 191 7]], "area": 1096464, "iscrowd": 0}, {"id": 3, "category_id": 18, "image_id": 17, "bbox": [115, 297, 1266, 562], "segmentation": [[115, 297, 1381, 297, 1381, 859, 115, 859]], "area": 711492, "iscrowd" : 0}, {"id": 4, "category_id": 18, "image_id": 17, "bbox": [109, 869, 1280, 556], "segmentation": [[109, 869, 1389, 869, 1389, 1425, 109, 1425]], "area": 
711680, "iscrowd": 0}, {"id": 5, "category_id" : 18, "image_id": 18, "bbox": [727, 1555, 605, 382], "segmentation": [[727, 1555, 1332, 1555, 1332, 1937, 727, 1937]], "area": 231110, "iscrowd": 0}, {"id": 6, "category_id": 18, "image_id": 19, "bbox ": [114, 890, 610, 778], "segmentation": [[114, 890, 724, 890, 724, 1668, 114, 1668]], "area": 474580, "iscrowd": 0}, {"id": 7, "category_id": 18, "image_id": 20, "bbox": [87, 216, 1250, 560], "segmen tation": [[87, 216, 1337, 216, 1337, 776, 87, 776]], "area": 700000, "iscrowd": 0}, {"id": 8, "category_id": 18, "image_id": 20, "bbox": [727, 897, 602, 790], "segmentation": [[727, 897, 1329, 897, 13 29, 1687, 727, 1687]], "area": 475580, "iscrowd": 0}, {"id": 9, "category_id": 18, "image_id": 21, "bbox": [111, 108, 605, 355], "segmentation": [[111, 108, 716, 108, 716, 463, 111, 463]], "area": 214 775, "iscrowd": 0}, {"id": 10, "category_id": 18, "image_id": 21, "bbox": [761, 116, 613, 479], "segmentation": [[761, 116, 1374, 116, 1374, 595, 761, 595]], "area": 293627, "iscrowd": 0}, {"id": 11, "category_id": 18, "image_id": 22, "bbox": [77, 900, 608, 771], "segmentation": [[77, 900, 685, 900, 685, 1671, 77, 1671]], "area": 468768, "iscrowd": 0}, {"id": 12, "category_id": 18, "image_id": 23, "bbox": [114, 292, 605, 395], "segmentation": [[114, 292, 719, 292, 719, 687, 114, 687]], "area": 238975, "iscrowd": 0}, {"id": 13, "category_id": 18, "image_id": 23, "bbox": [764, 611, 608, 384], "s egmentation": [[764, 611, 1372, 611, 1372, 995, 764, 995]], "area": 233472, "iscrowd": 0}, {"id": 14, "category_id": 18, "image_id": 30, "bbox": [74, 276, 613, 395], "segmentation": [[74, 276, 687, 27 6, 687, 671, 74, 671]], "area": 242135, "iscrowd": 0}, {"id": 15, "category_id": 18, "image_id": 31, "bbox": [764, 726, 608, 390], "segmentation": [[764, 726, 1372, 726, 1372, 1116, 764, 1116]], "area ": 237120, "iscrowd": 0}, {"id": 16, "category_id": 18, "image_id": 32, "bbox": [727, 113, 605, 453], "segmentation": [[727, 113, 1332, 113, 1332, 566, 727, 566]], "area": 274065, "iscrowd": 0}, {"id" : 17, "category_id": 18, "image_id": 32, "bbox": [727, 179, 600, 382], "segmentation": [[727, 179, 1327, 179, 1327, 561, 727, 561]], "area": 229200, "iscrowd": 0}, {"id": 18, "category_id": 18, "image _id": 32, "bbox": [724, 1126, 608, 379], "segmentation": [[724, 1126, 1332, 1126, 1332, 1505, 724, 1505]], "area": 230432, "iscrowd": 0}, {"id": 19, "category_id": 18, "image_id": 32, "bbox": [724, 15 18, 603, 382], "segmentation": [[724, 1518, 1327, 1518, 1327, 1900, 724, 1900]], "area": 230346, "iscrowd": 0}, {"id": 20, "category_id": 18, "image_id": 33, "bbox": [119, 1200, 605, 390], "segmentati on": [[119, 1200, 724, 1200, 724, 1590, 119, 1590]], "area": 235950, "iscrowd": 0}, {"id": 21, "category_id": 18, "image_id": 33, "bbox": [766, 924, 600, 381], "segmentation": [[766, 924, 1366, 924, 1 366, 1305, 766, 1305]], "area": 228600, "iscrowd": 0}, {"id": 22, "category_id": 18, "image_id": 34, "bbox": [74, 1250, 611, 387], "segmentation": [[74, 1250, 685, 1250, 685, 1637, 74, 1637]], "area": 236457, "iscrowd": 0}, {"id": 23, "category_id": 18, "image_id": 35, "bbox": [114, 611, 615, 379], "segmentation": [[114, 611, 729, 611, 729, 990, 114, 990]], "area": 233085, "iscrowd": 0}, {"id": 24 , "category_id": 18, "image_id": 35, "bbox": [114, 1005, 608, 387], "segmentation": [[114, 1005, 722, 1005, 722, 1392, 114, 1392]], "area": 235296, "iscrowd": 0}, {"id": 25, "categoryid": 18, "image id": 36, "bbox": [722, 240, 615, 389], "segmentation": [[722, 240, 1337, 240, 1337, 629, 722, 
629]], "area": 239235, "iscrowd": 0}, {"id": 26, "category_id": 18, "image_id": 37, "bbox": [106, 382, 618 , 384], "segmentation": [[106, 382, 724, 382, 724, 766, 106, 766]], "area": 237312, "iscrowd": 0}, {"id": 27, "category_id": 18, "image_id": 37, "bbox": [108, 779, 614, 403], "segmentation": [[108, 77 9, 722, 779, 722, 1182, 108, 1182]], "area": 247442, "iscrowd": 0}, {"id": 28, "category_id": 18, "image_id": 38, "bbox": [69, 987, 618, 395], "segmentation": [[69, 987, 687, 987, 687, 1382, 69, 1382] ], "area": 244110, "iscrowd": 0}, {"id": 29, "category_id": 18, "image_id": 39, "bbox": [114, 213, 610, 382], "segmentation": [[114, 213, 724, 213, 724, 595, 114, 595]], "area": 233020, "iscrowd": 0}, {"id": 30, "category_id": 18, "image_id": 39, "bbox": [106, 611, 623, 384], "segmentation": [[106, 611, 729, 611, 729, 995, 106, 995]], "area": 239232, "iscrowd": 0}, {"id": 31, "category_id": 18, "i mage_id": 39, "bbox": [761, 213, 616, 390], "segmentation": [[761, 213, 1377, 213, 1377, 603, 761, 603]], "area": 240240, "iscrowd": 0}, {"id": 32, "category_id": 18, "image_id": 39, "bbox": [761, 968 , 621, 393], "segmentation": [[761, 968, 1382, 968, 1382, 1361, 761, 1361]], "area": 244053, "iscrowd": 0}, {"id": 33, "category_id": 18, "image_id": 39, "bbox": [764, 1366, 615, 424], "segmentation": [[764, 1366, 1379, 1366, 1379, 1790, 764, 1790]], "area": 260760, "iscrowd": 0}, {"id": 34, "category_id": 18, "image_id": 40, "bbox": [716, 297, 627, 393], "segmentation": [[716, 297, 1343, 297, 134 3, 690, 716, 690]], "area": 246411, "iscrowd": 0}, {"id": 35, "category_id": 18, "image_id": 41, "bbox": [758, 274, 624, 392], "segmentation": [[758, 274, 1382, 274, 1382, 666, 758, 666]], "area": 244 608, "iscrowd": 0}]}

pandigreat commented 5 years ago

OK, I fixed my bug. I had mixed up the width and height fields in my annotations. Thank you guys.
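
For anyone looking for the concrete change: a minimal sketch of a checker (check_coco_sizes is a hypothetical helper, not part of mmdetection) that compares the width/height recorded in a COCO annotation file against the actual images on disk:

import json
import os
from PIL import Image

def check_coco_sizes(ann_file, img_dir):
    with open(ann_file) as f:
        coco = json.load(f)
    for info in coco.get('images', []):
        path = os.path.join(img_dir, info['file_name'])
        with Image.open(path) as im:
            w, h = im.size  # PIL reports (width, height)
        if (info['width'], info['height']) != (w, h):
            print('%s: json says %dx%d, image is %dx%d (width/height swapped?)'
                  % (info['file_name'], info['width'], info['height'], w, h))

check_coco_sizes('data/coco/annotations/train2017.json', 'data/coco/train2017/')

Any mismatch it prints is a candidate for the swapped width/height fix described above.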

CodeXiaoLingYun commented 4 years ago

Ok , I fixed my bugs. I mixed up width and height params in annotations. Thank you guys.

I have the same problem, but I cannot understand how you solved it. Can you tell me what you changed?

TrachIvan commented 4 years ago

Hi,

For me, I resolved it via the classes=classes entry in the training dict in coco_detection.py, but then my test results came out empty, so it seems DetectoRS still has some issues on mmdet 2.0.

Best Regards, Ivan
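
On mmdet 2.x, a common trigger for this same error is a classes tuple that does not match the category names in the annotation file: the dataset then keeps zero annotations and the sampler fails as above. A hedged sketch of that part of a config, using the 'pic' category from the json posted earlier as an assumed class name:

# the tuple must match the "name" fields under "categories" in the json
classes = ('pic',)
data = dict(
    train=dict(
        type='CocoDataset',
        classes=classes,
        ann_file='data/coco/annotations/train2017.json',
        img_prefix='data/coco/train2017/'))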

alaa-shubbak commented 3 years ago

I have the same problem and could not fix it. My images are in .png format. Could this format be my problem? Should I change 'jpg' to 'png' somewhere in the config files?