open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

ValueError: need at least one array to concatenate #1020

Closed YCICI closed 4 years ago

YCICI commented 5 years ago

I get this error. I suspect my data may be wrong, but I don't know how to fix it. Can anyone help me?

YCICI commented 5 years ago

This is the error:

```
2019-07-18 14:42:38,875 - INFO - Distributed training: False
2019-07-18 14:42:39,383 - INFO - load model from: modelzoo://resnet101
2019-07-18 14:42:39,561 - WARNING - unexpected key in source state_dict: fc.weight, fc.bias
```

```
missing keys in source state_dict: layer3.22.bn2.num_batches_tracked, layer4.2.bn2.num_batches_tracked, layer2.1.bn3.num_batches_tracked, layer3.20.bn1.num_batches_tracked, layer3.6.bn3.num_batches_tracked, layer3.21.bn3.num_batches_tracked, layer3.9.bn3.num_batches_tracked, layer3.14.bn3.num_batches_tracked, layer3.13.bn3.num_batches_tracked, layer3.15.bn1.num_batches_tracked, layer3.18.bn2.num_batches_tracked, layer3.0.bn3.num_batches_tracked, layer4.0.bn1.num_batches_tracked, layer1.0.bn3.num_batches_tracked, layer2.1.bn1.num_batches_tracked, layer3.15.bn2.num_batches_tracked, layer3.0.downsample.1.num_batches_tracked, layer4.1.bn3.num_batches_tracked, layer3.7.bn3.num_batches_tracked, layer3.3.bn3.num_batches_tracked, layer3.6.bn1.num_batches_tracked, layer3.17.bn3.num_batches_tracked, layer3.19.bn1.num_batches_tracked, layer1.0.bn2.num_batches_tracked, layer2.2.bn1.num_batches_tracked, layer3.18.bn1.num_batches_tracked, layer2.1.bn2.num_batches_tracked, layer1.2.bn1.num_batches_tracked, layer4.1.bn2.num_batches_tracked, layer1.1.bn3.num_batches_tracked, layer3.14.bn2.num_batches_tracked, layer3.16.bn2.num_batches_tracked, layer3.4.bn1.num_batches_tracked, layer4.2.bn3.num_batches_tracked, layer3.10.bn3.num_batches_tracked, layer3.12.bn2.num_batches_tracked, layer3.13.bn2.num_batches_tracked, layer3.1.bn1.num_batches_tracked, layer3.7.bn1.num_batches_tracked, bn1.num_batches_tracked, layer1.1.bn2.num_batches_tracked, layer3.20.bn3.num_batches_tracked, layer3.19.bn3.num_batches_tracked, layer1.1.bn1.num_batches_tracked, layer3.12.bn1.num_batches_tracked, layer1.0.downsample.1.num_batches_tracked, layer3.4.bn2.num_batches_tracked, layer4.0.bn2.num_batches_tracked, layer2.2.bn3.num_batches_tracked, layer3.13.bn1.num_batches_tracked, layer3.7.bn2.num_batches_tracked, layer3.5.bn1.num_batches_tracked, layer3.21.bn2.num_batches_tracked, layer2.0.downsample.1.num_batches_tracked, layer3.11.bn2.num_batches_tracked, layer3.19.bn2.num_batches_tracked,
layer3.1.bn3.num_batches_tracked, layer4.2.bn1.num_batches_tracked, layer2.0.bn2.num_batches_tracked, layer3.14.bn1.num_batches_tracked, layer3.9.bn1.num_batches_tracked, layer3.17.bn2.num_batches_tracked, layer3.22.bn3.num_batches_tracked, layer2.0.bn3.num_batches_tracked, layer3.2.bn1.num_batches_tracked, layer2.0.bn1.num_batches_tracked, layer3.12.bn3.num_batches_tracked, layer3.1.bn2.num_batches_tracked, layer3.20.bn2.num_batches_tracked, layer3.5.bn3.num_batches_tracked, layer3.22.bn1.num_batches_tracked, layer2.3.bn1.num_batches_tracked, layer2.3.bn2.num_batches_tracked, layer1.2.bn3.num_batches_tracked, layer3.16.bn1.num_batches_tracked, layer3.8.bn3.num_batches_tracked, layer2.2.bn2.num_batches_tracked, layer3.3.bn2.num_batches_tracked, layer3.10.bn1.num_batches_tracked, layer3.9.bn2.num_batches_tracked, layer3.2.bn2.num_batches_tracked, layer3.21.bn1.num_batches_tracked, layer3.11.bn3.num_batches_tracked, layer3.4.bn3.num_batches_tracked, layer3.0.bn1.num_batches_tracked, layer3.5.bn2.num_batches_tracked, layer3.3.bn1.num_batches_tracked, layer4.0.downsample.1.num_batches_tracked, layer2.3.bn3.num_batches_tracked, layer3.0.bn2.num_batches_tracked, layer3.18.bn3.num_batches_tracked, layer1.0.bn1.num_batches_tracked, layer3.16.bn3.num_batches_tracked, layer3.11.bn1.num_batches_tracked, layer3.8.bn2.num_batches_tracked, layer4.0.bn3.num_batches_tracked, layer3.10.bn2.num_batches_tracked, layer3.17.bn1.num_batches_tracked, layer4.1.bn1.num_batches_tracked, layer3.8.bn1.num_batches_tracked, layer3.15.bn3.num_batches_tracked, layer3.2.bn3.num_batches_tracked, layer3.6.bn2.num_batches_tracked, layer1.2.bn2.num_batches_tracked
```

```
loading annotations into memory...
Done (t=2.06s)
creating index...
index created!
2019-07-18 14:42:43,392 - INFO - Start running, host: yyy@yyy-MS-7A71, work_dir: /home/yyy/mmdetection/work_dirs
2019-07-18 14:42:43,392 - INFO - workflow: [('train', 1)], max: 12 epochs
Traceback (most recent call last):
  File "tools/train.py", line 98, in <module>
    main()
  File "tools/train.py", line 94, in main
    logger=logger)
  File "/home/yyy/mmdetection/mmdet/apis/train.py", line 62, in train_detector
    _non_dist_train(model, dataset, cfg, validate=validate)
  File "/home/yyy/mmdetection/mmdet/apis/train.py", line 223, in _non_dist_train
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/home/yyy/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/runner.py", line 358, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/yyy/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/runner.py", line 260, in train
    for i, data_batch in enumerate(data_loader):
  File "/home/yyy/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch-1.1.0-py3.7-linux-x86_64.egg/torch/utils/data/dataloader.py", line 193, in __iter__
    return _DataLoaderIter(self)
  File "/home/yyy/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch-1.1.0-py3.7-linux-x86_64.egg/torch/utils/data/dataloader.py", line 493, in __init__
    self._put_indices()
  File "/home/yyy/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch-1.1.0-py3.7-linux-x86_64.egg/torch/utils/data/dataloader.py", line 591, in _put_indices
    indices = next(self.sample_iter, None)
  File "/home/yyy/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch-1.1.0-py3.7-linux-x86_64.egg/torch/utils/data/sampler.py", line 172, in __iter__
    for idx in self.sampler:
  File "/home/yyy/mmdetection/mmdet/datasets/loader/sampler.py", line 63, in __iter__
    indices = np.concatenate(indices)
ValueError: need at least one array to concatenate
```
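For context: the traceback ends in mmdetection's `GroupSampler` (`mmdet/datasets/loader/sampler.py`), which collects one index array per aspect-ratio group and concatenates them. If the dataset ends up with zero usable images (for example, every image is filtered out because no annotation can be matched to the configured classes), the list is empty and NumPy raises exactly this error. A minimal sketch of the failure mode:

```python
import numpy as np

# Minimal reproduction: GroupSampler builds a list of per-group index
# arrays and calls np.concatenate on it. When no images survive
# annotation filtering, the list is empty and NumPy refuses to
# concatenate an empty sequence.
indices = []  # no images survived filtering
try:
    np.concatenate(indices)
except ValueError as err:
    print(err)  # -> need at least one array to concatenate
```

So the error is a symptom: the real question is why the dataset is empty after filtering.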

YCICI commented 5 years ago

And this is my val512.json:

```json
{ "images": [
  { "id": 1, "file_name": "D:/z_personal_file/com/datase2coco/validation/fe600639ac5f36c1.jpg", "height": 341, "width": 512 },
  { "id": 2, "file_name": "D:/z_personal_file/com/datase2coco/validation/ba82c70cc6cdf449.jpg", "height": 512, "width": 277 },
  { "id": 3, "file_name": "D:/z_personal_file/com/datase2coco/validation/e3ffa4c868b11b15.jpg", "height": 307, "width": 512 },
  { "id": 4, "file_name": "D:/z_personal_file/com/datase2coco/validation/7d00af2927a57eeb.jpg", "height": 384, "width": 512 },
  { "id": 5, "file_name": "D:/z_personal_file/com/datase2coco/validation/914dd6fb5eb17e85.jpg", "height": 512, "width": 362 },
  { "id": 6, "file_name": "D:/z_personal_file/com/datase2coco/validation/eb59a7a5d5518d31.jpg", "height": 342, "width": 512 },
  { "id": 7, "file_name": "D:/z_personal_file/com/datase2coco/validation/4ea213d7e78c2d41.jpg", "height": 512, "width": 512 },
  { "id": 8, "file_name": "D:/z_personal_file/com/datase2coco/validation/392e97d541b21104.jpg", "height": 384, "width": 512 },
  { "id": 9, "file_name": "D:/z_personal_file/com/datase2coco/validation/2673e8f3d459e3bf.jpg", "height": 340, "width": 512 },
  { "id": 10, "file_name": "D:/z_personal_file/com/datase2coco/validation/2519f63ec4883bd2.jpg", "height": 384, "width": 512 },
  { "id": 11, "file_name": "D:/z_personal_file/com/datase2coco/validation/61ead689dba1ebc5.jpg", "height": 384, "width": 512 },
  { "id": 12, "file_name": "D:/z_personal_file/com/datase2coco/validation/2f9ea40fc5161426.jpg", "height": 512, "width": 366 },
  { "id": 13, "file_name": "D:/z_personal_file/com/datase2coco/validation/6a5a69bc0ed96330.jpg", "height": 512, "width": 384 },
  { "id": 14, "file_name": "D:/z_personal_file/com/datase2coco/validation/2fed663b4eb60fc8.jpg", "height": 342, "width": 512 },
  { "id": 15, "file_name": "D:/z_personal_file/com/datase2coco/validation/9728bc1301475043.jpg", "height": 512, "width": 383 },
  { "id": 16, "file_name": "D:/z_personal_file/com/datase2coco/validation/1a229f63f9f03f91.jpg", "height": 339, "width": 512 },
  { "id": 17, "file_name": "D:/z_personal_file/com/datase2coco/validation/63e3c93d6afbdbde.jpg", "height": 342, "width": 512 },
  { "id": 18, "file_name": "D:/z_personal_file/com/datase2coco/validation/9c7e8b7f827cf707.jpg", "height": 342, "width": 512 },
  { "id": 19, "file_name": "D:/z_personal_file/com/datase2coco/validation/f337b78b051260cb.jpg", "height": 342, "width": 512 },
  { "id": 20, "file_name": "D:/z_personal_file/com/datase2coco/validation/b0a63dfc85045447.jpg", "height": 363, "width": 512 },
  { "id": 21, "file_name": "D:/z_personal_file/com/datase2coco/validation/ac64309938699c5f.jpg", "height": 384, "width": 512 },
  { "id": 22, "file_name": "D:/z_personal_file/com/datase2coco/validation/25a694ee730ba3c2.jpg", "height": 366, "width": 512 },
  { "id": 23, "file_name": "D:/z_personal_file/com/datase2coco/validation/2db95c7de25d0142.jpg", "height": 342, "width": 512 },
  { "id": 24, "file_name": "D:/z_personal_file/com/datase2coco/validation/c40a139fa37b26e6.jpg", "height": 341, "width": 512 },
  { "id": 25, "file_name": "D:/z_personal_file/com/datase2coco/validation/2f6f39b9c34f5377.jpg", "height": 366, "width": 512 },
  { "id": 26, "file_name": "D:/z_personal_file/com/datase2coco/validation/f70e3e9e09d8f672.jpg", "height": 342, "width": 512 },
  { "id": 27, "file_name": "D:/z_personal_file/com/datase2coco/validation/b502185ab84ddf0f.jpg", "height": 340, "width": 512 },
  { "id": 28, "file_name": "D:/z_personal_file/com/datase2coco/validation/4de837768914b2a0.jpg", "height": 512, "width": 371 },
  ………………
  "id": 18427, "area": 0.0002636718750000007, "iscrowd": 0, "image_id": 39044 },
  { "segmentation": [ [ 0.052083332, 0.0, 0.052083332, 0.2046875, 0.052083332, 0.409375, 0.11979166599999999, 0.409375, 0.1875, 0.409375, 0.1875, 0.2046875, 0.1875, 0.0, 0.119791666, 0.0 ] ], "bbox": [ 0.052083332, 0.409375, 0.135416668, 0.409375 ], "id": 18428, "area": 0.055436198462499996, "iscrowd": 0, "image_id": 39044 },
  { "segmentation": [ [ 0.0, 0.020876827, 0.0, 0.46033403349999996, 0.0, 0.89979124, 0.48984375, 0.89979124, 0.9796875, 0.89979124, 0.9796875, 0.4603340335, 0.9796875, 0.020876827, 0.48984375, 0.020876827 ] ], "bbox": [ 0.0, 0.89979124, 0.9796875, 0.878914413 ], "id": 18429, "area": 0.8610614639859375, "iscrowd": 0, "image_id": 15757 },
  ……………………
  { "supercategory": "d48da3382d52ca0f", "id": 214032, "name": "verification" },
  { "supercategory": "d48da3382d52ca0f", "id": 214033, "name": "verification" },
  { "supercategory": "d48da3382d52ca0f", "id": 214034, "name": "verification" },
  { "supercategory": "d48da3382d52ca0f", "id": 214035, "name": "verification" },
  { "supercategory": "d48da3382d52ca0f", "id": 214036, "name": "verification" },
  { "supercategory": "d48da3382d52ca0f", "id": 214037, "name": "verification" },
  { "supercategory": "d48f45080b3137a8", "id": 214038, "name": "verification" },
  { "supercategory": "d48f45080b3137a8", "id": 214039, "name": "verification" },
  { "supercategory": "d48f45080b3137a8", "id": 214040, "name": "verification" },
  { "supercategory": "d49336702fda8a09", "id": 214041, "name": "verification" },
  { "supercategory": "d49336702fda8a09", "id": 214042, "name": "verification" },
  { "supercategory": "d49336702fda8a09", "id": 214043, "name": "verification" },
```
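Judging from the excerpt, the annotations have problems that could make every image unusable: the annotation objects shown carry no `"category_id"`, the `"bbox"` values look normalized to [0, 1] rather than the absolute-pixel `[x, y, width, height]` that COCO expects, and every category shares the name "verification". A small sketch that flags these issues in a COCO-style file (the function name and checks are mine, not part of mmdetection):

```python
import json

# Hypothetical sanity checker for a COCO-style annotation dict.
# Flags: annotations missing "category_id", bboxes that look
# normalized to [0, 1], and duplicate category names.
def check_coco(dataset):
    problems = []
    for ann in dataset.get('annotations', []):
        if 'category_id' not in ann:
            problems.append('annotation %s has no category_id' % ann.get('id'))
        bbox = ann.get('bbox')
        if bbox and max(bbox) <= 1.0:
            problems.append('annotation %s bbox looks normalized: %s'
                            % (ann.get('id'), bbox))
    names = [c['name'] for c in dataset.get('categories', [])]
    if len(names) != len(set(names)):
        problems.append('duplicate category names in "categories"')
    return problems

# Usage (file name as in this thread):
# with open('val512.json') as f:
#     for p in check_coco(json.load(f)):
#         print(p)
```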

YCICI commented 5 years ago

And this is my config:

model settings

```python
model = dict(
    type='FasterRCNN',
    pretrained='modelzoo://resnet101',
    backbone=dict(
        type='ResNet',
        depth=101,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        style='pytorch'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_scales=[8],
        anchor_ratios=[0.5, 1.0, 2.0],
        anchor_strides=[4, 8, 16, 32, 64],
        target_means=[.0, .0, .0, .0],
        target_stds=[1.0, 1.0, 1.0, 1.0],
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
    bbox_roi_extractor=dict(
        type='SingleRoIExtractor',
        roi_layer=dict(type='RoIAlign', out_size=7, sample_num=2),
        out_channels=256,
        featmap_strides=[4, 8, 16, 32]),
    bbox_head=dict(
        type='SharedFCBBoxHead',
        num_fcs=2,
        in_channels=256,
        fc_out_channels=1024,
        roi_feat_size=7,
        num_classes=601,  ##################
        target_means=[0., 0., 0., 0.],
        target_stds=[0.1, 0.1, 0.2, 0.2],
        reg_class_agnostic=False,
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
        loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)))
```

model training and testing settings

```python
train_cfg = dict(
    rpn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.7,  ###########################################
            neg_iou_thr=0.3,
            min_pos_iou=0.3,
            ignore_iof_thr=-1),
        sampler=dict(
            type='RandomSampler',
            num=256,
            pos_fraction=0.5,
            neg_pos_ub=-1,
            add_gt_as_proposals=False),
        allowed_border=0,
        pos_weight=-1,
        debug=False),
    rpn_proposal=dict(
        nms_across_levels=False,
        nms_pre=2000,
        nms_post=2000,
        max_num=2000,
        nms_thr=0.7,
        min_bbox_size=0),
    rcnn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.5,
            neg_iou_thr=0.5,
            min_pos_iou=0.5,
            ignore_iof_thr=-1),
        sampler=dict(
            type='RandomSampler',
            num=512,
            pos_fraction=0.25,
            neg_pos_ub=-1,
            add_gt_as_proposals=True),
        pos_weight=-1,
        debug=False))
test_cfg = dict(
    rpn=dict(
        nms_across_levels=False,
        nms_pre=1000,
        nms_post=1000,
        max_num=1000,
        nms_thr=0.7,
        min_bbox_size=0),
    rcnn=dict(
        score_thr=0.05, nms=dict(type='nms', iou_thr=0.5), max_per_img=100)
    # soft-nms is also supported for rcnn testing
    # e.g., nms=dict(type='soft_nms', iou_thr=0.5, min_score=0.05)
)
```

dataset settings

```python
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
dataset_type = 'MyDataset'
data_root = 'data/openimages/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
data = dict(
    imgs_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/val512.json',
        img_prefix=data_root + 'train2019/',
        img_scale=(1024, 1024),  #######################
        img_norm_cfg=img_norm_cfg,
        size_divisor=32,
        flip_ratio=0.5,
        with_mask=False,
        with_crowd=True,
        with_label=True),
    val=dict(
        type=dataset_type,
        ann_file=data_root + '/annotation/val512.json',
        img_prefix=data_root + 'validation/',
        img_scale=(1024, 1024),  ################################
        img_norm_cfg=img_norm_cfg,
        size_divisor=32,
        flip_ratio=0,
        with_mask=False,
        with_crowd=True,
        with_label=True),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'val512.json',
        img_prefix=data_root + 'validation/',
        img_scale=(1024, 1024),  ####################################
        img_norm_cfg=img_norm_cfg,
        size_divisor=32,
        flip_ratio=0,
        with_mask=False,
        with_label=False,
        test_mode=True))
```
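One thing worth double-checking in this config: the three splits spell the annotation path three different ways (`'annotations/val512.json'`, `'/annotation/val512.json'`, which produces a double slash after `data_root`, and bare `'val512.json'`). A quick sketch to see which paths actually resolve on disk (values taken from the config above):

```python
import os.path as osp

# Check which of the three differently spelled ann_file paths exist.
data_root = 'data/openimages/'
for split, rel in [('train', 'annotations/val512.json'),
                   ('val', '/annotation/val512.json'),
                   ('test', 'val512.json')]:
    path = data_root + rel  # note: the val spelling yields a '//'
    print(split, repr(path), 'exists:', osp.isfile(path))
```

If a path does not exist or points at the wrong file, the dataset can silently end up empty, which is consistent with the concatenate error above.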

optimizer

```python
optimizer = dict(type='SGD', lr=0.0025, momentum=0.9, weight_decay=0.0001)  ############
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
```

learning policy

```python
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=1.0 / 3,
    step=[8, 11])
checkpoint_config = dict(interval=1)
```

```python
# yapf:disable
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        # dict(type='TensorboardLoggerHook')
    ])
# yapf:enable
```

runtime settings

```python
total_epochs = 12
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/faster_rcnn_r101_fpn_1x'
load_from = None
resume_from = None
workflow = [('train', 1)]
```

hellock commented 5 years ago

Please follow the Error report issue template.

YadongLau commented 4 years ago

> Please follow the Error report issue template.

Hi guys, did you solve this problem?

ZwwWayne commented 4 years ago

https://github.com/open-mmlab/mmdetection/issues/210

YadongLau commented 4 years ago

Thank you very much!
