open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

DetectorRS_HTC_R50_1x_coco error: need at least one array to concatenate #4130

Closed · itruonghai closed this issue 3 years ago

itruonghai commented 3 years ago

Here is my config. I want to train an object detector with 7 classes.

Config:
dataset_type = 'CocoDataset'
data_root = ''
classes = ('1', '2', '3', '4', '5', '6', '7')
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='LoadAnnotations', with_bbox=True, with_mask=True, with_seg=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='SegRescale', scale_factor=0.125),
    dict(type='DefaultFormatBundle'),
    dict(
        type='Collect',
        keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip', flip_ratio=0.5),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type='CocoDataset',
        classes=('1', '2', '3', '4', '5', '6', '7'),
        ann_file='/content/dataset/coco/annotations/train.json',
        img_prefix='/content/dataset/coco/train',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='LoadAnnotations',
                with_bbox=True,
                with_mask=True,
                with_seg=True),
            dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
            dict(type='RandomFlip', flip_ratio=0.5),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='SegRescale', scale_factor=0.125),
            dict(type='DefaultFormatBundle'),
            dict(
                type='Collect',
                keys=[
                    'img', 'gt_bboxes', 'gt_labels', 'gt_masks',
                    'gt_semantic_seg'
                ])
        ],
        seg_prefix='',
        data_root=''),
    val=dict(
        type='CocoDataset',
        classes=('1', '2', '3', '4', '5', '6', '7'),
        ann_file='/content/dataset/coco/annotations/val.json',
        img_prefix='/content/dataset/coco/val',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1333, 800),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip', flip_ratio=0.5),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ],
        data_root='',
        seg_prefix=''),
    test=dict(
        type='CocoDataset',
        classes=('1', '2', '3', '4', '5', '6', '7'),
        ann_file='/content/dataset/coco/annotations/test.json',
        img_prefix='/content/dataset/coco/test',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1333, 800),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip', flip_ratio=0.5),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ],
        data_root='',
        seg_prefix=''))
evaluation = dict(metric=['bbox', 'segm'])
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
    policy='step',
    warmup=None,
    warmup_iters=500,
    warmup_ratio=0.001,
    step=[8, 11])
total_epochs = 100
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = '/content/mmdetection/checkpoints/detectors_htc_r50_1x_coco-329b1453.pth'
resume_from = None
workflow = [('train', 1)]
model = dict(
    type='HybridTaskCascade',
    pretrained='torchvision://resnet50',
    backbone=dict(
        type='DetectoRS_ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch',
        conv_cfg=dict(type='ConvAWS'),
        sac=dict(type='SAC', use_deform=True),
        stage_with_sac=(False, True, True, True),
        output_img=True),
    neck=dict(
        type='RFP',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5,
        rfp_steps=2,
        aspp_out_channels=64,
        aspp_dilations=(1, 3, 6, 1),
        rfp_backbone=dict(
            rfp_inplanes=256,
            type='DetectoRS_ResNet',
            depth=50,
            num_stages=4,
            out_indices=(0, 1, 2, 3),
            frozen_stages=1,
            norm_cfg=dict(type='BN', requires_grad=True),
            norm_eval=True,
            conv_cfg=dict(type='ConvAWS'),
            sac=dict(type='SAC', use_deform=True),
            stage_with_sac=(False, True, True, True),
            pretrained='torchvision://resnet50',
            style='pytorch')),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[8],
            ratios=[0.5, 1.0, 2.0],
            strides=[4, 8, 16, 32, 64]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[0.0, 0.0, 0.0, 0.0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(
            type='SmoothL1Loss', beta=0.1111111111111111, loss_weight=1.0)),
    roi_head=dict(
        type='HybridTaskCascadeRoIHead',
        interleaved=True,
        mask_info_flow=True,
        num_stages=3,
        stage_loss_weights=[1, 0.5, 0.25],
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        bbox_head=[
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=7,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0.0, 0.0, 0.0, 0.0],
                    target_stds=[0.1, 0.1, 0.2, 0.2]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
                               loss_weight=1.0)),
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=7,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0.0, 0.0, 0.0, 0.0],
                    target_stds=[0.05, 0.05, 0.1, 0.1]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
                               loss_weight=1.0)),
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=7,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0.0, 0.0, 0.0, 0.0],
                    target_stds=[0.033, 0.033, 0.067, 0.067]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
        ],
        mask_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        mask_head=[
            dict(
                type='HTCMaskHead',
                with_conv_res=False,
                num_convs=4,
                in_channels=256,
                conv_out_channels=256,
                num_classes=80,
                loss_mask=dict(
                    type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)),
            dict(
                type='HTCMaskHead',
                num_convs=4,
                in_channels=256,
                conv_out_channels=256,
                num_classes=80,
                loss_mask=dict(
                    type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)),
            dict(
                type='HTCMaskHead',
                num_convs=4,
                in_channels=256,
                conv_out_channels=256,
                num_classes=80,
                loss_mask=dict(
                    type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))
        ],
        semantic_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[8]),
        semantic_head=dict(
            type='FusedSemanticHead',
            num_ins=5,
            fusion_level=1,
            num_convs=4,
            in_channels=256,
            conv_out_channels=256,
            num_classes=183,
            ignore_label=255,
            loss_weight=0.2)))
train_cfg = dict(
    rpn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.7,
            neg_iou_thr=0.3,
            min_pos_iou=0.3,
            ignore_iof_thr=-1),
        sampler=dict(
            type='RandomSampler',
            num=256,
            pos_fraction=0.5,
            neg_pos_ub=-1,
            add_gt_as_proposals=False),
        allowed_border=0,
        pos_weight=-1,
        debug=False),
    rpn_proposal=dict(
        nms_across_levels=False,
        nms_pre=2000,
        nms_post=2000,
        max_num=2000,
        nms_thr=0.7,
        min_bbox_size=0),
    rcnn=[
        dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.5,
                neg_iou_thr=0.5,
                min_pos_iou=0.5,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=512,
                pos_fraction=0.25,
                neg_pos_ub=-1,
                add_gt_as_proposals=True),
            mask_size=28,
            pos_weight=-1,
            debug=False),
        dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.6,
                neg_iou_thr=0.6,
                min_pos_iou=0.6,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=512,
                pos_fraction=0.25,
                neg_pos_ub=-1,
                add_gt_as_proposals=True),
            mask_size=28,
            pos_weight=-1,
            debug=False),
        dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.7,
                neg_iou_thr=0.7,
                min_pos_iou=0.7,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=512,
                pos_fraction=0.25,
                neg_pos_ub=-1,
                add_gt_as_proposals=True),
            mask_size=28,
            pos_weight=-1,
            debug=False)
    ])
test_cfg = dict(
    rpn=dict(
        nms_across_levels=False,
        nms_pre=1000,
        nms_post=1000,
        max_num=1000,
        nms_thr=0.7,
        min_bbox_size=0),
    rcnn=dict(
        score_thr=0.001,
        nms=dict(type='nms', iou_threshold=0.5),
        max_per_img=100,
        mask_thr_binary=0.5))
work_dir = './tutorial_exps'
seed = 0
gpu_ids = range(0, 1)

And the error is

loading annotations into memory...
Done (t=0.04s)
creating index...
index created!
2020-11-17 12:56:22,031 - mmdet - INFO - load model from: torchvision://resnet50
2020-11-17 12:56:22,399 - mmdet - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: fc.weight, fc.bias

missing keys in source state_dict: layer2.0.conv2.weight_diff, layer2.0.conv2.switch.weight, layer2.0.conv2.switch.bias, layer2.0.conv2.pre_context.weight, layer2.0.conv2.pre_context.bias, layer2.0.conv2.post_context.weight, layer2.0.conv2.post_context.bias, layer2.0.conv2.offset_s.weight, layer2.0.conv2.offset_s.bias, layer2.0.conv2.offset_l.weight, layer2.0.conv2.offset_l.bias, layer2.1.conv2.weight_diff, layer2.1.conv2.switch.weight, layer2.1.conv2.switch.bias, layer2.1.conv2.pre_context.weight, layer2.1.conv2.pre_context.bias, layer2.1.conv2.post_context.weight, layer2.1.conv2.post_context.bias, layer2.1.conv2.offset_s.weight, layer2.1.conv2.offset_s.bias, layer2.1.conv2.offset_l.weight, layer2.1.conv2.offset_l.bias, layer2.2.conv2.weight_diff, layer2.2.conv2.switch.weight, layer2.2.conv2.switch.bias, layer2.2.conv2.pre_context.weight, layer2.2.conv2.pre_context.bias, layer2.2.conv2.post_context.weight, layer2.2.conv2.post_context.bias, layer2.2.conv2.offset_s.weight, layer2.2.conv2.offset_s.bias, layer2.2.conv2.offset_l.weight, layer2.2.conv2.offset_l.bias, layer2.3.conv2.weight_diff, layer2.3.conv2.switch.weight, layer2.3.conv2.switch.bias, layer2.3.conv2.pre_context.weight, layer2.3.conv2.pre_context.bias, layer2.3.conv2.post_context.weight, layer2.3.conv2.post_context.bias, layer2.3.conv2.offset_s.weight, layer2.3.conv2.offset_s.bias, layer2.3.conv2.offset_l.weight, layer2.3.conv2.offset_l.bias, layer3.0.conv2.weight_diff, layer3.0.conv2.switch.weight, layer3.0.conv2.switch.bias, layer3.0.conv2.pre_context.weight, layer3.0.conv2.pre_context.bias, layer3.0.conv2.post_context.weight, layer3.0.conv2.post_context.bias, layer3.0.conv2.offset_s.weight, layer3.0.conv2.offset_s.bias, layer3.0.conv2.offset_l.weight, layer3.0.conv2.offset_l.bias, layer3.1.conv2.weight_diff, layer3.1.conv2.switch.weight, layer3.1.conv2.switch.bias, layer3.1.conv2.pre_context.weight, layer3.1.conv2.pre_context.bias, layer3.1.conv2.post_context.weight, layer3.1.conv2.post_context.bias, layer3.1.conv2.offset_s.weight, layer3.1.conv2.offset_s.bias, layer3.1.conv2.offset_l.weight, layer3.1.conv2.offset_l.bias, layer3.2.conv2.weight_diff, layer3.2.conv2.switch.weight, layer3.2.conv2.switch.bias, layer3.2.conv2.pre_context.weight, layer3.2.conv2.pre_context.bias, layer3.2.conv2.post_context.weight, layer3.2.conv2.post_context.bias, layer3.2.conv2.offset_s.weight, layer3.2.conv2.offset_s.bias, layer3.2.conv2.offset_l.weight, layer3.2.conv2.offset_l.bias, layer3.3.conv2.weight_diff, layer3.3.conv2.switch.weight, layer3.3.conv2.switch.bias, layer3.3.conv2.pre_context.weight, layer3.3.conv2.pre_context.bias, layer3.3.conv2.post_context.weight, layer3.3.conv2.post_context.bias, layer3.3.conv2.offset_s.weight, layer3.3.conv2.offset_s.bias, layer3.3.conv2.offset_l.weight, layer3.3.conv2.offset_l.bias, layer3.4.conv2.weight_diff, layer3.4.conv2.switch.weight, layer3.4.conv2.switch.bias, layer3.4.conv2.pre_context.weight, layer3.4.conv2.pre_context.bias, layer3.4.conv2.post_context.weight, layer3.4.conv2.post_context.bias, layer3.4.conv2.offset_s.weight, layer3.4.conv2.offset_s.bias, layer3.4.conv2.offset_l.weight, layer3.4.conv2.offset_l.bias, layer3.5.conv2.weight_diff, layer3.5.conv2.switch.weight, layer3.5.conv2.switch.bias, layer3.5.conv2.pre_context.weight, layer3.5.conv2.pre_context.bias, layer3.5.conv2.post_context.weight, layer3.5.conv2.post_context.bias, layer3.5.conv2.offset_s.weight, layer3.5.conv2.offset_s.bias, layer3.5.conv2.offset_l.weight, layer3.5.conv2.offset_l.bias, layer4.0.conv2.weight_diff, 
layer4.0.conv2.switch.weight, layer4.0.conv2.switch.bias, layer4.0.conv2.pre_context.weight, layer4.0.conv2.pre_context.bias, layer4.0.conv2.post_context.weight, layer4.0.conv2.post_context.bias, layer4.0.conv2.offset_s.weight, layer4.0.conv2.offset_s.bias, layer4.0.conv2.offset_l.weight, layer4.0.conv2.offset_l.bias, layer4.1.conv2.weight_diff, layer4.1.conv2.switch.weight, layer4.1.conv2.switch.bias, layer4.1.conv2.pre_context.weight, layer4.1.conv2.pre_context.bias, layer4.1.conv2.post_context.weight, layer4.1.conv2.post_context.bias, layer4.1.conv2.offset_s.weight, layer4.1.conv2.offset_s.bias, layer4.1.conv2.offset_l.weight, layer4.1.conv2.offset_l.bias, layer4.2.conv2.weight_diff, layer4.2.conv2.switch.weight, layer4.2.conv2.switch.bias, layer4.2.conv2.pre_context.weight, layer4.2.conv2.pre_context.bias, layer4.2.conv2.post_context.weight, layer4.2.conv2.post_context.bias, layer4.2.conv2.offset_s.weight, layer4.2.conv2.offset_s.bias, layer4.2.conv2.offset_l.weight, layer4.2.conv2.offset_l.bias

2020-11-17 12:56:22,769 - mmdet - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: fc.weight, fc.bias

missing keys in source state_dict: layer2.0.conv2.weight_diff, layer2.0.conv2.switch.weight, layer2.0.conv2.switch.bias, layer2.0.conv2.pre_context.weight, layer2.0.conv2.pre_context.bias, layer2.0.conv2.post_context.weight, layer2.0.conv2.post_context.bias, layer2.0.conv2.offset_s.weight, layer2.0.conv2.offset_s.bias, layer2.0.conv2.offset_l.weight, layer2.0.conv2.offset_l.bias, layer2.0.rfp_conv.weight, layer2.0.rfp_conv.bias, layer2.1.conv2.weight_diff, layer2.1.conv2.switch.weight, layer2.1.conv2.switch.bias, layer2.1.conv2.pre_context.weight, layer2.1.conv2.pre_context.bias, layer2.1.conv2.post_context.weight, layer2.1.conv2.post_context.bias, layer2.1.conv2.offset_s.weight, layer2.1.conv2.offset_s.bias, layer2.1.conv2.offset_l.weight, layer2.1.conv2.offset_l.bias, layer2.2.conv2.weight_diff, layer2.2.conv2.switch.weight, layer2.2.conv2.switch.bias, layer2.2.conv2.pre_context.weight, layer2.2.conv2.pre_context.bias, layer2.2.conv2.post_context.weight, layer2.2.conv2.post_context.bias, layer2.2.conv2.offset_s.weight, layer2.2.conv2.offset_s.bias, layer2.2.conv2.offset_l.weight, layer2.2.conv2.offset_l.bias, layer2.3.conv2.weight_diff, layer2.3.conv2.switch.weight, layer2.3.conv2.switch.bias, layer2.3.conv2.pre_context.weight, layer2.3.conv2.pre_context.bias, layer2.3.conv2.post_context.weight, layer2.3.conv2.post_context.bias, layer2.3.conv2.offset_s.weight, layer2.3.conv2.offset_s.bias, layer2.3.conv2.offset_l.weight, layer2.3.conv2.offset_l.bias, layer3.0.conv2.weight_diff, layer3.0.conv2.switch.weight, layer3.0.conv2.switch.bias, layer3.0.conv2.pre_context.weight, layer3.0.conv2.pre_context.bias, layer3.0.conv2.post_context.weight, layer3.0.conv2.post_context.bias, layer3.0.conv2.offset_s.weight, layer3.0.conv2.offset_s.bias, layer3.0.conv2.offset_l.weight, layer3.0.conv2.offset_l.bias, layer3.0.rfp_conv.weight, layer3.0.rfp_conv.bias, layer3.1.conv2.weight_diff, layer3.1.conv2.switch.weight, layer3.1.conv2.switch.bias, layer3.1.conv2.pre_context.weight, layer3.1.conv2.pre_context.bias, layer3.1.conv2.post_context.weight, layer3.1.conv2.post_context.bias, layer3.1.conv2.offset_s.weight, layer3.1.conv2.offset_s.bias, layer3.1.conv2.offset_l.weight, layer3.1.conv2.offset_l.bias, layer3.2.conv2.weight_diff, layer3.2.conv2.switch.weight, layer3.2.conv2.switch.bias, layer3.2.conv2.pre_context.weight, layer3.2.conv2.pre_context.bias, layer3.2.conv2.post_context.weight, layer3.2.conv2.post_context.bias, layer3.2.conv2.offset_s.weight, layer3.2.conv2.offset_s.bias, layer3.2.conv2.offset_l.weight, layer3.2.conv2.offset_l.bias, layer3.3.conv2.weight_diff, layer3.3.conv2.switch.weight, layer3.3.conv2.switch.bias, layer3.3.conv2.pre_context.weight, layer3.3.conv2.pre_context.bias, layer3.3.conv2.post_context.weight, layer3.3.conv2.post_context.bias, layer3.3.conv2.offset_s.weight, layer3.3.conv2.offset_s.bias, layer3.3.conv2.offset_l.weight, layer3.3.conv2.offset_l.bias, layer3.4.conv2.weight_diff, layer3.4.conv2.switch.weight, layer3.4.conv2.switch.bias, layer3.4.conv2.pre_context.weight, layer3.4.conv2.pre_context.bias, layer3.4.conv2.post_context.weight, layer3.4.conv2.post_context.bias, layer3.4.conv2.offset_s.weight, layer3.4.conv2.offset_s.bias, layer3.4.conv2.offset_l.weight, layer3.4.conv2.offset_l.bias, layer3.5.conv2.weight_diff, layer3.5.conv2.switch.weight, layer3.5.conv2.switch.bias, layer3.5.conv2.pre_context.weight, layer3.5.conv2.pre_context.bias, layer3.5.conv2.post_context.weight, layer3.5.conv2.post_context.bias, layer3.5.conv2.offset_s.weight, layer3.5.conv2.offset_s.bias, 
layer3.5.conv2.offset_l.weight, layer3.5.conv2.offset_l.bias, layer4.0.conv2.weight_diff, layer4.0.conv2.switch.weight, layer4.0.conv2.switch.bias, layer4.0.conv2.pre_context.weight, layer4.0.conv2.pre_context.bias, layer4.0.conv2.post_context.weight, layer4.0.conv2.post_context.bias, layer4.0.conv2.offset_s.weight, layer4.0.conv2.offset_s.bias, layer4.0.conv2.offset_l.weight, layer4.0.conv2.offset_l.bias, layer4.0.rfp_conv.weight, layer4.0.rfp_conv.bias, layer4.1.conv2.weight_diff, layer4.1.conv2.switch.weight, layer4.1.conv2.switch.bias, layer4.1.conv2.pre_context.weight, layer4.1.conv2.pre_context.bias, layer4.1.conv2.post_context.weight, layer4.1.conv2.post_context.bias, layer4.1.conv2.offset_s.weight, layer4.1.conv2.offset_s.bias, layer4.1.conv2.offset_l.weight, layer4.1.conv2.offset_l.bias, layer4.2.conv2.weight_diff, layer4.2.conv2.switch.weight, layer4.2.conv2.switch.bias, layer4.2.conv2.pre_context.weight, layer4.2.conv2.pre_context.bias, layer4.2.conv2.post_context.weight, layer4.2.conv2.post_context.bias, layer4.2.conv2.offset_s.weight, layer4.2.conv2.offset_s.bias, layer4.2.conv2.offset_l.weight, layer4.2.conv2.offset_l.bias

('1', '2', '3', '4', '5', '6', '7')
2020-11-17 12:56:23,268 - mmdet - INFO - load checkpoint from /content/mmdetection/checkpoints/detectors_htc_r50_1x_coco-329b1453.pth
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
2020-11-17 12:56:23,784 - mmdet - WARNING - The model and loaded state dict do not match exactly

size mismatch for roi_head.bbox_head.0.fc_cls.weight: copying a param with shape torch.Size([81, 1024]) from checkpoint, the shape in current model is torch.Size([8, 1024]).
size mismatch for roi_head.bbox_head.0.fc_cls.bias: copying a param with shape torch.Size([81]) from checkpoint, the shape in current model is torch.Size([8]).
size mismatch for roi_head.bbox_head.1.fc_cls.weight: copying a param with shape torch.Size([81, 1024]) from checkpoint, the shape in current model is torch.Size([8, 1024]).
size mismatch for roi_head.bbox_head.1.fc_cls.bias: copying a param with shape torch.Size([81]) from checkpoint, the shape in current model is torch.Size([8]).
size mismatch for roi_head.bbox_head.2.fc_cls.weight: copying a param with shape torch.Size([81, 1024]) from checkpoint, the shape in current model is torch.Size([8, 1024]).
size mismatch for roi_head.bbox_head.2.fc_cls.bias: copying a param with shape torch.Size([81]) from checkpoint, the shape in current model is torch.Size([8]).
2020-11-17 12:56:23,796 - mmdet - INFO - Start running, host: root@42fd9cfac423, work_dir: /content/tutorial_exps
2020-11-17 12:56:23,797 - mmdet - INFO - workflow: [('train', 1)], max: 100 epochs
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-110-e26cf4febde3> in <module>()
     21 # Create work_dir
     22 mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
---> 23 train_detector(model, datasets, cfg, distributed=False, validate=True)

8 frames
/content/mmdetection/mmdet/datasets/samplers/group_sampler.py in __iter__(self)
     34                 [indice, np.random.choice(indice, num_extra)])
     35             indices.append(indice)
---> 36             print(indice)
     37         indices = np.concatenate(indices)
     38         indices = [

<__array_function__ internals> in concatenate(*args, **kwargs)

ValueError: need at least one array to concatenate

I have also read a lot about this issue, but I can't work out the key idea needed to solve it. I also checked my annotations and they look fine (I have trained other object detectors with them without problems). Here is the annotation file:

{
  "annotations": [
    {
      "area": 342,
      "bbox": [
        880,
        333,
        19,
        18
      ],
      "category_id": 2,
      "id": 0,
      "image_id": 3,
      "iscrowd": 0,
      "segmentation": []
    },
    {
      "area": 255,
      "bbox": [
        781,
        337,
        17,
        15
      ],
      "category_id": 6,
      "id": 3,
      "image_id": 6,
      "iscrowd": 0,
      "segmentation": []
    },
CuongNN218 commented 3 years ago

@itruonghai maybe the class names in your dataset (the category names in the annotation file) should match the class names in the config file.
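
A quick way to verify this (a minimal sketch, not part of the original thread; it only reuses the annotation path and classes tuple from the config above) is to compare the category names in the COCO annotation file with the classes in the config. If none of them match, CocoDataset filters out every image, GroupSampler ends up calling np.concatenate on an empty list, and you get exactly the "need at least one array to concatenate" error shown above:

import json

# Path and classes tuple copied from the config above.
ann_file = '/content/dataset/coco/annotations/train.json'
classes = ('1', '2', '3', '4', '5', '6', '7')

with open(ann_file) as f:
    coco = json.load(f)

cat_names = [c['name'] for c in coco.get('categories', [])]
print('categories in annotation file:', cat_names)
print('classes in config:            ', list(classes))

# Any config class missing from the annotation categories means those images are
# filtered out during training, which can leave the sampler with nothing to yield.
missing = [name for name in classes if name not in cat_names]
if missing:
    print('config classes not found in annotations:', missing)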

itruonghai commented 3 years ago

@cuong12ly1 thank you, that fixed it. But now I am facing another error:

IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/content/mmdetection/mmdet/datasets/custom.py", line 191, in __getitem__
    data = self.prepare_train_img(idx)
  File "/content/mmdetection/mmdet/datasets/custom.py", line 214, in prepare_train_img
    return self.pipeline(results)
  File "/content/mmdetection/mmdet/datasets/pipelines/compose.py", line 40, in __call__
    data = t(data)
  File "/content/mmdetection/mmdet/datasets/pipelines/loading.py", line 371, in __call__
    results = self._load_masks(results)
  File "/content/mmdetection/mmdet/datasets/pipelines/loading.py", line 323, in _load_masks
    [self._poly2mask(mask, h, w) for mask in gt_masks], h, w)
  File "/content/mmdetection/mmdet/datasets/pipelines/loading.py", line 323, in <listcomp>
    [self._poly2mask(mask, h, w) for mask in gt_masks], h, w)
  File "/content/mmdetection/mmdet/datasets/pipelines/loading.py", line 279, in _poly2mask
    rles = maskUtils.frPyObjects(mask_ann, img_h, img_w)
  File "pycocotools/_mask.pyx", line 292, in pycocotools._mask.frPyObjects
IndexError: list index out of range

I only want to do detection, not segmentation, yet the error is about masks. I also tried deleting the whole "segmentation head", but it did not work. Please help me.
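
For context (my own reading of the traceback and of the annotation snippet above, not a statement from the thread): the annotations contain empty "segmentation": [] lists, and with with_mask=True the pipeline passes each of those empty lists to pycocotools, which indexes the first polygon and raises exactly this IndexError. A minimal reproduction, assuming pycocotools is installed and using an arbitrary image size:

from pycocotools import mask as maskUtils

img_h, img_w = 800, 1333          # arbitrary illustrative image size
empty_segmentation = []           # as in the annotation file: "segmentation": []

try:
    # mmdet's _load_masks ends up calling this for each instance's segmentation
    maskUtils.frPyObjects(empty_segmentation, img_h, img_w)
except IndexError as err:
    print('IndexError:', err)     # "list index out of range", matching the traceback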

v-qjqs commented 3 years ago

Hi, I notice you didn't change all of the 'num_classes' values to match your own dataset. The 'num_classes' in 'HTCMaskHead' and 'FusedSemanticHead' are still the default COCO settings; you did not change them for your own dataset.
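
For reference, here is a sketch of the fields being pointed at, written as a partial override in the style of the config above. It is my assumption of what the change would look like, not a verified fix: num_classes=7 in the mask heads matches the 7-class dataset, while the value for 'FusedSemanticHead' has to match the labels in your own semantic segmentation maps (183 is the COCO-stuff setting), so the number below is only a placeholder.

model = dict(
    roi_head=dict(
        mask_head=[
            dict(type='HTCMaskHead', with_conv_res=False, num_convs=4,
                 in_channels=256, conv_out_channels=256,
                 num_classes=7,   # was 80 (COCO default)
                 loss_mask=dict(type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)),
            dict(type='HTCMaskHead', num_convs=4,
                 in_channels=256, conv_out_channels=256,
                 num_classes=7,   # was 80
                 loss_mask=dict(type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)),
            dict(type='HTCMaskHead', num_convs=4,
                 in_channels=256, conv_out_channels=256,
                 num_classes=7,   # was 80
                 loss_mask=dict(type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))
        ],
        semantic_head=dict(
            type='FusedSemanticHead',
            num_ins=5,
            fusion_level=1,
            num_convs=4,
            in_channels=256,
            conv_out_channels=256,
            num_classes=8,        # placeholder; must match your own semantic seg labels
            ignore_label=255,
            loss_weight=0.2)))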

itruonghai commented 3 years ago

Hi, I notice you didn't change all of the 'num_classes' values to match your own dataset. The 'num_classes' in 'HTCMaskHead' and 'FusedSemanticHead' are still the default COCO settings; you did not change them for your own dataset.

Because I thought I was only training the detector, not the segmentation part?

v-qjqs commented 3 years ago

So I guess the semantic segmentation related properties should not be included in your config file.
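
If the goal really is box-only detection, one possible direction (a sketch based on my own assumptions, not a fix confirmed in this thread) is to strip every mask and semantic-segmentation item from the data pipeline and evaluate boxes only, and to start from a detector config without mask or semantic heads rather than the HTC-based one (mmdetection also ships a DetectoRS Cascade R-CNN config that, as far as I recall, has no mask branch):

# Detection-only sketch (assumption, not a verified recipe from this thread):
# load only boxes and drop every mask / semantic key from the training pipeline.
# The model would also need mask_head, semantic_head, semantic_roi_extractor,
# mask_size, SegRescale and seg_prefix removed from the config.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),   # no with_mask / with_seg
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize',
         mean=[123.675, 116.28, 103.53],
         std=[58.395, 57.12, 57.375],
         to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])  # no masks / seg
]
evaluation = dict(metric=['bbox'])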