VDIGPKU / CBNetV2

[TIP 2022] CBNetV2: A Composite Backbone Network Architecture for Object Detection
Apache License 2.0

FileNotFoundError: [Errno 2] No such file or directory: 'data/coco/stuffthingmaps/train2017/xyz.png' #27


Resham-Sundar commented 3 years ago

I have searched related issues but could not find the expected help.

I want to train Improved HTC with DB-Swin-L as the backbone on my custom instance segmentation dataset, but I am running into the error above. Since it is an instance segmentation dataset, I do not have stuffthingmaps. How should I go about this?

I get the following output when training on Google Colab:

2021-08-03 18:28:25,774 - mmdet - INFO - Environment info:

sys.platform: linux
Python: 3.7.11 (default, Jul 3 2021, 18:01:19) [GCC 7.5.0]
CUDA available: True
GPU 0: Tesla T4
CUDA_HOME: /usr/local/cuda
NVCC: Build cuda_11.0_bu.TC445_37.28845127_0
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.9.0+cu102
PyTorch compiling details: PyTorch built with:

TorchVision: 0.10.0+cu102
OpenCV: 4.1.2
MMCV: 1.3.9
MMCV Compiler: GCC 7.5
MMCV CUDA Compiler: 11.0
MMDetection: 2.14.0+900f7bd

2021-08-03 18:28:26,338 - mmdet - INFO - Distributed training: False
2021-08-03 18:28:26,893 - mmdet - INFO - Config:
model = dict( type='HybridTaskCascade', pretrained=None,
    backbone=dict( type='CBSwinTransformer', embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48], window_size=7, mlp_ratio=4.0, qkv_bias=True, qk_scale=None, drop_rate=0.0, attn_drop_rate=0.0, drop_path_rate=0.2, ape=False, patch_norm=True, out_indices=(0, 1, 2, 3), use_checkpoint=False),
    neck=dict( type='CBFPN', in_channels=[192, 384, 768, 1536], out_channels=256, num_outs=5),
    rpn_head=dict( type='RPNHead', in_channels=256, feat_channels=256, anchor_generator=dict( type='AnchorGenerator', scales=[8], ratios=[0.5, 1.0, 2.0], strides=[4, 8, 16, 32, 64]), bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[0.0, 0.0, 0.0, 0.0], target_stds=[1.0, 1.0, 1.0, 1.0]), loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), loss_bbox=dict( type='SmoothL1Loss', beta=0.1111111111111111, loss_weight=1.0)),
    roi_head=dict( type='HybridTaskCascadeRoIHead', interleaved=True, mask_info_flow=True, num_stages=3, stage_loss_weights=[1, 0.5, 0.25],
        bbox_roi_extractor=dict( type='SingleRoIExtractor', roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), out_channels=256, featmap_strides=[4, 8, 16, 32]),
        bbox_head=[
            dict( type='Shared2FCBBoxHead', in_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=3, bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[0.0, 0.0, 0.0, 0.0], target_stds=[0.1, 0.1, 0.2, 0.2]), reg_class_agnostic=True, loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
            dict( type='Shared2FCBBoxHead', in_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=3, bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[0.0, 0.0, 0.0, 0.0], target_stds=[0.05, 0.05, 0.1, 0.1]), reg_class_agnostic=True, loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
            dict( type='Shared2FCBBoxHead', in_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=3, bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[0.0, 0.0, 0.0, 0.0], target_stds=[0.033, 0.033, 0.067, 0.067]), reg_class_agnostic=True, loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) ],
        mask_roi_extractor=dict( type='SingleRoIExtractor', roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), out_channels=256, featmap_strides=[4, 8, 16, 32]),
        mask_head=[
            dict( type='HTCMaskHead', with_conv_res=False, num_convs=4, in_channels=256, conv_out_channels=256, num_classes=3, loss_mask=dict( type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)),
            dict( type='HTCMaskHead', num_convs=4, in_channels=256, conv_out_channels=256, num_classes=3, loss_mask=dict( type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)),
            dict( type='HTCMaskHead', num_convs=4, in_channels=256, conv_out_channels=256, num_classes=3, loss_mask=dict( type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)) ],
        semantic_roi_extractor=dict( type='SingleRoIExtractor', roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), out_channels=256, featmap_strides=[8]),
        semantic_head=dict( type='FusedSemanticHead', num_ins=5, fusion_level=1, num_convs=4, in_channels=256, conv_out_channels=256, num_classes=183, ignore_label=255, loss_weight=0.2)),
    train_cfg=dict(
        rpn=dict( assigner=dict( type='MaxIoUAssigner', pos_iou_thr=0.7, neg_iou_thr=0.3, min_pos_iou=0.3, ignore_iof_thr=-1), sampler=dict( type='RandomSampler', num=256, pos_fraction=0.5, neg_pos_ub=-1, add_gt_as_proposals=False), allowed_border=0, pos_weight=-1, debug=False),
        rpn_proposal=dict( nms_pre=2000, max_per_img=2000, nms=dict(type='nms', iou_threshold=0.7), min_bbox_size=0),
        rcnn=[
            dict( assigner=dict( type='MaxIoUAssigner', pos_iou_thr=0.5, neg_iou_thr=0.5, min_pos_iou=0.5, ignore_iof_thr=-1), sampler=dict( type='RandomSampler', num=512, pos_fraction=0.25, neg_pos_ub=-1, add_gt_as_proposals=True), mask_size=28, pos_weight=-1, debug=False),
            dict( assigner=dict( type='MaxIoUAssigner', pos_iou_thr=0.6, neg_iou_thr=0.6, min_pos_iou=0.6, ignore_iof_thr=-1), sampler=dict( type='RandomSampler', num=512, pos_fraction=0.25, neg_pos_ub=-1, add_gt_as_proposals=True), mask_size=28, pos_weight=-1, debug=False),
            dict( assigner=dict( type='MaxIoUAssigner', pos_iou_thr=0.7, neg_iou_thr=0.7, min_pos_iou=0.7, ignore_iof_thr=-1), sampler=dict( type='RandomSampler', num=512, pos_fraction=0.25, neg_pos_ub=-1, add_gt_as_proposals=True), mask_size=28, pos_weight=-1, debug=False) ]),
    test_cfg=dict( rpn=dict( nms_pre=1000, max_per_img=1000, nms=dict(type='nms', iou_threshold=0.7), min_bbox_size=0), rcnn=dict( score_thr=0.001, nms=dict(type='soft_nms', iou_threshold=0.5), max_per_img=100, mask_thr_binary=0.5)))
dataset_type = 'COCODataset'
data_root = 'data/coco/'
img_norm_cfg = dict( mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [ dict(type='LoadImageFromFile'), dict( type='LoadAnnotations', with_bbox=True, with_mask=True, with_seg=True), dict( type='Resize', img_scale=[(1600, 400), (1600, 1400)], multiscale_mode='range', keep_ratio=True), dict(type='RandomFlip', flip_ratio=0.5), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='SegRescale', scale_factor=0.125), dict(type='DefaultFormatBundle'), dict( type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg']) ]
test_pipeline = [ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(1600, 1400), flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']) ]) ]
data = dict( samples_per_gpu=1, workers_per_gpu=1,
    train=dict( type='CocoDataset', ann_file='data/trainval.json', img_prefix='data/images/', pipeline=[ dict(type='LoadImageFromFile'), dict( type='LoadAnnotations', with_bbox=True, with_mask=True, with_seg=True), dict( type='Resize', img_scale=[(1600, 400), (1600, 1400)], multiscale_mode='range', keep_ratio=True), dict(type='RandomFlip', flip_ratio=0.5), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='SegRescale', scale_factor=0.125), dict(type='DefaultFormatBundle'), dict( type='Collect', keys=[ 'img', 'gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg' ]) ], seg_prefix='data/coco/stuffthingmaps/train2017/', classes=('date', 'fig', 'hazelnut')),
    val=dict( type='CocoDataset', ann_file='data/trainval.json', img_prefix='data/images/', pipeline=[ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(1600, 1400), flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']) ]) ], classes=('date', 'fig', 'hazelnut')),
    test=dict( type='CocoDataset', ann_file='data/trainval.json', img_prefix='data/images/', pipeline=[ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(1600, 1400), flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']) ]) ], classes=('date', 'fig', 'hazelnut')))
evaluation = dict(metric=['bbox', 'segm'])
optimizer = dict( type='AdamW', lr=5e-05, betas=(0.9, 0.999), weight_decay=0.05, paramwise_cfg=dict( custom_keys=dict( absolute_pos_embed=dict(decay_mult=0.0), relative_position_bias_table=dict(decay_mult=0.0), norm=dict(decay_mult=0.0))))
optimizer_config = dict(grad_clip=None)
lr_config = dict( policy='step', warmup='linear', warmup_iters=500, warmup_ratio=0.001, step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12)
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
custom_hooks = [dict(type='NumClassCheckHook')]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = 'htc_cbv2_swin_large22k_patch4_window7_mstrain_400-1400_giou_4conv1f_adamw_1x_coco.pth'
resume_from = None
workflow = [('train', 1)]
samples_per_gpu = 1
classes = ('date', 'fig', 'hazelnut')
work_dir = './work_dirs/nuts'
gpu_ids = range(0, 1)

/content/CBNetV2/mmdet/core/anchor/builder.py:16: UserWarning: build_anchor_generator would be deprecated soon, please use build_prior_generator
  'build_anchor_generator would be deprecated soon, please use '
loading annotations into memory...
Done (t=0.01s)
creating index...
index created!
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
2021-08-03 18:28:38,797 - mmdet - INFO - load checkpoint from htc_cbv2_swin_large22k_patch4_window7_mstrain_400-1400_giou_4conv1f_adamw_1x_coco.pth
2021-08-03 18:28:38,798 - mmdet - INFO - Use load_from_local loader
2021-08-03 18:29:29,361 - mmdet - WARNING - The model and loaded state dict do not match exactly

size mismatch for roi_head.bbox_head.0.fc_cls.weight: copying a param with shape torch.Size([81, 1024]) from checkpoint, the shape in current model is torch.Size([4, 1024]).
size mismatch for roi_head.bbox_head.0.fc_cls.bias: copying a param with shape torch.Size([81]) from checkpoint, the shape in current model is torch.Size([4]).
size mismatch for roi_head.bbox_head.1.fc_cls.weight: copying a param with shape torch.Size([81, 1024]) from checkpoint, the shape in current model is torch.Size([4, 1024]).
size mismatch for roi_head.bbox_head.1.fc_cls.bias: copying a param with shape torch.Size([81]) from checkpoint, the shape in current model is torch.Size([4]).
size mismatch for roi_head.bbox_head.2.fc_cls.weight: copying a param with shape torch.Size([81, 1024]) from checkpoint, the shape in current model is torch.Size([4, 1024]).
size mismatch for roi_head.bbox_head.2.fc_cls.bias: copying a param with shape torch.Size([81]) from checkpoint, the shape in current model is torch.Size([4]).
size mismatch for roi_head.mask_head.0.conv_logits.weight: copying a param with shape torch.Size([80, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 256, 1, 1]).
size mismatch for roi_head.mask_head.0.conv_logits.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([3]).
size mismatch for roi_head.mask_head.1.conv_logits.weight: copying a param with shape torch.Size([80, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 256, 1, 1]).
size mismatch for roi_head.mask_head.1.conv_logits.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([3]).
size mismatch for roi_head.mask_head.2.conv_logits.weight: copying a param with shape torch.Size([80, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 256, 1, 1]).
size mismatch for roi_head.mask_head.2.conv_logits.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([3]).
unexpected key in source state_dict: roi_head.bbox_head.0.shared_convs.0.conv.weight, roi_head.bbox_head.0.shared_convs.0.bn.weight, roi_head.bbox_head.0.shared_convs.0.bn.bias, roi_head.bbox_head.0.shared_convs.0.bn.running_mean, roi_head.bbox_head.0.shared_convs.0.bn.running_var, roi_head.bbox_head.0.shared_convs.0.bn.num_batches_tracked, roi_head.bbox_head.0.shared_convs.1.conv.weight, roi_head.bbox_head.0.shared_convs.1.bn.weight, roi_head.bbox_head.0.shared_convs.1.bn.bias, roi_head.bbox_head.0.shared_convs.1.bn.running_mean, roi_head.bbox_head.0.shared_convs.1.bn.running_var, roi_head.bbox_head.0.shared_convs.1.bn.num_batches_tracked, roi_head.bbox_head.0.shared_convs.2.conv.weight, roi_head.bbox_head.0.shared_convs.2.bn.weight, roi_head.bbox_head.0.shared_convs.2.bn.bias, roi_head.bbox_head.0.shared_convs.2.bn.running_mean, roi_head.bbox_head.0.shared_convs.2.bn.running_var, roi_head.bbox_head.0.shared_convs.2.bn.num_batches_tracked, roi_head.bbox_head.0.shared_convs.3.conv.weight, roi_head.bbox_head.0.shared_convs.3.bn.weight, roi_head.bbox_head.0.shared_convs.3.bn.bias, roi_head.bbox_head.0.shared_convs.3.bn.running_mean, roi_head.bbox_head.0.shared_convs.3.bn.running_var, roi_head.bbox_head.0.shared_convs.3.bn.num_batches_tracked, roi_head.bbox_head.1.shared_convs.0.conv.weight, roi_head.bbox_head.1.shared_convs.0.bn.weight, roi_head.bbox_head.1.shared_convs.0.bn.bias, roi_head.bbox_head.1.shared_convs.0.bn.running_mean, roi_head.bbox_head.1.shared_convs.0.bn.running_var, roi_head.bbox_head.1.shared_convs.0.bn.num_batches_tracked, roi_head.bbox_head.1.shared_convs.1.conv.weight, roi_head.bbox_head.1.shared_convs.1.bn.weight, roi_head.bbox_head.1.shared_convs.1.bn.bias, roi_head.bbox_head.1.shared_convs.1.bn.running_mean, roi_head.bbox_head.1.shared_convs.1.bn.running_var, roi_head.bbox_head.1.shared_convs.1.bn.num_batches_tracked, roi_head.bbox_head.1.shared_convs.2.conv.weight, roi_head.bbox_head.1.shared_convs.2.bn.weight, roi_head.bbox_head.1.shared_convs.2.bn.bias, roi_head.bbox_head.1.shared_convs.2.bn.running_mean, roi_head.bbox_head.1.shared_convs.2.bn.running_var, roi_head.bbox_head.1.shared_convs.2.bn.num_batches_tracked, roi_head.bbox_head.1.shared_convs.3.conv.weight, roi_head.bbox_head.1.shared_convs.3.bn.weight, roi_head.bbox_head.1.shared_convs.3.bn.bias, roi_head.bbox_head.1.shared_convs.3.bn.running_mean, roi_head.bbox_head.1.shared_convs.3.bn.running_var, roi_head.bbox_head.1.shared_convs.3.bn.num_batches_tracked, roi_head.bbox_head.2.shared_convs.0.conv.weight, roi_head.bbox_head.2.shared_convs.0.bn.weight, roi_head.bbox_head.2.shared_convs.0.bn.bias, roi_head.bbox_head.2.shared_convs.0.bn.running_mean, roi_head.bbox_head.2.shared_convs.0.bn.running_var, roi_head.bbox_head.2.shared_convs.0.bn.num_batches_tracked, roi_head.bbox_head.2.shared_convs.1.conv.weight, roi_head.bbox_head.2.shared_convs.1.bn.weight, roi_head.bbox_head.2.shared_convs.1.bn.bias, roi_head.bbox_head.2.shared_convs.1.bn.running_mean, roi_head.bbox_head.2.shared_convs.1.bn.running_var, roi_head.bbox_head.2.shared_convs.1.bn.num_batches_tracked, roi_head.bbox_head.2.shared_convs.2.conv.weight, roi_head.bbox_head.2.shared_convs.2.bn.weight, roi_head.bbox_head.2.shared_convs.2.bn.bias, roi_head.bbox_head.2.shared_convs.2.bn.running_mean, roi_head.bbox_head.2.shared_convs.2.bn.running_var, roi_head.bbox_head.2.shared_convs.2.bn.num_batches_tracked, roi_head.bbox_head.2.shared_convs.3.conv.weight, roi_head.bbox_head.2.shared_convs.3.bn.weight, roi_head.bbox_head.2.shared_convs.3.bn.bias, 
roi_head.bbox_head.2.shared_convs.3.bn.running_mean, roi_head.bbox_head.2.shared_convs.3.bn.running_var, roi_head.bbox_head.2.shared_convs.3.bn.num_batches_tracked

missing keys in source state_dict: roi_head.bbox_head.0.shared_fcs.1.weight, roi_head.bbox_head.0.shared_fcs.1.bias, roi_head.bbox_head.1.shared_fcs.1.weight, roi_head.bbox_head.1.shared_fcs.1.bias, roi_head.bbox_head.2.shared_fcs.1.weight, roi_head.bbox_head.2.shared_fcs.1.bias

2021-08-03 18:29:29,409 - mmdet - INFO - Start running, host: root@d8f8e57ec13b, work_dir: /content/CBNetV2/work_dirs/nuts
2021-08-03 18:29:29,409 - mmdet - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) CheckpointHook
(NORMAL ) EvalHook
(VERY_LOW ) TextLoggerHook


before_train_epoch:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) EvalHook
(NORMAL ) NumClassCheckHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook


before_train_iter:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) EvalHook
(LOW ) IterTimerHook


after_train_iter:
(ABOVE_NORMAL) OptimizerHook
(NORMAL ) CheckpointHook
(NORMAL ) EvalHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook


after_train_epoch:
(NORMAL ) CheckpointHook
(NORMAL ) EvalHook
(VERY_LOW ) TextLoggerHook


before_val_epoch:
(NORMAL ) NumClassCheckHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook


before_val_iter:
(LOW ) IterTimerHook


after_val_iter:
(LOW ) IterTimerHook


after_val_epoch:
(VERY_LOW ) TextLoggerHook


2021-08-03 18:29:29,409 - mmdet - INFO - workflow: [('train', 1)], max: 12 epochs
Traceback (most recent call last):
  File "tools/train.py", line 188, in <module>
    main()
  File "tools/train.py", line 184, in main
    meta=meta)
  File "/content/CBNetV2/mmdet/apis/train.py", line 185, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/usr/local/lib/python3.7/dist-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/mmcv/runner/epoch_based_runner.py", line 47, in train
    for i, data_batch in enumerate(self.data_loader):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 425, in reraise
    raise self.exc_type(msg)
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/content/CBNetV2/mmdet/datasets/custom.py", line 194, in __getitem__
    data = self.prepare_train_img(idx)
  File "/content/CBNetV2/mmdet/datasets/custom.py", line 217, in prepare_train_img
    return self.pipeline(results)
  File "/content/CBNetV2/mmdet/datasets/pipelines/compose.py", line 40, in __call__
    data = t(data)
  File "/content/CBNetV2/mmdet/datasets/pipelines/loading.py", line 373, in __call__
    results = self._load_semantic_seg(results)
  File "/content/CBNetV2/mmdet/datasets/pipelines/loading.py", line 347, in _load_semantic_seg
    img_bytes = self.file_client.get(filename)
  File "/usr/local/lib/python3.7/dist-packages/mmcv/fileio/file_client.py", line 306, in get
    return self.client.get(filepath)
  File "/usr/local/lib/python3.7/dist-packages/mmcv/fileio/file_client.py", line 184, in get
    with open(filepath, 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'data/coco/stuffthingmaps/train2017/11.png'

My config file:

_base_ = '../cbnet/htc_cbv2_swin_large_patch4_window7_mstrain_400-1400_giou_4conv1f_adamw_1x_coco.py'

model = dict(
    roi_head=dict(
        bbox_head=[
            dict( type='ConvFCBBoxHead', num_shared_convs=4, num_shared_fcs=1, in_channels=256, conv_out_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=3, bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[0., 0., 0., 0.], target_stds=[0.1, 0.1, 0.2, 0.2]), reg_class_agnostic=True, reg_decoded_bbox=True, norm_cfg=dict(type='SyncBN', requires_grad=True), loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
            dict( type='ConvFCBBoxHead', num_shared_convs=4, num_shared_fcs=1, in_channels=256, conv_out_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=3, bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[0., 0., 0., 0.], target_stds=[0.05, 0.05, 0.1, 0.1]), reg_class_agnostic=True, reg_decoded_bbox=True, norm_cfg=dict(type='SyncBN', requires_grad=True), loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
            dict( type='ConvFCBBoxHead', num_shared_convs=4, num_shared_fcs=1, in_channels=256, conv_out_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=3, bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[0., 0., 0., 0.], target_stds=[0.033, 0.033, 0.067, 0.067]), reg_class_agnostic=True, reg_decoded_bbox=True, norm_cfg=dict(type='SyncBN', requires_grad=True), loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='GIoULoss', loss_weight=10.0)) ] ) )

model = dict( type='HybridTaskCascade', pretrained=None,
    roi_head=dict( type='HybridTaskCascadeRoIHead', interleaved=True, mask_info_flow=True, num_stages=3, stage_loss_weights=[1, 0.5, 0.25],
        bbox_roi_extractor=dict( type='SingleRoIExtractor', roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), out_channels=256, featmap_strides=[4, 8, 16, 32]),
        bbox_head=[
            dict( type='Shared2FCBBoxHead', in_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=3, bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[0., 0., 0., 0.], target_stds=[0.1, 0.1, 0.2, 0.2]), reg_class_agnostic=True, loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
            dict( type='Shared2FCBBoxHead', in_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=3, bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[0., 0., 0., 0.], target_stds=[0.05, 0.05, 0.1, 0.1]), reg_class_agnostic=True, loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
            dict( type='Shared2FCBBoxHead', in_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=3, bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[0., 0., 0., 0.], target_stds=[0.033, 0.033, 0.067, 0.067]), reg_class_agnostic=True, loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) ],
        mask_roi_extractor=dict( type='SingleRoIExtractor', roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), out_channels=256, featmap_strides=[4, 8, 16, 32]),
        mask_head=[
            dict( type='HTCMaskHead', with_conv_res=False, num_convs=4, in_channels=256, conv_out_channels=256, num_classes=3, loss_mask=dict( type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)),
            dict( type='HTCMaskHead', num_convs=4, in_channels=256, conv_out_channels=256, num_classes=3, loss_mask=dict( type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)),
            dict( type='HTCMaskHead', num_convs=4, in_channels=256, conv_out_channels=256, num_classes=3, loss_mask=dict( type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)) ], ))

dataset_type = 'COCODataset'
classes = ('date', 'fig', 'hazelnut',)
data = dict(
    train=dict( img_prefix='data/images/', classes=classes, ann_file='data/trainval.json'),
    val=dict( img_prefix='data/images/', classes=classes, ann_file='data/trainval.json'),
    test=dict( img_prefix='data/images/', classes=classes, ann_file='data/trainval.json'))

load_from = 'htc_cbv2_swin_large22k_patch4_window7_mstrain_400-1400_giou_4conv1f_adamw_1x_coco.pth'
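
Two things stand out in the log above. First, this config file defines model = dict(...) twice, and in plain Python the second assignment replaces the first; that is why the dumped config shows Shared2FCBBoxHead rather than the intended ConvFCBBoxHead overrides, and it also explains the "unexpected" shared_convs keys and "missing" shared_fcs keys in the state-dict warnings. Second, the fc_cls / conv_logits size mismatches are expected whenever an 80-class COCO checkpoint is fine-tuned with num_classes=3: MMDetection skips the mismatched weights and trains those layers from scratch. If the warnings are unwanted, one option is to strip the class-dependent weights from the checkpoint beforehand; a minimal sketch of a hypothetical helper (not part of this repo) follows:

```python
# Hypothetical helper: remove the class-count-dependent weights from the
# COCO-pretrained checkpoint so everything that remains loads cleanly into
# a 3-class model. MMDetection skips mismatched shapes anyway; this only
# silences the size-mismatch warnings.
import torch

ckpt = torch.load(
    'htc_cbv2_swin_large22k_patch4_window7_mstrain_400-1400_giou_4conv1f_adamw_1x_coco.pth',
    map_location='cpu')
state = ckpt.get('state_dict', ckpt)

for key in list(state):
    # fc_cls has shape (num_classes + 1, 1024) and conv_logits has shape
    # (num_classes, 256, 1, 1), so both depend on the number of classes.
    if 'fc_cls' in key or 'conv_logits' in key:
        del state[key]

torch.save(ckpt, 'htc_cbv2_coco_headless.pth')  # point load_from at this file
```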

mangoyuan commented 3 years ago

I want to train HTC with DB-Swin-B on a custom dataset without instance segmentation masks. Is there any solution?

yestaehyung commented 3 years ago

> I want to train HTC with DB-Swin-B on a custom dataset without instance segmentation masks. Is there any solution?

I have the same problem. (Sorry, my English is not good.)

In my case, I changed htc_cbv2_swin_base_patch4_window7_mstrain_400-1400_adamw_20e_coco.py: it contains data_root and seg_prefix, and you must change these to your local paths, as in the sketch below.
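
For illustration, the override that suggestion implies might look like the following; every path here is a placeholder for wherever your images, annotations, and stuffthingmaps actually live:

```python
# Hypothetical local layout; adjust the paths to your own machine.
data_root = '/path/to/your/data/'

data = dict(
    train=dict(
        ann_file=data_root + 'trainval.json',
        img_prefix=data_root + 'images/',
        # LoadAnnotations(with_seg=True) reads one semantic-seg PNG per
        # training image from seg_prefix, so this directory must exist.
        seg_prefix=data_root + 'stuffthingmaps/train2017/'))
```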

ttrungtin2910 commented 2 years ago

Want to train HTC with DB-Swin-B in custom dataset without instance segmentation masks. Is there any solution?

Do you have a solution? Please help me.
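
For anyone still stuck here: upstream MMDetection ships an HTC variant without the semantic branch (htc_without_semantic_r50_fpn_1x_coco.py), and the same idea should carry over to this repo's configs. Below is an untested sketch that removes the semantic head and drops the with_seg / SegRescale / gt_semantic_seg steps from the training pipeline, so neither stuffthingmaps nor stuff annotations are needed; the _base_ path is a placeholder:

```python
# Untested sketch: HTC-style training without the semantic branch.
_base_ = './htc_cbv2_swin_large_patch4_window7_mstrain_400-1400_giou_4conv1f_adamw_1x_coco.py'  # adjust

# Remove the semantic branch from the ROI head; HybridTaskCascadeRoIHead
# treats semantic_head=None as "no semantic supervision".
model = dict(
    roi_head=dict(
        semantic_roi_extractor=None,
        semantic_head=None))

# Rebuild the train pipeline without with_seg, SegRescale, and
# gt_semantic_seg, so nothing tries to read data/coco/stuffthingmaps/.
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='Resize', img_scale=[(1600, 400), (1600, 1400)],
         multiscale_mode='range', keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
]
data = dict(train=dict(pipeline=train_pipeline))
```

The bbox and mask heads still need num_classes overridden to match your dataset, as in the configs earlier in this thread.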