WXinlong / SOLO

SOLO and SOLOv2 for instance segmentation, ECCV 2020 & NeurIPS 2020.

How to use COCO and convert my dataset to COCO format, can anybody give a detailed tutorial? Much thanks. #37

Open bigbigxing823 opened 4 years ago

WXinlong commented 4 years ago

@Chen823 Please refer to GETTING_STARTED.md for details.
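
Since the training config quoted later in this thread uses CocoDataset, the custom annotations need to be a COCO-style instance-segmentation JSON. Below is a minimal sketch of building such a file with pycocotools; the category name "defect", the structure of samples, and the output path are hypothetical placeholders, not anything defined by this repo.

"""
# Minimal sketch: convert per-image binary instance masks into a COCO-style
# instance-segmentation annotation file. Category name, sample structure and
# output path are placeholders; adapt them to the custom dataset.
import json
import numpy as np
from pycocotools import mask as maskUtils


def convert_to_coco(samples, out_file):
    # samples: iterable of (file_name, height, width, list_of_HxW_binary_masks)
    images, annotations = [], []
    categories = [{"id": 1, "name": "defect", "supercategory": "object"}]
    ann_id = 1
    for img_id, (file_name, height, width, masks) in enumerate(samples, start=1):
        images.append({"id": img_id, "file_name": file_name,
                       "height": height, "width": width})
        for m in masks:  # m: HxW array, nonzero on the instance's pixels
            rle = maskUtils.encode(np.asfortranarray(m.astype(np.uint8)))
            bbox = maskUtils.toBbox(rle).tolist()  # [x, y, w, h]
            area = float(maskUtils.area(rle))
            rle["counts"] = rle["counts"].decode("utf-8")  # make RLE JSON-serializable
            annotations.append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": 1,
                "segmentation": rle,
                "bbox": bbox,
                "area": area,
                "iscrowd": 0})
            ann_id += 1
    with open(out_file, "w") as f:
        json.dump({"images": images, "annotations": annotations,
                   "categories": categories}, f)
"""

The resulting file can then sit under data/coco/annotations/ and be referenced via ann_file in the config, with img_prefix pointing at the matching image folder.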

bigbigxing823 commented 4 years ago

> @Chen823 Please refer to GETTING_STARTED.md for details.

@WXinlong Thanks for your prompt reply, I get it.

But now I am confused about something else: I trained on a custom COCO dataset with ./tools/dist_train.sh configs/solo/decoupled_solo_light_dcn_r50_fpn_8gpu_3x.py 4 and successfully got weight files like this,

(screenshot: weight files)

and here is my train log:

"""
2020-04-29 11:42:54,670 - mmdet - INFO - Distributed training: True

2020-04-29 11:42:54,670 - mmdet - INFO - MMDetection Version: 1.0.0+d5398a0

2020-04-29 11:42:54,671 - mmdet - INFO - Config:

# model settings
model = dict(
    type='SOLO',
    pretrained='torchvision://resnet50',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),  # C2, C3, C4, C5
        frozen_stages=1,
        style='pytorch',
        dcn=dict(type='DCN', deformable_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        start_level=0,
        num_outs=5),
    bbox_head=dict(
        type='DecoupledSOLOLightHead',
        num_classes=2,  # 1 + 1
        in_channels=256,
        stacked_convs=4,
        use_dcn_in_tower=True,
        type_dcn='DCN',
        seg_feat_channels=256,
        strides=[8, 8, 16, 32, 32],
        scale_ranges=((1, 64), (32, 128), (64, 256), (128, 512), (256, 2048)),
        sigma=0.2,
        num_grids=[40, 36, 24, 16, 12],
        cate_down_pos=0,
        loss_ins=dict(type='DiceLoss', use_sigmoid=True, loss_weight=3.0),
        loss_cate=dict(
            type='FocalLoss',
            use_sigmoid=True,
            gamma=2.0,
            alpha=0.25,
            loss_weight=1.0)))
# training and testing settings
train_cfg = dict()
test_cfg = dict(
    nms_pre=500,
    score_thr=0.1,
    mask_thr=0.5,
    update_thr=0.05,
    kernel='gaussian',  # gaussian/linear
    sigma=2.0,
    max_per_img=100)
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='Resize',
         img_scale=[(852, 512), (852, 480), (852, 448),
                    (852, 416), (852, 384), (852, 352)],
         multiscale_mode='value',
         keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(852, 512),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    imgs_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_train2017.json',
        img_prefix=data_root + 'train2017/',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'test2017/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        # img_prefix=data_root + 'val2017/',
        img_prefix=data_root + 'test2017/',
        pipeline=test_pipeline))
# optimizer
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=1.0 / 3,
    step=[27, 33])
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        # dict(type='TensorboardLoggerHook')
    ])
# yapf:enable
# runtime settings
total_epochs = 80
device_ids = range(8)
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/decoupled_solo_light_dcn_release_r50_fpn_8gpu_3x'
load_from = None
resume_from = None
workflow = [('train', 1)]

2020-04-29 11:42:55,303 - mmdet - INFO - load model from: torchvision://resnet50
2020-04-29 11:42:59,583 - mmdet - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: fc.weight, fc.bias

missing keys in source state_dict: layer2.0.conv2.conv_offset.weight, layer2.0.conv2.conv_offset.bias, layer2.1.conv2.conv_offset.weight, layer2.1.conv2.conv_offset.bias, layer2.2.conv2.conv_offset.weight, layer2.2.conv2.conv_offset.bias, layer2.3.conv2.conv_offset.weight, layer2.3.conv2.conv_offset.bias, layer3.0.conv2.conv_offset.weight, layer3.0.conv2.conv_offset.bias, layer3.1.conv2.conv_offset.weight, layer3.1.conv2.conv_offset.bias, layer3.2.conv2.conv_offset.weight, layer3.2.conv2.conv_offset.bias, layer3.3.conv2.conv_offset.weight, layer3.3.conv2.conv_offset.bias, layer3.4.conv2.conv_offset.weight, layer3.4.conv2.conv_offset.bias, layer3.5.conv2.conv_offset.weight, layer3.5.conv2.conv_offset.bias, layer4.0.conv2.conv_offset.weight, layer4.0.conv2.conv_offset.bias, layer4.1.conv2.conv_offset.weight, layer4.1.conv2.conv_offset.bias, layer4.2.conv2.conv_offset.weight, layer4.2.conv2.conv_offset.bias

2020-04-29 11:42:59,827 - mmdet - INFO - Start running, host: adt@adt-SA5212M5, work_dir: /home/adt/Documents/alg/big-xing/SOLO/work_dirs/decoupled_solo_light_dcn_release_r50_fpn_8gpu_3x
2020-04-29 11:42:59,828 - mmdet - INFO - workflow: [('train', 1)], max: 80 epochs
"""

After that, I want to test and visualize my results with

""" ./tools/dist_test.sh configs/solo/decoupled_solo_light_dcn_r50_fpn_8gpu_3x.py ./work_dirs/decoupled_solo_light_dcn_release_r50_fpn_8gpu_3x/epoch_80.pth 4 --show --out results_solo.pkl --eval segm """

and

""" python tools/test_ins_vis.py configs/solo/decoupled_solo_light_dcn_r50_fpn_8gpu_3x.py ./work_dirs/decoupled_solo_light_dcn_release_r50_fpn_8gpu_3x/latest.pth --show --save_dir work_dirs/vis_solo """

The test result is:

""" loading annotations into memory... Done (t=0.00s) creating index... index created! loading annotations into memory... loading annotations into memory... Done (t=0.00s) creating index... index created! Done (t=0.00s) creating index... index created! loading annotations into memory... Done (t=0.00s) creating index... index created! [ ] 0/2, elapsed: 0s, ETA:/home/adt/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/functional.py:2539: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details. "See the documentation of nn.Upsample for details.".format(mode)) [>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3/2, 1.8 task/s, elapsed: 2s, ETA: [>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 4/2, 2.4 task/s, elapsed: 2s, ETA: 0s/home/adt/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/functional.py:2539: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details. "See the documentation of nn.Upsample for details.".format(mode)) /home/adt/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/functional.py:2539: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details. "See the documentation of nn.Upsample for details.".format(mode)) /home/adt/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/functional.py:2539: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details. "See the documentation of nn.Upsample for details.".format(mode))

writing results to results_solo.pkl Starting evaluate segm result file is:results_solo.pkl.segm.json Loading and preparing results... Traceback (most recent call last): File "./tools/test_ins.py", line 257, in main() File "./tools/test_ins.py", line 235, in main coco_eval(result_files, eval_types, dataset.coco) File "/home/adt/Documents/alg/big-xing/SOLO/mmdet/core/evaluation/coco_utils.py", line 42, in coco_eval coco_dets = coco.loadRes(result_file) File "/home/adt/anaconda3/envs/pytorch/lib/python3.6/site-packages/pycocotools/coco.py", line 326, in loadRes if 'caption' in anns[0]: IndexError: list index out of range Traceback (most recent call last): File "/home/adt/anaconda3/envs/pytorch/lib/python3.6/runpy.py", line 193, in _run_module_as_main "main", mod_spec) File "/home/adt/anaconda3/envs/pytorch/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/adt/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/distributed/launch.py", line 235, in main() File "/home/adt/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/distributed/launch.py", line 231, in main cmd=process.args) subprocess.CalledProcessError: Command '['/home/adt/anaconda3/envs/pytorch/bin/python', '-u', './tools/test_ins.py', '--local_rank=0', 'configs/solo/decoupled_solo_light_dcn_r50_fpn_8gpu_3x.py', './work_dirs/decoupled_solo_light_dcn_release_r50_fpn_8gpu_3x/epoch_80.pth', '--launcher', 'pytorch', '--show', '--out', 'results_solo.pkl', '--eval', 'segm']' returned non-zero exit status 1. """
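
For context on the traceback: pycocotools' loadRes fails at anns[0] only when the loaded JSON is an empty list, so results_solo.pkl.segm.json apparently contains no predicted instances at all. A minimal way to confirm this (the file name is taken from the log above):

"""
# Check whether the dumped segm results are empty; an empty list is exactly
# what makes coco.loadRes() raise "IndexError: list index out of range".
import json

with open("results_solo.pkl.segm.json") as f:
    segm_results = json.load(f)

print("number of predicted instances:", len(segm_results))
if segm_results:
    # each entry should carry image_id, category_id, segmentation (RLE), score
    print(segm_results[0].keys())
"""

An empty result list would also be consistent with the visualization run below producing no images.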

Meanwhile, the visualization result is: there are no result images generated in "./work_dirs/vis_solo".

Could you give me some further advice on this problem? Thank you.
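
For completeness, a quick sanity check of the converted annotation file with pycocotools can rule out an empty or mis-structured JSON on the data side; the path below mirrors ann_file in the config above.

"""
# Sanity-check the custom COCO annotation file referenced by the config.
from pycocotools.coco import COCO

coco = COCO("data/coco/annotations/instances_val2017.json")
print("categories :", coco.loadCats(coco.getCatIds()))
print("images     :", len(coco.getImgIds()))
print("annotations:", len(coco.getAnnIds()))
"""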