open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

Out of memory bus error during training with docker #1997

Closed · ChutianXu0922 closed 4 years ago

ChutianXu0922 commented 4 years ago

Problem description

Hello! When I launched training with the following command:

sudo docker run --rm -it -v $(pwd):/$(pwd) -w $(pwd) --name cascademask cascademask python tools/train.py configs/mask_rcnn_r50_fpn_1x.py

the following RuntimeError was raised:

RuntimeError: DataLoader worker (pid 37) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.

However, when I checked my server, it still had plenty of free memory while the training was running. I am confused why this error occurred with memory to spare. I would be very grateful for any help.
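As a sanity check, one can inspect the container's shared-memory mount, which Docker caps at 64 MB by default regardless of how much host RAM is free; PyTorch DataLoader workers hand batches to the main process through /dev/shm, so a few workers can exhaust that cap even on a machine with plenty of memory. A minimal check, assuming the container from the command above is still running:

docker exec cascademask df -h /dev/shm
# a container with default settings typically reports a 64M tmpfs mounted on /dev/shm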

The output of my nvidia-smi is shown in the attached screenshot: [image]

Traceback

2020-01-17 07:32:35,054 - INFO - Distributed training: False
2020-01-17 07:32:35,054 - INFO - MMDetection Version: 1.0rc1+e032ebb
2020-01-17 07:32:35,054 - INFO - Config:
# model settings
model = dict(
    type='MaskRCNN',
    pretrained='torchvision://resnet50',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        style='pytorch'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_scales=[8],
        anchor_ratios=[0.5, 1.0, 2.0],
        anchor_strides=[4, 8, 16, 32, 64],
        target_means=[.0, .0, .0, .0],
        target_stds=[1.0, 1.0, 1.0, 1.0],
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
    bbox_roi_extractor=dict(
        type='SingleRoIExtractor',
        roi_layer=dict(type='RoIAlign', out_size=7, sample_num=2),
        out_channels=256,
        featmap_strides=[4, 8, 16, 32]),
    bbox_head=dict(
        type='SharedFCBBoxHead',
        num_fcs=2,
        in_channels=256,
        fc_out_channels=1024,
        roi_feat_size=7,
        num_classes=81,
        target_means=[0., 0., 0., 0.],
        target_stds=[0.1, 0.1, 0.2, 0.2],
        reg_class_agnostic=False,
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
        loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
    mask_roi_extractor=dict(
        type='SingleRoIExtractor',
        roi_layer=dict(type='RoIAlign', out_size=14, sample_num=2),
        out_channels=256,
        featmap_strides=[4, 8, 16, 32]),
    mask_head=dict(
        type='FCNMaskHead',
        num_convs=4,
        in_channels=256,
        conv_out_channels=256,
        num_classes=81,
        loss_mask=dict(
            type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)))
# model training and testing settings
train_cfg = dict(
    rpn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.7,
            neg_iou_thr=0.3,
            min_pos_iou=0.3,
            ignore_iof_thr=-1),
        sampler=dict(
            type='RandomSampler',
            num=256,
            pos_fraction=0.5,
            neg_pos_ub=-1,
            add_gt_as_proposals=False),
        allowed_border=0,
        pos_weight=-1,
        debug=False),
    rpn_proposal=dict(
        nms_across_levels=False,
        nms_pre=2000,
        nms_post=2000,
        max_num=2000,
        nms_thr=0.7,
        min_bbox_size=0),
    rcnn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.5,
            neg_iou_thr=0.5,
            min_pos_iou=0.5,
            ignore_iof_thr=-1),
        sampler=dict(
            type='RandomSampler',
            num=512,
            pos_fraction=0.25,
            neg_pos_ub=-1,
            add_gt_as_proposals=True),
        mask_size=28,
        pos_weight=-1,
        debug=False))
test_cfg = dict(
    rpn=dict(
        nms_across_levels=False,
        nms_pre=1000,
        nms_post=1000,
        max_num=1000,
        nms_thr=0.7,
        min_bbox_size=0),
    rcnn=dict(
        score_thr=0.05,
        nms=dict(type='nms', iou_thr=0.5),
        max_per_img=100,
        mask_thr_binary=0.5))
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    imgs_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_train2017.json',
        img_prefix=data_root + 'train2017/',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline))
# optimizer
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=1.0 / 3,
    step=[8, 11])
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook')
    ])
# yapf:enable
evaluation = dict(interval=1)
# runtime settings
total_epochs = 12
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/mask_rcnn_r50_fpn_1x'
load_from = None
resume_from = None
workflow = [('train', 1)]

2020-01-17 07:32:35,466 - INFO - load model from: torchvision://resnet50
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /root/.cache/torch/checkpoints/resnet50-19c8e357.pth
100%|###################################################| 97.8M/97.8M [00:08<00:00, 11.6MB/s]
2020-01-17 07:32:44,674 - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: fc.weight, fc.bias

loading annotations into memory...
Done (t=17.53s)
creating index...
index created!
2020-01-17 07:33:11,633 - INFO - Start running, host: root@0798a02fad8d, work_dir: /home/ChutianXu0922/instance/cascademask/work_dirs/mask_rcnn_r50_fpn_1x
2020-01-17 07:33:11,633 - INFO - workflow: [('train', 1)], max: 12 epochs
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
Traceback (most recent call last):
  File "tools/train.py", line 124, in <module>
    main()
  File "tools/train.py", line 120, in main
    timestamp=timestamp)
  File "/mmdetection/mmdet/apis/train.py", line 133, in train_detector
    timestamp=timestamp)
  File "/mmdetection/mmdet/apis/train.py", line 319, in _non_dist_train
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/runner.py", line 364, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/runner.py", line 268, in train
    self.model, data_batch, train_mode=True, **kwargs)
  File "/mmdetection/mmdet/apis/train.py", line 100, in batch_processor
    losses = model(**data)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/mmdetection/mmdet/core/fp16/decorators.py", line 49, in new_func
    return old_func(*args, **kwargs)
  File "/mmdetection/mmdet/models/detectors/base.py", line 138, in forward
    return self.forward_train(img, img_meta, **kwargs)
  File "/mmdetection/mmdet/models/detectors/two_stage.py", line 166, in forward_train
    x = self.extract_feat(img)
  File "/mmdetection/mmdet/models/detectors/two_stage.py", line 92, in extract_feat
    x = self.backbone(img)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/mmdetection/mmdet/models/backbones/resnet.py", line 522, in forward
    x = self.conv1(x)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 345, in forward
    return self.conv2d_forward(input, self.weight)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
    self.padding, self.dilation, self.groups)
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 37) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.

xpngzhng commented 4 years ago

Add the --ipc=host option when running nvidia-docker run. https://docs.nvidia.com/deeplearning/frameworks/user-guide/index.html#setincshmem
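For example, a sketch based on the original command in this issue (the --shm-size=8g value in the alternative is an arbitrary choice; size it to your dataset and worker count):

sudo docker run --rm -it --ipc=host -v $(pwd):/$(pwd) -w $(pwd) --name cascademask cascademask python tools/train.py configs/mask_rcnn_r50_fpn_1x.py

# alternatively, keep IPC isolation and just enlarge the container's private /dev/shm:
sudo docker run --rm -it --shm-size=8g -v $(pwd):/$(pwd) -w $(pwd) --name cascademask cascademask python tools/train.py configs/mask_rcnn_r50_fpn_1x.py

--ipc=host shares the host's IPC namespace (and thus its shared memory) with the container, while --shm-size raises the container's own /dev/shm limit; either lifts the 64 MB default that kills the DataLoader workers.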

ChutianXu0922 commented 4 years ago

Add the --ipc=host option when running nvidia-docker run. https://docs.nvidia.com/deeplearning/frameworks/user-guide/index.html#setincshmem

It works! Thank you very much!