chenbinghui1 / DSL

CVPR2022 paper "Dense Learning based Semi-Supervised Object Detection"
Apache License 2.0
100 stars 10 forks

RuntimeError: Address already in use #28

Open Lost-little-dinosaur opened 1 year ago

Lost-little-dinosaur commented 1 year ago

Hi, after a month of fiddling I finally got the environment set up on Ubuntu 18.04.5 under WSL. But when I run the command from the README (see the attached screenshot), it runs for a while and then fails as follows:

*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
Traceback (most recent call last):
  File "./tools/train.py", line 202, in <module>
    main()
  File "./tools/train.py", line 120, in main
    init_dist(args.launcher, **cfg.dist_params)
  File "/usr/local/lib/python3.8/dist-packages/mmcv/runner/dist_utils.py", line 18, in init_dist
    _init_dist_pytorch(backend, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/mmcv/runner/dist_utils.py", line 35, in _init_dist_pytorch
    dist.init_process_group(backend=backend, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 500, in init_process_group
    store, rank, world_size = next(rendezvous_iterator)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/rendezvous.py", line 190, in _env_rendezvous_handler
    store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
RuntimeError: Address already in use
[... the same traceback, ending in "RuntimeError: Address already in use", is repeated by each of the remaining worker processes ...]
Killing subprocess 1941
Killing subprocess 1942
Killing subprocess 1943
Killing subprocess 1944
Killing subprocess 1945
Killing subprocess 1946
Killing subprocess 1947
Killing subprocess 1948
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 192, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 340, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 326, in main
    sigkill_handler(signal.SIGTERM, None)  # not coming back
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 301, in sigkill_handler
    raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python', '-u', './tools/train.py', '--local_rank=7', 'configs/fcos_semi/r50_caffe_mslonger_tricks_0.Xdata.py', '--launcher', 'pytorch', '--work-dir', 'workdir_coco/r50_caffe_mslonger_tricks_0.1data']' returned non-zero exit status 1.

The main error is RuntimeError: Address already in use. I tried running the command from the error message by itself, /usr/bin/python -u ./tools/train.py --local_rank=7 configs/fcos_semi/r50_caffe_mslonger_tricks_0.Xdata.py --launcher pytorch --work-dir workdir_coco/r50_caffe_mslonger_tricks_0.1data, and that runs fine. From what I found online, the error happens when PyTorch distributed training runs several jobs on one machine and they all bind the same port, so I changed every master_port parameter in the DSL project (see the attached screenshot), but I still get RuntimeError: Address already in use... This is really frustrating. Could you please take a look? I have already starred the repo.
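For anyone debugging the same thing: a quick way to check whether the rendezvous port is really occupied, and by what, is sketched below. This is generic, not from the repo; 29500 is only the torch.distributed.launch default, so substitute whatever PORT the DSL scripts actually pass.

# show whatever is already listening on the rendezvous port (29500 is only the launcher default)
ss -ltnp | grep 29500
# leftover workers from a previous crashed run are a common culprit
ps aux | grep tools/train.py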

Lost-little-dinosaur commented 1 year ago

Also, when I ran the project earlier it complained that several environment variables had to be set: RANK, MASTER_PORT, MASTER_ADDR and WORLD_SIZE. I didn't know what to use, so I just picked values at random: RANK=9 MASTER_PORT=127.0.0.1 MASTER_ADDR=12139 WORLD_SIZE=2. I'm not sure whether that's right (the tutorial doesn't say what they should be).
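For reference, in stock PyTorch these variables have fixed meanings, and on a single machine they usually end up looking like the sketch below (note that MASTER_ADDR takes the address and MASTER_PORT the port, the opposite of the values quoted above); torch.distributed.launch normally exports them for each worker by itself, so setting them by hand is rarely needed.

export MASTER_ADDR=127.0.0.1   # address of the rank-0 machine (an IP/hostname, not a port)
export MASTER_PORT=29500       # any free TCP port on that machine
export WORLD_SIZE=1            # total number of processes, usually one per GPU
export RANK=0                  # this process's index, from 0 to WORLD_SIZE-1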

chenbinghui1 commented 1 year ago

For the "address already in use" problem, try the following: (1) Run nvidia-smi -l first and shut down all other programs before rerunning; sometimes when an earlier run crashed, its processes were never killed and the GPU memory was never released. (2) On top of (1), change the port in the script to any other value: https://github.com/chenbinghui1/DSL/blob/4c2e6f3c5ffcc18ed874061f054da9779a2f736d/demo/model_train/baseline_coco.sh#L14

Note: the port is simply already taken. Two jobs cannot run on the same port at the same time, so change the port and make sure any processes that were never shut down are killed.
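Concretely, the two steps above might look like this (a sketch; the PIDs are placeholders and the PORT location is the line the link above points at):

nvidia-smi                       # look for python workers from the crashed run still holding GPU memory
kill -9 <pid>                    # kill each stale worker (PID taken from nvidia-smi / ps)
# then edit the PORT value in demo/model_train/baseline_coco.sh to any unused port,
# e.g. 29501, and rerun the script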

chenbinghui1 commented 1 year ago

> Also, when I ran the project earlier it complained that several environment variables had to be set: RANK, MASTER_PORT, MASTER_ADDR and WORLD_SIZE. I didn't know what to use, so I just picked values at random: RANK=9 MASTER_PORT=127.0.0.1 MASTER_ADDR=12139 WORLD_SIZE=2. I'm not sure whether that's right (the tutorial doesn't say what they should be).

Set rank to the number of GPUs you have, and also check & modify CUDA_VISIBLE_DEVICES in the script mentioned above.
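For a single-GPU machine, the launch boils down to something like the sketch below (reconstructed from the command shown in the error log above; the actual script may pass extra flags, and 29501 is only an example of a free port):

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port=29501 \
    ./tools/train.py configs/fcos_semi/r50_caffe_mslonger_tricks_0.Xdata.py \
    --launcher pytorch --work-dir workdir_coco/r50_caffe_mslonger_tricks_0.1data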

Lost-little-dinosaur commented 1 year ago

Thank you, that did fix some of the errors, but now a new one shows up... When I run sudo sh ./demo/model_train/baseline_coco.sh, I get the following:

root@ASUS:/home/dinosaur/src/python/myDSL/DSL# sudo sh ./demo/model_train/baseline_coco.sh

2022-12-24 18:07:45,238 - mmdet - INFO - Environment info:
------------------------------------------------------------
sys.platform: linux
Python: 3.8.0 (default, Dec  9 2021, 17:53:27) [GCC 8.4.0]
CUDA available: True
GPU 0: NVIDIA GeForce GTX 1660 Ti
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.2, V10.2.89
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.8.2+cu102
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 10.2
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70
  - CuDNN 7.6.5
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

TorchVision: 0.9.2+cu102
OpenCV: 4.6.0
MMCV: 1.3.17
MMCV Compiler: GCC 7.5
MMCV CUDA Compiler: 10.2
MMDetection: 2.14.0+4c2e6f3
------------------------------------------------------------

2022-12-24 18:07:45,730 - mmdet - INFO - Distributed training: True
2022-12-24 18:07:46,298 - mmdet - INFO - Config:
model = dict(
    type='FCOS',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=False),
        norm_eval=True,
        style='caffe',
        init_cfg=dict(
            type='Pretrained',
            checkpoint='open-mmlab://detectron2/resnet50_caffe')),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        start_level=1,
        add_extra_convs='on_output',
        num_outs=5,
        relu_before_extra_convs=True),
    bbox_head=dict(
        type='FCOSHead',
        num_classes=80,
        in_channels=256,
        stacked_convs=4,
        feat_channels=256,
        strides=[8, 16, 32, 64, 128],
        norm_on_bbox=True,
        centerness_on_reg=True,
        dcn_on_last_conv=False,
        center_sampling=True,
        conv_bias=True,
        loss_cls=dict(
            type='FocalLoss',
            use_sigmoid=True,
            gamma=2.0,
            alpha=0.25,
            loss_weight=1.0),
        loss_bbox=dict(type='GIoULoss', loss_weight=1.0),
        loss_centerness=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)),
    train_cfg=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.5,
            neg_iou_thr=0.4,
            min_pos_iou=0,
            ignore_iof_thr=-1),
        allowed_border=-1,
        pos_weight=-1,
        debug=False),
    test_cfg=dict(
        nms_pre=1000,
        min_bbox_size=0,
        score_thr=0.05,
        nms=dict(type='nms', iou_threshold=0.5),
        max_per_img=100))
img_norm_cfg = dict(
    mean=[103.53, 116.28, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='Resize',
        img_scale=[(1333, 640), (1333, 800)],
        multiscale_mode='value',
        keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(
        type='Normalize',
        mean=[103.53, 116.28, 123.675],
        std=[1.0, 1.0, 1.0],
        to_rgb=False),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(
                type='Normalize',
                mean=[103.53, 116.28, 123.675],
                std=[1.0, 1.0, 1.0],
                to_rgb=False),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]
dataset_type = 'CocoDataset'
data_root = '/gruntdata1/bhchen/factory/data/semicoco/'
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type='CocoDataset',
        ann_file=
        'data_list/coco_semi/semi_supervised/instances_train2017.2@10.json',
        img_prefix='/gruntdata1/bhchen/factory/data/semicoco/images/full/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations', with_bbox=True),
            dict(
                type='Resize',
                img_scale=[(1333, 640), (1333, 800)],
                multiscale_mode='value',
                keep_ratio=True),
            dict(type='RandomFlip', flip_ratio=0.5),
            dict(
                type='Normalize',
                mean=[103.53, 116.28, 123.675],
                std=[1.0, 1.0, 1.0],
                to_rgb=False),
            dict(type='Pad', size_divisor=32),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
        ]),
    val=dict(
        type='CocoDataset',
        ann_file='data_list/coco_semi/semi_supervised/instances_val2017.json',
        img_prefix=
        '/gruntdata1/bhchen/factory/data/semicoco/valid_images/full/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1333, 800),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[103.53, 116.28, 123.675],
                        std=[1.0, 1.0, 1.0],
                        to_rgb=False),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ]),
    test=dict(
        type='CocoDataset',
        ann_file=
        'data_list/coco_semi/semi_supervised/instances_train2017.2@10-unlabeled.json',
        img_prefix='/gruntdata1/bhchen/factory/data/semicoco/images/full/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1333, 800),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[103.53, 116.28, 123.675],
                        std=[1.0, 1.0, 1.0],
                        to_rgb=False),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ]))
optimizer = dict(
    type='SGD',
    lr=0.01,
    momentum=0.9,
    weight_decay=0.0001,
    paramwise_cfg=dict(bias_lr_mult=2.0, bias_decay_mult=0.0))
optimizer_config = dict(grad_clip=None)
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.3333333333333333,
    step=[50, 80])
runner = dict(type='EpochBasedRunner', max_epochs=100)
evaluation = dict(interval=5, metric='bbox')
checkpoint_config = dict(interval=5)
log_config = dict(interval=10, hooks=[dict(type='TextLoggerHook')])
custom_hooks = [dict(type='NumClassCheckHook')]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
work_dir = 'workdir_coco/r50_caffe_mslonger_tricks_0.1data'
gpu_ids = range(0, 1)

2022-12-24 18:07:46,534 - mmdet - INFO - initialize ResNet with init_cfg {'type': 'Pretrained', 'checkpoint': 'open-mmlab://detectron2/resnet50_caffe'}
2022-12-24 18:07:46,534 - mmcv - INFO - load model from: open-mmlab://detectron2/resnet50_caffe
2022-12-24 18:07:46,535 - mmcv - INFO - load checkpoint from openmmlab path: open-mmlab://detectron2/resnet50_caffe
2022-12-24 18:07:46,599 - mmcv - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: conv1.bias

2022-12-24 18:07:46,616 - mmdet - INFO - initialize FPN with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2022-12-24 18:07:46,636 - mmdet - INFO - initialize FCOSHead with init_cfg {'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01, 'override': {'type': 'Normal', 'name': 'conv_cls', 'std': 0.01, 'bias_prob': 0.01}}
loading annotations into memory...
Done (t=1.30s)
creating index...
index created!
Traceback (most recent call last):
  File "./tools/train.py", line 202, in <module>
    main()
  File "./tools/train.py", line 190, in main
    train_detector(
  File "/home/dinosaur/src/python/myDSL/DSL/mmdet/apis/train.py", line 92, in train_detector
    model = MMDistributedDataParallel(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 446, in __init__
    self._sync_params_and_buffers(authoritative_rank=0)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 457, in _sync_params_and_buffers
    self._distributed_broadcast_coalesced(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 1155, in _distributed_broadcast_coalesced
    dist._broadcast_coalesced(
RuntimeError: NCCL error in: ../torch/lib/c10d/ProcessGroupNCCL.cpp:825, unhandled system error, NCCL version 2.7.8
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
Killing subprocess 3284
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 192, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 340, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 326, in main
    sigkill_handler(signal.SIGTERM, None)  # not coming back
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 301, in sigkill_handler
    raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python', '-u', './tools/train.py', '--local_rank=0', 'configs/fcos_semi/r50_caffe_mslonger_tricks_0.Xdata.py', '--launcher', 'pytorch', '--work-dir', 'workdir_coco/r50_caffe_mslonger_tricks_0.1data']' returned non-zero exit status 1.

The key part is:

Traceback (most recent call last):
  File "./tools/train.py", line 202, in <module>
    main()
  File "./tools/train.py", line 190, in main
    train_detector(
  File "/home/dinosaur/src/python/myDSL/DSL/mmdet/apis/train.py", line 92, in train_detector
    model = MMDistributedDataParallel(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 446, in __init__
    self._sync_params_and_buffers(authoritative_rank=0)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 457, in _sync_params_and_buffers
    self._distributed_broadcast_coalesced(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 1155, in _distributed_broadcast_coalesced
    dist._broadcast_coalesced(
RuntimeError: NCCL error in: ../torch/lib/c10d/ProcessGroupNCCL.cpp:825, unhandled system error, NCCL version 2.7.8
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
Killing subprocess 3284

I searched online, and the suggestions basically amount to adding the following exports before the training command:

export NCCL_IB_DISABLE=1
export NCCL_P2P_DISABLE=1
export NCCL_DEBUG=INFO
export NCCL_SOCKET_IFNAME=eth0

I added these to ./tools/dist_train.sh (see the attached screenshot), but the error persists. Then I read that NCCL itself might not be installed properly, so I reinstalled the exact version from the error message, NCCL 2.7.8, and it still doesn't work...

Some posts say this happens because the program has used up all my GPU memory... What should I do? I have a GTX 1660 Ti, which shouldn't be that bad, so how can it run out? I couldn't find a concrete fix for that either. Please advise... (I've gone through the first three pages of Google results without finding anything that solves it.)
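One more thing worth checking with those exports, since this is WSL: NCCL_SOCKET_IFNAME=eth0 only helps if an interface named eth0 actually exists, and NCCL_DEBUG=INFO only helps if its output is read back. A small sketch, nothing repo-specific:

ip addr show                      # list the network interfaces WSL actually exposes
export NCCL_SOCKET_IFNAME=eth0    # replace eth0 with an interface name from the list above
export NCCL_DEBUG=INFO            # rerun and read the NCCL INFO/WARN lines for the real failure
# (untested) as a last resort for single-GPU debugging, the dist_params backend 'nccl'
# in the config could be swapped for 'gloo'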

chenbinghui1 commented 1 year ago

@Lost-little-dinosaur A single GTX 1660? That is a bit underpowered. Could you switch to a different card, with at least 12 GB of memory? Also, your export is wrong: RANK=i assigns the literal character i to RANK. I'm not sure what that assignment was meant to achieve, but I'd suggest double-checking it.

Lost-little-dinosaur commented 1 year ago

Hmm... okay, thank you. That RANK was a typo, it should be 1, not i. You said earlier to set RANK to the number of GPUs, and since I only have one card I set it to 1. After fixing it I still get the same error, though, so I'll go try a machine with a larger-memory GPU.

I'd still suggest adding a requirement of at least 12 GB of GPU memory to the README dependencies section... otherwise others will probably spend a month setting up the environment like I did, only to find it was for nothing.

chenbinghui1 commented 1 year ago

@Lost-little-dinosaur A single 6 GB card can run it if you don't count the time cost, at least for the first-step baseline model. If GPU memory were actually insufficient, you would see a "CUDA out of memory" error rather than the one you are getting. I haven't run into your error myself. You could dig further, or switch to another compute-class card such as a 1080 or 2080.
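For what it's worth, a simple way to tell whether the 6 GB card is actually the limit is to watch memory usage while the baseline step runs (plain nvidia-smi polling, nothing repo-specific):

# poll GPU memory every 5 seconds; if usage climbs to ~6 GB and the job dies with
# "CUDA out of memory", the card is the bottleneck -- otherwise keep debugging the NCCL setup
nvidia-smi --query-gpu=timestamp,memory.used,memory.total --format=csv -l 5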