FishYuLi / BalancedGroupSoftmax

CVPR 2020 oral paper: Overcoming Classifier Imbalance for Long-tail Object Detection with Balanced Group Softmax.
Apache License 2.0

dist_train not working #2

Closed Ugness closed 4 years ago

Ugness commented 4 years ago

I tried to train your model with two Titan Xp GPUs using dist_train, but I got the error below. Training with a single GPU via train.py works fine. The only change I made was the pretrained model path in the config file.

Environment: Python 3.6.9, torch 1.3.1, mmcv 0.2.14, torchvision 0.4.2, mmdet 1.0.rc0.

This is my error message:

Traceback (most recent call last):
  File "./tools/train.py", line 169, in <module>
    main()
  File "./tools/train.py", line 165, in main
    logger=logger)
  File "/home/wogns98/BalancedGroupSoftmax/mmdet/apis/train.py", line 58, in train_detector
    _dist_train(model, dataset, cfg, validate=validate)
  File "/home/wogns98/BalancedGroupSoftmax/mmdet/apis/train.py", line 205, in _dist_train
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/usr/local/lib/python3.6/dist-packages/mmcv/runner/runner.py", line 358, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/mmcv/runner/runner.py", line 260, in train
    for i, data_batch in enumerate(data_loader):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 278, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 682, in __init__
    w.start()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 105, in start
    self._popen = self._Popen(self)
  File "/usr/lib/python3.6/multiprocessing/context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/usr/lib/python3.6/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/usr/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/usr/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/usr/lib/python3.6/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.RLock objects
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 253, in <module>
    main()
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 249, in main
    cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-u', './tools/train.py', '--local_rank=1', 'configs/bags/gs_mask_rcnn_r50_fpn_1x_lvis.py', '--launcher', 'pytorch']' returned non-zero exit status 1.

Please give me some advice. Thank you.

Chauncy-Cai commented 3 years ago

I am facing the same problem. Could you tell me how you solved it?

Ugness commented 3 years ago

This repo uses an old version of mmdetection (https://github.com/open-mmlab/mmdetection). I solved this problem by porting the repo to the latest version of mmdetection.
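
If it helps to understand the crash itself: the TypeError at the bottom of the traceback ("can't pickle _thread.RLock objects") is raised while the DataLoader starts its worker processes. With the spawn start method (visible in the traceback via popen_spawn_posix), the dataset object has to be pickled and sent to each worker, and anything attached to it that holds a thread lock (a logger handle is the usual culprit) cannot be serialized. Here is a minimal sketch that reproduces the same error; DatasetLike is a made-up stand-in for illustration, not a class from this repo:

import pickle
import threading

class DatasetLike:
    """Stand-in for a dataset/config object that keeps a lock-holding attribute (e.g. a logger)."""
    def __init__(self):
        self.lock = threading.RLock()  # the kind of object logging handlers hold internally

try:
    pickle.dumps(DatasetLike())
except TypeError as err:
    # On Python 3.6 this prints: can't pickle _thread.RLock objects
    print(err)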

I think it would be better to search for similar issues in the mmdetection repo: https://github.com/open-mmlab/mmdetection.
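
If you need to stay on the mmdetection version bundled with this repo, one workaround you could try (I have not verified it here, so treat it as a guess) is to turn off DataLoader worker processes in configs/bags/gs_mask_rcnn_r50_fpn_1x_lvis.py, so the dataset never has to be pickled for a spawned worker. workers_per_gpu is the standard mmdetection 1.x data field; the rest of the data dict stays as it already is:

data = dict(
    workers_per_gpu=0,  # 0 = load data in the main process; nothing is pickled
                        # for worker processes, so the RLock is never serialized
    # ... keep imgs_per_gpu and the dataset entries exactly as they are
)

Data loading will be slower without workers, but it is a quick way to confirm that the unpicklable object is really what breaks dist_train.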