shenyunhang / DRN-WSOD-pytorch

Enabling Deep Residual Networks for Weakly Supervised Object Detection
https://github.com/shenyunhang/DRN-WSOD-pytorch/tree/DRN-WSOD/projects/WSL
Apache License 2.0

Can't train with multiple GPUs #4

Closed · Enrit0 closed this issue 3 years ago

Enrit0 commented 3 years ago

Hi. Thanks for your work. The code works with single-GPU training, but when I try to run in multi-GPU mode I get an error.

The command I run:

python3 projects/WSL/tools/train_net.py --num-gpus 2 --config-file projects/WSL/configs/PascalVOC-Detection/oicr_WSR_101_DC5_1x.yaml OUTPUT_DIR output/oicr_WSR_101_DC5_VOC07_`date +'%Y-%m-%d_%H-%M-%S'`

The error message:

-- Process 1 terminated with the following error:
Traceback (most recent call last):
  File "/home/anaconda3/envs/detectron2/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
    fn(i, *args)
  File "/home/anaconda3/envs/detectron2/lib/python3.6/site-packages/detectron2/engine/launch.py", line 94, in _distributed_worker
    main_func(*args)
  File "/home/projects/DRN-WSOD-pytorch/projects/WSL/tools/train_net.py", line 243, in main
    return trainer.train()
  File "/home/anaconda3/envs/detectron2/lib/python3.6/site-packages/detectron2/engine/defaults.py", line 399, in train
    super().train(self.start_iter, self.max_iter)
  File "/home/anaconda3/envs/detectron2/lib/python3.6/site-packages/detectron2/engine/train_loop.py", line 140, in train
    self.run_step()
  File "/home/projects/DRN-WSOD-pytorch/projects/WSL/tools/train_net.py", line 88, in run_step
    loss_dict = self.model(data)
  File "/home/anaconda3/envs/detectron2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/anaconda3/envs/detectron2/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 528, in forward
    self.reducer.prepare_for_backward([])
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).

Do you have any suggestions?
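
As the error message itself suggests, one common workaround is to enable unused-parameter detection when the model is wrapped in DistributedDataParallel, since weakly supervised heads such as OICR can leave some parameters out of the loss in a given iteration. Below is a minimal sketch of that wrapping, not the repository's actual code; the helper name wrap_model_for_ddp is hypothetical, and detectron2's trainer normally performs this wrapping internally.

# Hedged sketch, not the repository's code: shows where one could pass
# find_unused_parameters=True when wrapping the model for multi-GPU training.
# The helper name wrap_model_for_ddp is hypothetical.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

def wrap_model_for_ddp(model):
    # Assumes the process group was already initialized by the launcher
    # (detectron2's launch() does this before calling main()).
    if not (dist.is_available() and dist.is_initialized()) or dist.get_world_size() == 1:
        return model
    local_rank = dist.get_rank() % torch.cuda.device_count()
    return DistributedDataParallel(
        model,
        device_ids=[local_rank],
        broadcast_buffers=False,
        # Tells the DDP reducer to tolerate parameters that did not
        # contribute to the loss in this iteration.
        find_unused_parameters=True,
    )

The error message also mentions the alternative of making sure every output of forward participates in the loss, which avoids the extra overhead of unused-parameter detection.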

Enrit0 commented 3 years ago

Yes, that's the log. I will also check your log.

shenyunhang commented 3 years ago

OK, I will upload a new training log later.

Enrit0 commented 3 years ago

Hi, just an update on this issue: re-installing and re-building solved it for me. Thanks.