ailab-prompt-transfer / TextBox

Implementation of PTG
https://github.com/RUCAIBox/TextBox
MIT License

Error when using multiple GPUs #2

Closed minji-o-j closed 1 year ago

minji-o-j commented 1 year ago

Command executed

accelerate launch run_textbox.py --model=PTG --dataset=pc --model_path=facebook/bart-large --gpu_id=0,1

Error

15 Jul 06:11    ERROR Traceback (most recent call last):
  File "/workspace/TextBox/textbox/utils/dashboard.py", line 321, in new_experiment
    yield True
  File "/workspace/TextBox/textbox/quick_start/experiment.py", line 128, in run
    self._do_train_and_valid()
  File "/workspace/TextBox/textbox/quick_start/experiment.py", line 105, in _do_train_and_valid
    self.valid_result = self.trainer.fit(train_data, valid_data)
  File "/workspace/TextBox/textbox/trainer/trainer.py", line 423, in fit
    loss = self._train_epoch(train_data, epoch_idx, valid_data)["loss"]
  File "/workspace/TextBox/textbox/trainer/trainer.py", line 208, in _train_epoch
    loss = self.model(data, epoch_idx=epoch_idx)
  File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1026, in forward
    if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by 
making sure all `forward` function outputs participate in calculating loss. 
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 0: 0
 In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
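
The traceback points at DDP's unused-parameter check. For reference, a minimal sketch of how this option is passed through Hugging Face accelerate in general; the nn.Linear model and optimizer below are stand-ins rather than TextBox code (TextBox exposes the same option through its own --find_unused_parameters flag, as the next comment shows):

import torch
from torch import nn
from accelerate import Accelerator
from accelerate.utils import DistributedDataParallelKwargs

# find_unused_parameters=True lets DDP tolerate parameters that did not
# contribute to the loss in a given step, at the cost of an extra graph
# traversal per iteration.
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])

model = nn.Linear(8, 2)  # stand-in for the actual PTG model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = accelerator.prepare(model, optimizer)  # wraps model in DDP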
minji-o-j commented 1 year ago

Resolved by adding find_unused_parameters=true to the launch command:

accelerate launch run_textbox.py --model=PTG --dataset=pc --model_path=facebook/bart-large --gpu_id=0,1 --find_unused_parameters=true
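
If the goal is to identify which parameter is unused rather than just tolerate it, the traceback also suggests the standard PyTorch environment variable TORCH_DISTRIBUTED_DEBUG; one illustrative way to combine it with the same launch (this combination is an assumption, not taken from the issue):

TORCH_DISTRIBUTED_DEBUG=DETAIL accelerate launch run_textbox.py --model=PTG --dataset=pc --model_path=facebook/bart-large --gpu_id=0,1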