ailab-prompt-transfer / TextBox

Implementation of PTG
https://github.com/RUCAIBox/TextBox
MIT License

DDP expects same model across all ranks, but Rank 0 has 10 params, while rank 1 has inconsistent 16 params. #3

Open minji-o-j opened 1 year ago

minji-o-j commented 1 year ago

An error occurred while running experiment 6.

Command used

accelerate launch run_textbox.py --model=PTG --dataset=dd --model_path=facebook/bart-large --gpu_id=0,1 --find_unused_parameters=true
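
For reference, `--find_unused_parameters=true` ultimately controls DDP's `find_unused_parameters` option. In plain Accelerate code that option is normally passed through `DistributedDataParallelKwargs`; the sketch below shows that standard Accelerate pattern, not the TextBox internals.

```python
from accelerate import Accelerator, DistributedDataParallelKwargs

# Ask DDP to tolerate parameters that receive no gradient in a given step,
# which prompt-tuning setups such as PTG typically need.
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```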

Error log

17 Jul 15:35    INFO ====== Finished training, best validation result at train epoch 2 ======
17 Jul 15:35    INFO Best valid result: score: 63.65, <bleu-1: 32.48>, <bleu-2: 31.17>, bleu-3: 33.93, bleu-4: 33.51, distinct-1: 2.82, distinct-2: 10.89, distinct-3: 18.18, distinct-4: 23.90
17 Jul 15:35    INFO Loading model structure and parameters from saved/PTG-dd-2023-Jul-17_12-19-22/checkpoint_best ...
[E ProcessGroupNCCL.cpp:821] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=12163, OpType=ALLGATHER, Timeout(ms)=1800000) ran for 1802592 milliseconds before timing out.
17 Jul 16:05    ERROR Traceback (most recent call last):
  File "/workspace/TextBox/textbox/utils/dashboard.py", line 321, in new_experiment
    yield True
  File "/workspace/TextBox/textbox/quick_start/experiment.py", line 129, in run
    self._do_test()
  File "/workspace/TextBox/textbox/quick_start/experiment.py", line 110, in _do_test
    self.test_result = self.trainer.evaluate(self.test_data, load_best_model=self.do_train)
  File "/opt/conda/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/TextBox/textbox/trainer/trainer.py", line 478, in evaluate
    self.model = self.accelerator.prepare(self.model)
  File "/opt/conda/lib/python3.9/site-packages/accelerate/accelerator.py", line 1199, in prepare
    result = tuple(
  File "/opt/conda/lib/python3.9/site-packages/accelerate/accelerator.py", line 1200, in <genexpr>
    self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
  File "/opt/conda/lib/python3.9/site-packages/accelerate/accelerator.py", line 1027, in _prepare_one
    return self.prepare_model(obj, device_placement=device_placement)
  File "/opt/conda/lib/python3.9/site-packages/accelerate/accelerator.py", line 1295, in prepare_model
    model = torch.nn.parallel.DistributedDataParallel(
  File "/opt/conda/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 655, in __init__
    _verify_param_shape_across_processes(self.process_group, parameters)
  File "/opt/conda/lib/python3.9/site-packages/torch/distributed/utils.py", line 112, in _verify_param_shape_across_processes
    return dist._verify_params_across_processes(process_group, tensors, logger)
RuntimeError: DDP expects same model across all ranks, but Rank 0 has 10 params, while rank 1 has inconsistent 16 params.

[E ProcessGroupNCCL.cpp:456] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[E ProcessGroupNCCL.cpp:461] To avoid data inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
  what():  [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=12163, OpType=ALLGATHER, Timeout(ms)=1800000) ran for 1802592 milliseconds before timing out.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 0 (pid: 932) of binary: /opt/conda/bin/python
Traceback (most recent call last):
  File "/opt/conda/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/opt/conda/lib/python3.9/site-packages/accelerate/commands/launch.py", line 950, in launch_command
    multi_gpu_launcher(args)
  File "/opt/conda/lib/python3.9/site-packages/accelerate/commands/launch.py", line 642, in multi_gpu_launcher
    distrib_run.run(args)
  File "/opt/conda/lib/python3.9/site-packages/torch/distributed/run.py", line 753, in run
    elastic_launch(
  File "/opt/conda/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/opt/conda/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
====================================================
run_textbox.py FAILED
----------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
----------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-07-17_16:05:27
  host      : 2c2e0baec811
  rank      : 0 (local_rank: 0)
  exitcode  : -6 (pid: 932)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 932
====================================================
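
For context on the RuntimeError above: DDP's constructor verifies parameter metadata across ranks before wrapping the model and aborts when the ranks disagree. A minimal sketch, unrelated to the TextBox code, that reproduces the same class of failure by deliberately building models with different parameter counts on each rank:

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP


def worker(rank: int, world_size: int):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Each rank builds a different model, mimicking rank 0 ending up with 10
    # parameter tensors while rank 1 has 16 after the checkpoint is reloaded.
    if rank == 0:
        model = torch.nn.Sequential(torch.nn.Linear(4, 4))
    else:
        model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Linear(4, 4))

    # The DDP constructor verifies parameters across ranks and raises
    # "DDP expects same model across all ranks ..." here.
    DDP(model)


if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)
```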
minji-o-j commented 1 year ago

The problem goes away when the command is re-run.
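
One guess at why a plain re-run succeeds: if the ranks drift apart while the best checkpoint is reloaded before evaluation, they reach the DDP re-wrap in `trainer.evaluate` with different models. A defensive pattern, sketched below with standard Accelerate calls rather than the actual TextBox trainer code (the tiny Linear model and the temporary checkpoint path are placeholders), is to restore the same weights into the unwrapped module on every rank and synchronize before anything rebuilds DDP:

```python
import os
import tempfile

import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(8, 8))  # stand-in for the PTG model

# Save the "best" checkpoint from the main process only, then let everyone catch up.
ckpt = os.path.join(tempfile.gettempdir(), "checkpoint_best.pt")
if accelerator.is_main_process:
    accelerator.save(accelerator.unwrap_model(model).state_dict(), ckpt)
accelerator.wait_for_everyone()

# At evaluation time, every rank restores the same weights into the unwrapped module...
accelerator.unwrap_model(model).load_state_dict(torch.load(ckpt, map_location="cpu"))

# ...and synchronizes before the model is wrapped in DDP again, so the constructor's
# parameter check sees an identical model on all ranks instead of timing out.
accelerator.wait_for_everyone()
```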