XLabs-AI / x-flux


Error when training ControlNet with multi-GPU #107

Open 1goodone opened 2 months ago

1goodone commented 2 months ago

I launch training with the following command:

CUDA_VISIBLE_DEVICES=1,2 accelerate launch train_flux_deepspeed_controlnet.py --config "train_configs/test_canny_controlnet.yaml"

ERROR:

The following values were not passed to `accelerate launch` and had defaults used instead:
        `--num_processes` was set to a value of `2`
                More than one GPU was found, enabling multi-GPU training.
                If this was unintended please pass in `--num_processes=1`.
        `--num_machines` was set to a value of `1`
        `--mixed_precision` was set to a value of `'no'`
        `--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/accelerate/accelerator.py:401: UserWarning: `log_with=wandb` was passed but no supported trackers are currently installed.
  warnings.warn(f"`log_with={log_with}` was passed but no supported trackers are currently installed.")
09/09/2024 22:35:15 - INFO - __main__ - Distributed environment: MULTI_GPU  Backend: nccl
Num processes: 2
Process index: 0
Local process index: 0
Device: cuda:0

Mixed precision type: bf16

DEVICE cuda:0
/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/accelerate/accelerator.py:401: UserWarning: `log_with=wandb` was passed but no supported trackers are currently installed.
  warnings.warn(f"`log_with={log_with}` was passed but no supported trackers are currently installed.")
09/09/2024 22:35:17 - INFO - __main__ - Distributed environment: MULTI_GPU  Backend: nccl
Num processes: 2
Process index: 1
Local process index: 1
Device: cuda:1

Mixed precision type: bf16

DEVICE cuda:1
Loading checkpoint shards: 100%|████████████████████████████| 2/2 [00:00<00:00,  8.69it/s]
Loading checkpoint shards: 100%|████████████████████████████| 2/2 [00:00<00:00,  9.07it/s]
Init model
Loading checkpoint
Init model
Init AE
Loading checkpoint
Init AE
743.80728 parameters
743.80728 parameters
09/09/2024 22:35:40 - INFO - __main__ - ***** Running training *****
09/09/2024 22:35:40 - INFO - __main__ -   Num Epochs = 5
09/09/2024 22:35:40 - INFO - __main__ -   Instantaneous batch size per device = 3
09/09/2024 22:35:40 - INFO - __main__ -   Total train batch size (w. parallel, distributed & accumulation) = 12
09/09/2024 22:35:40 - INFO - __main__ -   Gradient Accumulation steps = 2
09/09/2024 22:35:40 - INFO - __main__ -   Total optimization steps = 10000
Checkpoint 'latest' does not exist. Starting a new training run.
Steps:   0%|               | 1/10000 [00:05<16:24:03,  5.90s/it, lr=2e-5, step_loss=0.449][rank1]: Traceback (most recent call last):
[rank1]:   File "/data/tcke/x-flux-main/train_flux_deepspeed_controlnet.py", line 316, in <module>
[rank1]:     main()
[rank1]:   File "/data/tcke/x-flux-main/train_flux_deepspeed_controlnet.py", line 227, in main
[rank1]:     block_res_samples = controlnet(
[rank1]:   File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank1]:     return self._call_impl(*args, **kwargs)
[rank1]:   File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
[rank1]:     return forward_call(*args, **kwargs)
[rank1]:   File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1632, in forward
[rank1]:     inputs, kwargs = self._pre_forward(*inputs, **kwargs)
[rank1]:   File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1523, in _pre_forward
[rank1]:     if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
[rank1]: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by 
[rank1]: making sure all `forward` function outputs participate in calculating loss. 
[rank1]: If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
[rank1]: Parameter indices which did not receive grad for rank 1: 58 59 60 61 62 63
[rank1]:  In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
[rank0]: Traceback (most recent call last):
[rank0]:   File "/data/tcke/x-flux-main/train_flux_deepspeed_controlnet.py", line 316, in <module>
[rank0]:     main()
[rank0]:   File "/data/tcke/x-flux-main/train_flux_deepspeed_controlnet.py", line 227, in main
[rank0]:     block_res_samples = controlnet(
[rank0]:   File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1632, in forward
[rank0]:     inputs, kwargs = self._pre_forward(*inputs, **kwargs)
[rank0]:   File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1523, in _pre_forward
[rank0]:     if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
[rank0]: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by 
[rank0]: making sure all `forward` function outputs participate in calculating loss. 
[rank0]: If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
[rank0]: Parameter indices which did not receive grad for rank 0: 58 59 60 61 62 63
[rank0]:  In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
Steps:   0%|               | 1/10000 [00:06<17:46:42,  6.40s/it, lr=2e-5, step_loss=0.449]
[rank0]:[W909 22:35:47.079195754 ProcessGroupNCCL.cpp:1168] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present,  but this warning has only been added since PyTorch 2.4 (function operator())
E0909 22:35:49.951745 139946843719488 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 4041574) of binary: /data/llj/conda_env/k_x_flux/bin/python
Traceback (most recent call last):
  File "/data/llj/conda_env/k_x_flux/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 46, in main
    args.func(args)
  File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1073, in launch_command
    multi_gpu_launcher(args)
  File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/accelerate/commands/launch.py", line 718, in multi_gpu_launcher
    distrib_run.run(args)
  File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/torch/distributed/run.py", line 892, in run
    elastic_launch(
  File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/data/llj/conda_env/k_x_flux/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
train_flux_deepspeed_controlnet.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2024-09-09_22:35:49
  host      : mlab-aicloud-prod-dgx0004
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 4041575)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-09-09_22:35:49
  host      : mlab-aicloud-prod-dgx0004
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 4041574)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
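
A minimal sketch of the change the DDP error itself suggests: pass find_unused_parameters=True to DistributedDataParallel through Accelerate's kwargs handlers. This is not the repository's actual code; it assumes the Accelerator is constructed directly in train_flux_deepspeed_controlnet.py, and the argument values below only mirror the log above.

from accelerate import Accelerator
from accelerate.utils import DistributedDataParallelKwargs

# Let DDP tolerate parameters that receive no gradient in a step
# (the log above reports indices 58-63 without grads on both ranks).
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)

accelerator = Accelerator(
    gradient_accumulation_steps=2,   # "Gradient Accumulation steps = 2" in the log
    mixed_precision="bf16",          # "Mixed precision type: bf16" in the log
    kwargs_handlers=[ddp_kwargs],    # applied when the ControlNet is wrapped by prepare()
)
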
ShuoZhang2003 commented 1 month ago

Hello, have you managed to resolve this issue? I'm encountering the same problem.

xhlin129 commented 1 month ago

Same problem! Could this be related to the environment or the PyTorch version?

Sainthousand commented 1 month ago

Same problem, so sad. Have you solved this?
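
For narrowing down which parameters miss gradients, the traceback above also mentions TORCH_DISTRIBUTED_DEBUG. A minimal sketch, assuming the variable is set before torch.distributed initializes (exporting it in the shell before accelerate launch works just as well):

import os

# Must be set before the process group is created, e.g. at the very top of the
# training script; "INFO" prints a summary, "DETAIL" adds per-parameter information.
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"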