The solution was super dumb. It's sufficient to follow what's stated in the error:

from accelerate import Accelerator, DistributedDataParallelKwargs

# Tell DDP to tolerate parameters that receive no gradient in a given step.
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(
    gradient_accumulation_steps=args.gradient_accumulation_steps,
    mixed_precision=args.mixed_precision,
    log_with=args.report_to,
    project_config=accelerator_project_config,
    kwargs_handlers=[ddp_kwargs],
)
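(For context: find_unused_parameters=True makes DDP traverse the autograd graph each step to detect parameters that received no gradient, so it silences this class of error at the cost of some per-step overhead.)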
Hello! Thanks for your code. I changed my code as you suggested, but I get this error:
Traceback (most recent call last):
  File "/home/xyf/Personalization/HyperDreamBooth/./train_preoptimized_liloras.py", line 1225, in <module>
    main(args)
  File "/home/xyf/Personalization/HyperDreamBooth/./train_preoptimized_liloras.py", line 1111, in main
    accelerator.backward(loss)
  File "/home/xyf/miniconda3/envs/py310/lib/python3.10/site-packages/accelerate/accelerator.py", line 1853, in backward
    loss.backward(**kwargs)
  File "/home/xyf/miniconda3/envs/py310/lib/python3.10/site-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "/home/xyf/miniconda3/envs/py310/lib/python3.10/site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons:
1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across
multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not
change during training loop.
2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap
the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes
multiple times, and hence marking a variable ready multiple times.
DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph
does not change over iterations.
Parameter at index 14999 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this
particular parameter during this iteration. You can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or
DETAIL to print parameter names for further debugging.
The error suggests that certain parameters are being used to compute the loss twice. Have you come across this issue before?
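(For reference, a minimal sketch of the static-graph workaround that the error message itself suggests, assuming the model has already been wrapped by accelerator.prepare into a DistributedDataParallel module; note that _set_static_graph() is a private PyTorch API, so treat it as a last resort:)

model = accelerator.prepare(model)  # returns a DDP-wrapped module on multi-GPU runs
if hasattr(model, "_set_static_graph"):
    # Declares that the autograd graph does not change across iterations,
    # letting DDP tolerate hooks firing more than once for the same parameter.
    model._set_static_graph()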
Nope, I didn't have this issue. Are you using the .sh file to launch the script? What version of torch are you using?
The DDP code works fine on a 3090, but it fails on an A100.
Hi! Thank you for the amazing work. Training on a single GPU is quite slow, and since the project uses accelerate, I was expecting it to run on multiple GPUs as well. However, after some small tweaks (removing the --put_in_cpu flag when training the preoptimized loras, and substituting hypernetwork.train_params() with hypernetwork.module.train_params()), I am stuck with this error. Am I doing something wrong, or is the code intended to run only on a single GPU?
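(Side note on the hypernetwork.module tweak mentioned above: Accelerate provides a helper that works whether or not the model is DDP-wrapped, so the code does not need to hard-code .module; a sketch, assuming hypernetwork was passed through accelerator.prepare:)

# unwrap_model returns the underlying module when `hypernetwork` is DDP-wrapped,
# and the model itself on single-GPU runs.
unwrapped = accelerator.unwrap_model(hypernetwork)
params = unwrapped.train_params()  # train_params() is this repo's own method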