THUDM / ChatGLM2-6B

ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型

[Help] RuntimeError when fine-tuning with LoRA: Expected to mark a variable ready only once. #645

Open wfllyzh opened 7 months ago

wfllyzh commented 7 months ago

Is there an existing issue for this?

Current Behavior

When fine-tuning with LoRA, running train.sh fails with the following error:

```
    main()
  File "/home/admin/atec_project/main.py", line 372, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/home/admin/atec_project/env_s/lib/python3.9/site-packages/transformers/trainer.py", line 1553, in train
    return inner_training_loop(
  File "/home/admin/atec_project/env_s/lib/python3.9/site-packages/transformers/trainer.py", line 1835, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/home/admin/atec_project/env_s/lib/python3.9/site-packages/transformers/trainer.py", line 2690, in training_step
    self.accelerator.backward(loss)
  File "/home/admin/atec_project/env_s/lib/python3.9/site-packages/accelerate/accelerator.py", line 1985, in backward
    loss.backward(**kwargs)
  File "/home/admin/atec_project/env_s/lib/python3.9/site-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "/home/admin/atec_project/env_s/lib/python3.9/site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/home/admin/atec_project/env_s/lib/python3.9/site-packages/torch/autograd/function.py", line 274, in apply
    return user_fn(self, *args)
  File "/home/admin/atec_project/env_s/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 157, in backward
    torch.autograd.backward(outputs_with_grad, args_with_grad)
  File "/home/admin/atec_project/env_s/lib/python3.9/site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons:
1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.
2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 55 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration. You can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print parameter names for further debugging.
```
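Two leads come straight from the error message itself: set `TORCH_DISTRIBUTED_DEBUG` to find out which parameter was marked ready twice, and, if the module graph really does not change between iterations, call `_set_static_graph()` on the DDP wrapper. The helper below is a hypothetical sketch (not code from this repo's `main.py`), shown only to illustrate where those two knobs would go:

```python
import os
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

# Ask DDP to print the name of the parameter that fired twice, as the error
# message suggests. Exporting this in the shell before launching train.sh is
# safer, so it is already set when the process group is initialized.
os.environ.setdefault("TORCH_DISTRIBUTED_DEBUG", "DETAIL")

def maybe_set_static_graph(model: torch.nn.Module) -> torch.nn.Module:
    # _set_static_graph() is the workaround named in the error message. It is
    # a private PyTorch API, only exists on a DDP-wrapped module, and is only
    # safe if the set of parameters used is identical on every iteration.
    if isinstance(model, DDP):
        model._set_static_graph()
    return model
```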

Expected Behavior

No response

Steps To Reproduce

Fine-tune with LoRA and run train.sh; training fails in loss.backward() with the same `RuntimeError: Expected to mark a variable ready only once.` traceback shown under Current Behavior above.
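A common trigger for this exact error when LoRA is combined with gradient checkpointing under DDP is the reentrant checkpoint implementation firing a second set of autograd hooks while `find_unused_parameters` is enabled. A minimal sketch, assuming the HF `Trainer` setup in `main.py`; the argument values below are illustrative, not the repo's actual configuration:

```python
from transformers import TrainingArguments

# Illustrative values only: keep gradient checkpointing, but stop DDP from
# registering the extra "unused parameter" hooks that can mark a LoRA
# parameter ready twice in one iteration.
training_args = TrainingArguments(
    output_dir="output",                  # placeholder path
    per_device_train_batch_size=1,
    gradient_checkpointing=True,
    ddp_find_unused_parameters=False,
)
```

Newer transformers releases also accept `gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})`, which avoids the reentrant backward entirely; the Transformers 4.33.2 listed under Environment may predate that option.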

Environment

- OS:
- Python: 3.9
- Transformers: 4.33.2
- PyTorch: 2.0.1
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): True

Anything else?

No response

hhy150 commented 6 months ago

Hello, may I ask which scripts or approaches can be used for LoRA fine-tuning here?
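For reference, a minimal sketch of attaching LoRA adapters to ChatGLM2-6B with the `peft` library; the target module name and hyperparameters are illustrative assumptions, not the configuration used in this issue:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModel

# Load the base model and wrap it with LoRA adapters; only the adapter
# weights remain trainable afterwards.
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # fused QKV projection in the GLM block
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```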