horseee / LLM-Pruner

[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.
https://arxiv.org/abs/2305.11627
Apache License 2.0

recover training #17

Open xcg940123 opened 1 year ago

xcg940123 commented 1 year ago

When I start the recovery training of Baichuan-7B, I hit this error:

Exception has occurred: RuntimeError
Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
  File "/opt/miniconda3/envs/flash/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
    output = module(*input, **kwargs)
  File "/opt/miniconda3/envs/flash/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jovyan/honor/xcg/LLM-Pruner-main/LLMPruner/peft/peft_model.py", line 664, in forward
    return self.base_model(
  File "/opt/miniconda3/envs/flash/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jovyan/honor/xcg/LLM-Pruner-main/LLMPruner/models/hf_baichuan/baichuan7B/modeling_baichuan_7B.py", line 610, in forward
    outputs = self.model(
  File "/opt/miniconda3/envs/flash/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jovyan/honor/xcg/LLM-Pruner-main/LLMPruner/models/hf_baichuan/baichuan7B/modeling_baichuan_7B.py", line 452, in forward
    inputs_embeds = self.embed_tokens(input_ids)
  File "/opt/miniconda3/envs/flash/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1212, in _call_impl
    result = forward_call(*input, **kwargs)
  File "/opt/miniconda3/envs/flash/lib/python3.9/site-packages/torch/nn/modules/sparse.py", line 160, in forward
    return F.embedding(
  File "/opt/miniconda3/envs/flash/lib/python3.9/site-packages/torch/nn/functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument index in method wrapper__index_select)

  File "/home/jovyan/honor/xcg/LLM-Pruner-main/post_training.py", line 199, in main
    trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
  File "/home/jovyan/honor/xcg/LLM-Pruner-main/post_training.py", line 244, in <module>
    main(args)

How can I fix it?

horseee commented 1 year ago

Hi. Please refer to this line:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!

Please check whether the model and the input are located on the same device.
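
For reference, here is a minimal sketch of that device check (the embedding module below is a stand-in for the pruned model, not LLM-Pruner code): read the device from the model's parameters and move the input tensors there before the forward pass.

```python
# Minimal sketch, not LLM-Pruner code: keep inputs on whatever device
# holds the model's weights so torch.embedding sees a single device.
import torch
import torch.nn as nn

model = nn.Embedding(1000, 64)                  # stand-in for the pruned model
if torch.cuda.is_available():
    model = model.to("cuda:0")

input_ids = torch.randint(0, 1000, (2, 8))      # created on CPU / another device

device = next(model.parameters()).device        # device of the model weights
input_ids = input_ids.to(device)                # align input with the model
out = model(input_ids)                          # no cross-device RuntimeError
print(out.shape, out.device)
```

Since the traceback says "Caught RuntimeError in replica 1 on device 1", the model appears to be wrapped in `nn.DataParallel`; in that setup the base model and the inputs should both start on the primary GPU (typically `cuda:0`), and `DataParallel` scatters the batch to the other devices itself. A quick way to confirm that device placement is the culprit is to restrict the run to a single GPU, e.g. `CUDA_VISIBLE_DEVICES=0 python post_training.py ...`.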