hiyouga / LLaMA-Factory

Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0
31.78k stars 3.9k forks

LoRA fine-tuning Qwen-14B: with safetensors enabled, training fails with OSError: No such device (os error 19); with safetensors disabled, inference on the merged model fails with the same error after merging the weights; inference works fine when the original model and the LoRA adapter are supplied together. #2013

Closed AEProgrammer closed 9 months ago

AEProgrammer commented 9 months ago

Reminder

Reproduction

Fine-tuning command:

```shell
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path qwen/Qwen-14B-Chat \
    --dataset mofang_qa \
    --template default \
    --finetuning_type lora \
    --lora_target c_attn \
    --output_dir /code/liuhui67/LLM_finetune/lora_model_dir/lora_qwen_14b_v1 \
    --overwrite_cache \
    --per_device_train_batch_size 10 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 100 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --flash_attn \
    --save_safetensors False
```

Merge command (note: the original post had `--template defalut` and a missing space before the line continuation after `tmp-checkpoint-100`; both are corrected here):

```shell
python src/export_model.py \
    --model_name_or_path /root/.cache/modelscope/hub/qwen/Qwen-14B-Chat \
    --adapter_name_or_path /code/liuhui67/LLM_finetune/lora_model_dir/lora_qwen_14b_v1/tmp-checkpoint-100 \
    --template default \
    --finetuning_type lora \
    --export_dir /code/liuhui67/LLM_finetune/merged_model/merged_qwen_14b_v1
```

Inference command:

```shell
python src/cli_demo.py \
    --model_name_or_path /code/liuhui67/LLM_finetune/merged_model/merged_qwen_14b_v1 \
    --template default \
    --finetuning_type lora
```
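Per the issue description, inference does succeed when the base model and the LoRA adapter are passed together instead of the merged model. A sketch of that working invocation, reusing the paths from the commands above (the exact adapter checkpoint path is assumed):

```shell
python src/cli_demo.py \
    --model_name_or_path /root/.cache/modelscope/hub/qwen/Qwen-14B-Chat \
    --adapter_name_or_path /code/liuhui67/LLM_finetune/lora_model_dir/lora_qwen_14b_v1/tmp-checkpoint-100 \
    --template default \
    --finetuning_type lora
```

This path avoids loading the merged safetensors shards, which is consistent with the error appearing only when the merged model is loaded.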

Expected behavior

LoRA fine-tuning Qwen-14B: with safetensors enabled, training fails with OSError: No such device (os error 19); with safetensors disabled, inference on the merged model fails with the same error after merging the weights; inference works fine when the original model and the LoRA adapter are supplied together.

System Info

Others

No response

AEProgrammer commented 9 months ago

The error message is as follows:

```
Loading checkpoint shards:   0%|          | 0/29 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/code/liuhui67/LLM_finetune/src/cli_demo.py", line 47, in <module>
    main()
  File "/code/liuhui67/LLM_finetune/src/cli_demo.py", line 13, in main
    chat_model = ChatModel()
  File "/code/liuhui67/LLM_finetune/src/llmtuner/chat/chat_model.py", line 27, in __init__
    self.model, self.tokenizer = load_model_and_tokenizer(
  File "/code/liuhui67/LLM_finetune/src/llmtuner/model/loader.py", line 87, in load_model_and_tokenizer
    model = AutoModelForCausalLM.from_pretrained(
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3706, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/transformers/modeling_utils.py", line 4091, in _load_pretrained_model
    state_dict = load_state_dict(shard_file)
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/transformers/modeling_utils.py", line 503, in load_state_dict
    with safe_open(checkpoint_file, framework="pt") as f:
OSError: No such device (os error 19)
```
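A likely root cause, offered here as an assumption rather than something confirmed in the thread: safetensors' `safe_open()` memory-maps the checkpoint file, and `mmap` fails with ENODEV ("No such device", os error 19) on filesystems that do not support memory mapping, such as some network or FUSE mounts. A minimal probe to check whether files in a given directory can be memory-mapped (the function name `supports_mmap` is illustrative, not part of any library):

```python
import mmap
import os
import tempfile


def supports_mmap(directory: str) -> bool:
    """Return True if files in `directory` can be memory-mapped.

    safetensors loads checkpoints via mmap; on filesystems without
    mmap support the call raises OSError with errno 19 (ENODEV),
    matching the "No such device" error in the traceback above.
    """
    probe = os.path.join(directory, ".mmap_probe")
    with open(probe, "wb") as f:
        f.write(b"\0")  # mmap requires a non-empty file
    try:
        with open(probe, "rb") as f:
            # ACCESS_READ mirrors how a checkpoint is read at load time
            with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ):
                return True
    except OSError:
        return False
    finally:
        os.remove(probe)
```

If this returns False for the directory holding the checkpoint shards, moving the files to a local disk (or avoiding safetensors, as below) should sidestep the error.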

fangcao1314 commented 9 months ago

Same error here.

hiyouga commented 9 months ago

Specify `--export_legacy_format` when merging.
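A sketch of the merge command with that flag added, reusing the paths from the reproduction above (and with the `--template` typo corrected):

```shell
python src/export_model.py \
    --model_name_or_path /root/.cache/modelscope/hub/qwen/Qwen-14B-Chat \
    --adapter_name_or_path /code/liuhui67/LLM_finetune/lora_model_dir/lora_qwen_14b_v1/tmp-checkpoint-100 \
    --template default \
    --finetuning_type lora \
    --export_dir /code/liuhui67/LLM_finetune/merged_model/merged_qwen_14b_v1 \
    --export_legacy_format
```

With `--export_legacy_format`, the merged weights are saved as PyTorch `.bin` files rather than safetensors, so loading the merged model no longer goes through the mmap-based safetensors reader that triggered the error.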