QwenLM / Qwen-VL

The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision-language model proposed by Alibaba Cloud.

[BUG] Unable to load trained LoRA model weights using AutoPeftModelForCausalLM.from_pretrained() #379

Open jweihe opened 4 months ago

jweihe commented 4 months ago

是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?

该问题是否在FAQ中有解答? | Is there an existing answer for this in the FAQ?

当前行为 | Current Behavior

I encountered an issue while trying to load the weights of a trained LoRA model using the `AutoPeftModelForCausalLM.from_pretrained()` method. The error message indicates that the `tokenization_qwen.py` file could not be located in the specified checkpoint directory:

```
Could not locate the tokenization_qwen.py inside output_model_qwen_hr-llm-vit/checkpoint
OSError: output_model_qwen_hr-llm-vit/checkpoint does not appear to have a file named tokenization_qwen.py.
```

The code I used to load the model is:

```python
model = AutoPeftModelForCausalLM.from_pretrained(args.checkpoint, device_map='cuda', trust_remote_code=True).eval()
```

I would appreciate any guidance on how to resolve this issue and successfully load the trained LoRA model weights. (A possible workaround is sketched after the issue template below.)

期望行为 | Expected Behavior

No response

复现方法 | Steps To Reproduce

No response

运行环境 | Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

备注 | Anything else?

No response
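
One workaround sketch follows. It rests on the (unverified here) assumption that the finetune script saves only the adapter weights and tokenizer config into the checkpoint directory, while the custom code files (`tokenization_qwen.py`, etc.) that `trust_remote_code=True` looks for stay with the base model. Loading the base model and tokenizer from their original path and attaching the adapter with `PeftModel` sidesteps the lookup; the base path `Qwen/Qwen-VL-Chat` is a placeholder for wherever your base model actually lives:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "Qwen/Qwen-VL-Chat"  # placeholder: wherever the base model lives
adapter_path = "output_model_qwen_hr-llm-vit/checkpoint"  # checkpoint path from the issue

# The custom tokenization_qwen.py ships with the base model, so load the
# tokenizer from there instead of from the LoRA checkpoint directory.
tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)

# Load the base model first, then attach the LoRA adapter on top of it.
model = AutoModelForCausalLM.from_pretrained(
    base_path, device_map="cuda", trust_remote_code=True
)
model = PeftModel.from_pretrained(model, adapter_path)
model = model.eval()
```

Alternatively, copying the base model's custom `*.py` files (including `tokenization_qwen.py`) into the checkpoint directory should let the original `AutoPeftModelForCausalLM.from_pretrained()` call find them.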

elesun2018 commented 3 months ago

How does `AutoPeftModelForCausalLM` know the path of the base model that the LoRA weights should be merged into? And how can one check that the model obtained after merging the LoRA weights is working correctly? Thanks.
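
As far as I understand (an assumption based on how PEFT generally resolves adapters, not confirmed in this thread), `AutoPeftModelForCausalLM` reads the `base_model_name_or_path` field from the `adapter_config.json` saved in the checkpoint directory and loads the base model from there before attaching the adapter. Below is a minimal sketch of how to inspect that field and smoke-test a merged model; the paths and the test prompt are placeholders:

```python
import json

from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_path = "output_model_qwen_hr-llm-vit/checkpoint"  # placeholder path from this issue

# The base model path is recorded in adapter_config.json when the adapter is
# saved; this is what AutoPeftModelForCausalLM uses to locate the base weights.
with open(f"{adapter_path}/adapter_config.json") as f:
    base_path = json.load(f)["base_model_name_or_path"]
print("base model:", base_path)

# Load base model + adapter (assumes tokenization_qwen.py is findable, e.g.
# after applying the workaround above), then fold the LoRA deltas into the
# base weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_path, device_map="cuda", trust_remote_code=True
).eval()
merged = model.merge_and_unload()

# Smoke test: the merged model should produce sensible output on a prompt from
# the finetuning domain; compare against the un-merged adapter model if in doubt.
tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)
inputs = tokenizer("你好，请介绍一下这张图片。", return_tensors="pt").to(merged.device)
print(tokenizer.decode(merged.generate(**inputs, max_new_tokens=32)[0]))
```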