QwenLM / Qwen-VL

The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud.

TypeError: _BaseAutoPeftModel.from_pretrained() missing 1 required positional argument: 'pretrained_model_name_or_path' #330

Open Egber1t opened 8 months ago

Egber1t commented 8 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

After fine-tuning completes successfully, both the eval step and the merge step fail with an error saying `pretrained_model_name_or_path` is missing.

Expected Behavior

No response

Steps To Reproduce

```python
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path='/root/autodl-tmp/Qwen-VL-Chat',
    path_to_adapter='/root/autodl-tmp/Qwen-VL/output_qwen',  # path to the output directory
    device_map="auto",
    trust_remote_code=True,
).eval()
```
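A likely cause, assuming peft's documented API: `AutoPeftModelForCausalLM.from_pretrained()` has no `path_to_adapter` parameter. In the Qwen-VL README, `path_to_adapter` is a plain Python variable holding the adapter directory, passed as the first positional argument, which fills `pretrained_model_name_or_path`; peft then reads the base model location from the adapter's `adapter_config.json`. A sketch of the call as the README intends it (the path below is taken from the report and assumed to contain the saved adapter):

```python
from peft import AutoPeftModelForCausalLM

# Hypothetical fix: the adapter directory goes in as the first positional
# argument, satisfying `pretrained_model_name_or_path`. There is no
# `path_to_adapter` keyword in peft's from_pretrained().
path_to_adapter = '/root/autodl-tmp/Qwen-VL/output_qwen'

model = AutoPeftModelForCausalLM.from_pretrained(
    path_to_adapter,        # fills pretrained_model_name_or_path
    device_map="auto",
    trust_remote_code=True,
).eval()
```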

运行环境 | Environment

- OS: Linux
- Python:3.10
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

elesun2018 commented 5 months ago

The LoRA model merge example in the README:

```python
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    path_to_adapter,  # path to the output directory
    device_map="auto",
    trust_remote_code=True,
).eval()

merged_model = model.merge_and_unload()
```

does not involve `pretrained_model_name_or_path` at all?
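It does, implicitly: in the README snippet, `path_to_adapter` is a variable passed positionally, so it binds to the required `pretrained_model_name_or_path` parameter. A pure-Python sketch (no peft needed, using a hypothetical stand-in class) of how that binding works and how omitting the argument reproduces the reported `TypeError`:

```python
# Hypothetical stand-in for peft's AutoPeftModelForCausalLM, only to
# illustrate the signature `from_pretrained(pretrained_model_name_or_path, **kwargs)`.
class AutoPeftModelStub:
    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
        return pretrained_model_name_or_path, kwargs

# README style: the adapter path is the first positional argument,
# so `pretrained_model_name_or_path` is satisfied without naming it.
path_to_adapter = "output_qwen"
path, kwargs = AutoPeftModelStub.from_pretrained(
    path_to_adapter, device_map="auto", trust_remote_code=True
)
print(path)  # → output_qwen

# Omitting the positional argument entirely raises the reported error:
try:
    AutoPeftModelStub.from_pretrained(device_map="auto")
except TypeError as exc:
    print(exc)  # missing 1 required positional argument: 'pretrained_model_name_or_path'
```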