THUDM / GLM-4

GLM-4 series: Open Multilingual Multimodal Chat LMs | 开源多语言多模态对话模型
Apache License 2.0

Inference fails after P-Tuning v2 fine-tuning: PEFT reports that prompt learning is not supported #303

Closed cb4ever closed 3 months ago

cb4ever commented 3 months ago

System Info

After fine-tuning with P-Tuning v2 using the code in finetune_demo, running inference with the fine-tuned checkpoint fails with:

File "/root/miniconda3/lib/python3.12/site-packages/peft/mapping.py", line 169, in inject_adapter_in_model
    raise ValueError("create_and_replace does not support prompt learning and adaption prompt yet.")

Who can help?

No response

Information

Reproduction

1. Using my own data, modify some parameters in ptuning-v2.yaml, e.g. max_input_length: 512 and max_output_length: 512.
2. Train with finetune.py for 5000 steps and save the checkpoint.
3. Run inference.py with model_dir='output/checkpoint-5000'. This produces the error above.
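For reference, the overrides described in step 1 would look like this in the demo's ptuning-v2.yaml (only the two keys mentioned above; any other settings are left at the demo defaults):

```yaml
# Overrides applied to finetune_demo's ptuning-v2.yaml
max_input_length: 512
max_output_length: 512
```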

Expected behavior

Inference should run normally.

cb4ever commented 3 months ago

Thanks a lot!


Closed #303 as completed via da2fe37.
