QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Apache License 2.0

[BUG] After LoRA fine-tuning, the adapter was merged into a single model. How do I load it and run inference? #1220

Closed wangyao123456a closed 1 month ago

wangyao123456a commented 2 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

No response

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

jklj077 commented 2 months ago

Load it the same way you would load the base pretrained model. See https://github.com/QwenLM/Qwen#quickstart for a quick start.
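For reference, a minimal sketch of what "load it like the base model" looks like. The directory name `path_to_merged_model` and the helper `load_merged_model` are placeholders, not part of the Qwen API; `trust_remote_code=True` is needed because Qwen ships custom modeling code with the checkpoint:

```python
# Minimal sketch: load a LoRA-merged Qwen checkpoint exactly like the
# base pretrained model. "path_to_merged_model" is a placeholder path.
from transformers import AutoModelForCausalLM, AutoTokenizer


def load_merged_model(path: str):
    """Load tokenizer and model from a merged (standalone) checkpoint."""
    tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        path,
        device_map="auto",       # place layers across available devices
        trust_remote_code=True,  # Qwen uses custom modeling code
    ).eval()
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_merged_model("path_to_merged_model")
    # Qwen's chat interface, per the repo Quickstart
    response, _history = model.chat(tokenizer, "Hello", history=None)
    print(response)
```

The key point is that no `peft` code is involved at inference time: a properly merged checkpoint is loaded as an ordinary `transformers` model.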

wangyao123456a commented 2 months ago

@jklj077 But when I run it following your suggestion, it reports that the LoRA parameters were not loaded. Do the config files need to be modified accordingly?

jklj077 commented 2 months ago

Show your code.

github-actions[bot] commented 1 month ago

This issue has been automatically marked as inactive due to lack of recent activity. Should you believe it remains unresolved and warrants attention, kindly leave a comment on this thread.