QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Apache License 2.0

After LoRA fine-tuning and merging, the model's answers are not from my prepared dataset #1136

Closed · xiaohaiqing closed this 6 months ago

xiaohaiqing commented 6 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

No response

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

After LoRA fine-tuning and merging, I run inference as follows:

from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Load the tokenizer and the model from the local checkpoint directory
tokenizer = AutoTokenizer.from_pretrained("qwen-7b-chat-int4", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "qwen-7b-chat-int4", device_map="auto", trust_remote_code=True
).eval()

# Single-turn chat with an empty history
response, history = model.chat(tokenizer, "类型#上衣*材质#牛仔布*颜色#白色*风格#简约*图案#刺绣*衣样式#外套*衣款式#破洞", history=None)
print(response)
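
For context, merging a LoRA adapter into the base weights is typically done with peft before loading the merged checkpoint as above. A minimal sketch, assuming a hypothetical adapter directory output_qwen and output path qwen-7b-chat-merged (both names are illustrative, not taken from this issue):

from peft import AutoPeftModelForCausalLM

# Load the base model together with the trained LoRA adapter,
# fold the adapter weights into the base weights, and save a
# standalone checkpoint that can be loaded as shown above.
model = AutoPeftModelForCausalLM.from_pretrained(
    "output_qwen",  # hypothetical adapter/checkpoint directory
    device_map="auto",
    trust_remote_code=True,
).eval()
merged = model.merge_and_unload()
merged.save_pretrained("qwen-7b-chat-merged", safe_serialization=True)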

The answers the model gives are not the ones defined in my dataset. What could be the reason?

Pierre-Wong commented 6 months ago

In my case, I had too little data and too large a batch size, so fine-tuning had no visible effect. After I set gradient accumulation to 1, the problem went away.
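
For reference, the effective batch size is the per-device batch size times the number of gradient accumulation steps (times the number of GPUs), so on a tiny dataset a large accumulation value leaves very few optimizer steps per epoch. A minimal sketch with Hugging Face TrainingArguments; the values are illustrative, not taken from this thread:

from transformers import TrainingArguments

# Effective batch size = per_device_train_batch_size
#                        * gradient_accumulation_steps * num_gpus.
# On a small dataset, keep this product small so the optimizer
# still performs enough update steps for the data to take effect.
args = TrainingArguments(
    output_dir="output_qwen",        # hypothetical output directory
    num_train_epochs=5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=1,   # reduced from a larger value
    learning_rate=3e-4,
)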