QwenLM / Qwen-VL

The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision-language model proposed by Alibaba Cloud.

[BUG] <title> #475

Closed daje0601 closed 1 month ago

daje0601 commented 1 month ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

I read the FAQ and all the GitHub issues in order to fine-tune with LoRA. Training completed normally and the model was saved to the output_qwen folder, as shown in the image below.

(screenshot: contents of the output_qwen folder)

While loading the model with the code you provided, I hit the following error: `AttributeError: 'QWenTokenizer' object has no attribute 'IMAGE_ST'`. I have no idea why this is happening, so I'm reaching out to you.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Tokenizer comes from the base Qwen-VL-Chat repo
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)

# Load the base model plus the LoRA adapter saved in ./output_qwen
model = AutoPeftModelForCausalLM.from_pretrained(
    "./output_qwen",
    device_map="auto",
    trust_remote_code=True,
    revision="master",
).eval()
```

Please have mercy on me 🙏🙏🙏

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python: 3.9
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.4

Anything else?

No response

daje0601 commented 1 month ago

I solved it; it was my mistake.

When I installed TRL, it changed my transformers version. I reverted transformers to the original version and it works fine.
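The root cause here, a dependency install (TRL) silently upgrading transformers, is a common failure mode with `trust_remote_code` models, whose remote modeling code often targets one specific transformers release. As a minimal sketch of a safeguard (the `4.32.0` pin below is an assumption for illustration; pin whatever version your fine-tune actually used), you can compare the installed version against a pin before loading:

```python
# Hypothetical guard: fail fast if the installed transformers version
# does not match the version the fine-tuning run was made with.

def parse_version(v):
    """Split a version string like '4.44.2' into a tuple of ints,
    ignoring any local/build suffix (e.g. '2.4.1+cu121')."""
    core = v.split("+")[0]
    return tuple(int(part) for part in core.split(".") if part.isdigit())

def check_pin(installed, pinned):
    """Return True only when the installed version matches the pin exactly."""
    return parse_version(installed) == parse_version(pinned)

# Example: 4.44.2 (pulled in by a TRL install) does not match a 4.32.0 pin.
assert not check_pin("4.44.2", "4.32.0")
assert check_pin("4.32.0", "4.32.0")
```

In practice you would call `check_pin(transformers.__version__, PIN)` right before `from_pretrained` and raise a clear error on mismatch, which turns a cryptic `AttributeError` deep in remote code into an actionable message.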