QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Apache License 2.0

Loading the 7B model with AutoModelForCausalLM, calling chat_stream raises ValueError: too many values to unpack (expected 2) #1009

Closed kunzeng-ch closed 6 months ago

kunzeng-ch commented 8 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in FAQ?

Current Behavior

```python
tokenizer = AutoTokenizer.from_pretrained("Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen-7B-Chat",
    quantization_config=quantization_config,
    device_map="cuda:1",
    trust_remote_code=True,
    fp16=True,
).eval()
```

Calling `model.chat_stream(self.tokenizer, prompt_1, history=None, stop_words_ids=react_stop_words_tokens)` immediately raises `ValueError: too many values to unpack (expected 2)`.

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

jklj077 commented 8 months ago

In the code you provided, what is the `self` in `self.tokenizer`?

jklj077 commented 6 months ago

As the error log is incomplete and reproduction is also not possible, we are unable to provide assistance.

Here are the general troubleshooting instructions:

  1. Update your environment and try the latest code.
  2. Our best guess is that you are using the example code from langchain_tooluse.ipynb but changed `model.chat` to `model.chat_stream` on your own, thus causing the error. Please note that `model.chat_stream` and `model.chat` have different return types.
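For illustration, the difference in return types can be sketched with minimal stubs. The function bodies below are hypothetical stand-ins (running the real model requires the weights and a GPU); only the return shapes mirror the real methods: `chat` returns a `(response, history)` tuple, while `chat_stream` returns a generator yielding partial responses.

```python
# Stub mimicking model.chat: returns a (response, history) tuple,
# so two-value unpacking works as expected.
def chat(tokenizer, query, history=None):
    response = "hello"  # stand-in for the generated reply
    history = (history or []) + [(query, response)]
    return response, history

# Stub mimicking model.chat_stream: a generator that yields the
# growing partial response, one chunk at a time.
def chat_stream(tokenizer, query, history=None):
    for partial in ("he", "hel", "hello"):
        yield partial

# Correct usage of each API:
response, history = chat(None, "hi")      # tuple unpacking is fine here
for chunk in chat_stream(None, "hi"):     # a stream must be iterated,
    print(chunk)                          # not unpacked into two names

# Incorrect usage reproducing the reported error: unpacking the
# generator itself raises ValueError: too many values to unpack (expected 2).
# response, history = chat_stream(None, "hi")
```

Swapping `model.chat` for `model.chat_stream` without also switching from tuple unpacking to iteration is exactly the pattern that produces the reported traceback.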