kunzeng-ch closed this issue 6 months ago.
In the provided code, what does the `self` in `self.tokenizer` refer to?
As the error log is incomplete and reproduction is not possible, we are unable to provide assistance.
Here are the general troubleshooting notes: it appears you followed langchain_tooluse.ipynb but changed `model.chat` to `model.chat_stream` on your own, which causes the error. Please note that `model.chat_stream` and `model.chat` have different return types.
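The return-type difference can be illustrated with a minimal sketch. The two functions below are hypothetical stand-ins, not Qwen's actual implementation: the assumption (consistent with the error above) is that `chat` returns a `(response, history)` tuple, while `chat_stream` returns a generator of partial responses, so unpacking its result into two variables fails.

```python
# Hypothetical stand-ins illustrating the return-type difference;
# not Qwen's real implementation.

def chat(tokenizer, query, history=None):
    # chat-style API: returns a (response, history) tuple
    history = (history or []) + [(query, "hi there")]
    return "hi there", history

def chat_stream(tokenizer, query, history=None):
    # chat_stream-style API: a generator yielding growing partial responses
    for partial in ["hi", "hi th", "hi there"]:
        yield partial

# Tuple unpacking works for the chat-style call:
response, history = chat(None, "hello")

# The same unpacking on the stream-style call fails, because the
# generator yields more than two items:
try:
    response, history = chat_stream(None, "hello")
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)

# The correct pattern for a streaming call is to iterate:
for partial in chat_stream(None, "hello"):
    last = partial
```

In other words, a streaming result must be consumed with a `for` loop (each iteration giving the response so far), not destructured like the non-streaming tuple.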
Is there an existing issue / discussion for this?
Is there an existing answer for this in FAQ?
Current Behavior
```python
tokenizer = AutoTokenizer.from_pretrained("Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen-7B-Chat",
    quantization_config=quantization_config,
    device_map="cuda:1",
    trust_remote_code=True,
    fp16=True,
).eval()
```

Calling `model.chat_stream(self.tokenizer, prompt_1, history=None, stop_words_ids=react_stop_words_tokens)` immediately raises `ValueError: too many values to unpack (expected 2)`.
Expected Behavior
No response
Steps To Reproduce
No response
Environment
Anything else?
No response