QwenLM / Qwen

The official repo of Qwen (通义千问), the chat and pretrained large language model proposed by Alibaba Cloud.

[BUG] Error when running inference with model.chat() #1039

Closed: 128Ghe980 closed this issue 8 months ago

128Ghe980 commented 8 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

Single machine with 8x A100 GPUs, using one card. I wrote a short test following the official usage:

```python
import os
import json
import torch
import random
import hashlib
import time
import requests
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import GenerationConfig

os.environ['CUDA_VISIBLE_DEVICES'] = '0'

# model_path points at the local checkpoint (defined elsewhere)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", trust_remote_code=True
).eval()

# model.chat() returns (response, history)
res_test, _ = model.chat(tokenizer, '什么事', history=None)
print(res_test)
```

It then fails with:

```
Traceback (most recent call last):
  File "/home/tione/notebook/code/Qwen-main/output_qwen/eval_summary.py", line 44, in <module>
    res_test, _ = model.chat(tokenizer, '什么事', history=None)
  File "/root/.cache/huggingface/modules/transformers_modules/output_qwen/modeling_qwen.py", line 1137, in chat
    outputs = self.generate(
  File "/root/.cache/huggingface/modules/transformers_modules/output_qwen/modeling_qwen.py", line 1259, in generate
    return super().generate(
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 1642, in generate
    return self.sample(
  File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 2724, in sample
    outputs = self(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/output_qwen/modeling_qwen.py", line 1043, in forward
    transformer_outputs = self.transformer(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/output_qwen/modeling_qwen.py", line 891, in forward
    outputs = block(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/output_qwen/modeling_qwen.py", line 610, in forward
    attn_outputs = self.attn(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/output_qwen/modeling_qwen.py", line 418, in forward
    query, key, value = mixed_x_layer.split(self.split_size, dim=2)
ValueError: not enough values to unpack (expected 3, got 2)
```
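The mechanics of the failure are plain tensor semantics: `Tensor.split(split_size, dim)` returns `ceil(dim_size / split_size)` chunks, so unpacking into three names only works when the last dimension of the `c_attn` output is exactly `3 * split_size`. A minimal sketch with hypothetical shapes reproduces the message:

```python
import torch

split_size = 4

# Last dim is exactly 3 * split_size -> split() yields 3 chunks.
ok = torch.randn(1, 8, 3 * split_size)
q, k, v = ok.split(split_size, dim=2)  # unpacks fine

# A narrower projection (here 2 * split_size) yields only 2 chunks,
# so the 3-way unpack raises exactly the error in the traceback.
bad = torch.randn(1, 8, 2 * split_size)
try:
    q, k, v = bad.split(split_size, dim=2)
except ValueError as e:
    print(e)  # not enough values to unpack (expected 3, got 2)
```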

What is causing this problem?
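Since the traceback points at the 3-way split of the `c_attn` output, one way to narrow this down is to compare the loaded projection width against the config; a fine-tuned checkpoint in `output_qwen` whose `c_attn` weights don't match the attention code would produce exactly this shape error. A hedged diagnostic sketch, assuming the stock Qwen remote-code module layout (`QWenLMHeadModel -> transformer -> h[i] -> attn -> c_attn`):

```python
# Sketch only: attribute paths assume the stock modeling_qwen.py.
cfg = model.config
c_attn = model.transformer.h[0].attn.c_attn
print("hidden_size:", cfg.hidden_size)
print("c_attn out_features:", c_attn.weight.shape[0])
# Expected: out_features == 3 * hidden_size; anything else means the
# checkpoint weights do not match the code doing the query/key/value split.
```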

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python: 3.10.6
- Transformers: 4.32.0
- PyTorch: 2.1.0a0+b5021ba
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.1

Anything else?

No response