QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Apache License 2.0
12.47k stars · 1.01k forks

[BUG] Source code error found during model.generate #1231

Closed malidong521 closed 2 months ago

malidong521 commented 2 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

At line 523 of modeling_qwen.py:

```python
if not self.use_cache_quantization and SUPPORT_TORCH2:
    if attention_mask is not None:
        attention_mask = attention_mask.expand(
            -1, -1, causal_mask.size(2), -1
        )
        if causal_mask is not None:
            attention_mask = attention_mask.masked_fill(~causal_mask, torch.finfo(query.dtype).min)
    else:
        attention_mask = causal_mask
```

Expected Behavior

```python
if not self.use_cache_quantization and SUPPORT_TORCH2:
    if attention_mask is not None:
        attention_mask = attention_mask.expand(
            -1, -1, query.size(2), -1
        )
        if causal_mask is not None:
            attention_mask = attention_mask.masked_fill(~causal_mask, torch.finfo(query.dtype).min)
    else:
        attention_mask = causal_mask
```

Steps To Reproduce

Whenever Qwen-7B-Chat's model.generate method is called, it raises `AttributeError: 'NoneType' object has no attribute 'size'`. The source logic is flawed: the `is None` check for the `causal_mask` parameter sits below the call to `causal_mask.size`, so the error is triggered whenever `causal_mask` is None. After investigation, `causal_mask.size` should be replaced with `query.size`.
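For context, a minimal stand-alone sketch of the failure mode described above. `FakeTensor` is a hypothetical stand-in for `torch.Tensor` (exposing only `.size(dim)`), not the actual class; the two helper functions are illustrative, not names from modeling_qwen.py:

```python
class FakeTensor:
    """Hypothetical stand-in for torch.Tensor exposing only .size(dim)."""
    def __init__(self, *shape):
        self._shape = shape

    def size(self, dim):
        return self._shape[dim]


def target_len_buggy(query, causal_mask):
    # Original order: causal_mask is dereferenced before any None check,
    # so a None causal_mask crashes with AttributeError.
    return causal_mask.size(2)


def target_len_fixed(query, causal_mask):
    # Proposed fix: read the sequence length from query, which always
    # exists; causal_mask may legitimately be None during generation.
    return query.size(2)


query = FakeTensor(1, 8, 16, 64)  # (batch, heads, seq_len, head_dim)

try:
    target_len_buggy(query, None)
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'size'

print(target_len_fixed(query, None))  # 16
```

Since both tensors share the same sequence-length dimension when `causal_mask` is present, reading it from `query` is equivalent in the working case and also safe in the None case.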

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

jklj077 commented 2 months ago

It should have been fixed a long time ago. See for example: https://huggingface.co/Qwen/Qwen-7B-Chat/blob/main/modeling_qwen.py#L521-L531

Please try pulling the latest HuggingFace repo.

malidong521 commented 2 months ago

> It should have been fixed a long time ago. See for example: https://huggingface.co/Qwen/Qwen-7B-Chat/blob/main/modeling_qwen.py#L521-L531
>
> Please try pulling the latest HuggingFace repo.

OK, thanks!