QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Apache License 2.0

[BUG] max_new_tokens passed to qwen model.generate has no effect; generated output still exceeds the maximum length #1029

Closed xiaoduozhou closed 8 months ago

xiaoduozhou commented 8 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in FAQ?

Current Behavior

```python
batch_out_ids = self.model.generate(
    batch_input_ids,
    stop_words_ids=stop_words_ids,
    return_dict_in_generate=False,
    generation_config=self.model.generation_config,
    max_new_tokens=25,
)
```

max_new_tokens has no effect; the generated output is still longer than 25 characters.

Expected Behavior

Is there a way to limit the maximum generation length?

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

jklj077 commented 8 months ago

Change max_new_tokens in the generation_config. Also note that characters ≠ tokens.
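A minimal sketch of the suggested fix, assuming the standard Hugging Face transformers `GenerationConfig` API (the limit of 25 comes from the issue; the commented-out `model.generate` call is illustrative, not the issue author's exact code):

```python
from transformers import GenerationConfig

# Set the limit on the generation config object itself rather than
# passing max_new_tokens as a keyword argument to generate(), as the
# maintainer suggests.
gen_cfg = GenerationConfig(max_new_tokens=25)

# Equivalently, mutate the config already attached to a loaded model:
#     model.generation_config.max_new_tokens = 25
# and then generate with that config:
#     out = model.generate(batch_input_ids, generation_config=gen_cfg)
print(gen_cfg.max_new_tokens)
```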

xiaoduozhou commented 8 months ago

> Change max_new_tokens in the generation_config. Also note that characters ≠ tokens.

That was indeed the problem. Changing max_new_tokens in the generation_config does take effect. With Chinese tokenization, several characters may map to a single token, so the constraint is actually being respected.
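The character-vs-token distinction behind this thread can be shown with a toy example (the token strings below are made up for illustration; a real BPE tokenizer such as Qwen's produces different splits):

```python
# Each token typically decodes to more than one character, so a limit of
# N new tokens can easily yield well over N characters of text.
tokens = ["Hello", ",", " how", " are", " you", "?"]  # 6 hypothetical tokens
text = "".join(tokens)

print(len(tokens))  # token count: 6
print(len(text))    # character count: 19, larger than the token count
```

This is why output capped at 25 new tokens can still be "longer than 25 characters" while the constraint is working correctly.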