QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.

How to control the number of generated tokens #1145

Closed Csy03 closed 5 months ago

Csy03 commented 6 months ago

As the title says... Has anyone figured out where in the Qwen model you can make the number of tokens it outputs per generation a fixed value?

jklj077 commented 6 months ago

I don't think that is possible. You could instead set the maximum number of new tokens in generation_config.json. Also, I'm curious: why do you need the generated token length to be a fixed number? What is the actual scenario?
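For reference, a minimal sketch of capping the output length with the Hugging Face transformers API (the `Qwen/Qwen-7B-Chat` checkpoint name and the prompt here are illustrative assumptions; adjust them for your setup). Note that `max_new_tokens` is an upper bound on the generated length, not a way to force an exact token count:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint name for illustration; point this at your own model path.
model_name = "Qwen/Qwen-7B-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", trust_remote_code=True
).eval()

# Option 1: edit generation_config.json in the checkpoint directory, e.g.
#   "max_new_tokens": 128
# Option 2: override it per call, as below.
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```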

Csy03 commented 6 months ago

Thanks for your advice! I've tried changing max_new_tokens, but maybe I did it the wrong way and it didn't help... And I'm sorry, I don't know exactly what the actual scenario is, so I'm still trying.


github-actions[bot] commented 5 months ago

This issue has been automatically marked as inactive due to lack of recent activity. Should you believe it remains unresolved and warrants attention, kindly leave a comment on this thread.