Closed Dreamry2C closed 1 month ago
Title: [Bug] An error occurs when using the gpt-4o model
https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/blob/f55f04ab4f23fc5a5ac84aba0d2d926c557816c3/app/client/platforms/openai.ts#L119-L134
Requests that omit max_tokens may cause this issue: aiproxy.io may substitute the model's maximum for the parameter when the value is absent (see the aiproxy documentation: https://docs.aiproxy.io/guide/deduction).
My temporary fix is to mimic the vision-model branch at L131 and add a max_tokens clause for 4o:
if (modelConfig.model.includes("4o")) { requestPayload["max_tokens"] = Math.max(modelConfig.max_tokens, 4000); }
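The proposed workaround can be sketched as a small standalone helper. The function name `applyMaxTokensWorkaround` and the trimmed-down interfaces are illustrative assumptions, not part of the project; they only mirror the shape of `modelConfig` and `requestPayload` used in app/client/platforms/openai.ts.

```typescript
// Minimal sketch of the proposed workaround (hypothetical helper, not
// project code). Mirrors the vision-model branch at L131 of openai.ts.
interface ModelConfig {
  model: string;
  max_tokens: number;
}

interface RequestPayload {
  model: string;
  max_tokens?: number;
  [key: string]: unknown;
}

function applyMaxTokensWorkaround(
  modelConfig: ModelConfig,
  requestPayload: RequestPayload,
): RequestPayload {
  // For gpt-4o, always send an explicit max_tokens so a proxy such as
  // aiproxy.io cannot substitute an oversized default (e.g. 59812).
  if (modelConfig.model.includes("4o")) {
    requestPayload["max_tokens"] = Math.max(modelConfig.max_tokens, 4000);
  }
  return requestPayload;
}
```

With this in place, a gpt-4o request whose configured max_tokens is below 4000 is clamped up to 4000, a larger configured value is kept, and requests for other models are left untouched.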
Thanks for the feedback! max_tokens should be a parameter bound to the specific model. We are aware of this problem, and the next major version will let users define custom models, including a per-model max_tokens setting. However, in the current version we do not plan to go back to passing max_tokens in the chat endpoint as we did early on. The reason is that for large models this parameter is optional, and passing it can limit and degrade the accuracy and completeness of the responses many models return.
Suggestion
Requests that omit max_tokens may cause this issue: aiproxy.io may substitute the model's maximum when the value is absent (see the aiproxy documentation). My temporary fix is to mimic the vision-model branch at L131 and add a max_tokens clause for 4o:
if (modelConfig.model.includes("4o")) { requestPayload["max_tokens"] = Math.max(modelConfig.max_tokens, 4000); }
I suggest adding this temporary workaround first; otherwise gpt-4o is completely unusable right now.
Bug Description
Sending a message with the gpt-4o model returns:
{ "error": { "message": "max_tokens is too large: 59812. This model supports at most 4096 completion tokens, whereas you provided 59812.", "type": null, "param": "max_tokens", "code": null } }
In fact, I did not set or use that many tokens.
One more note: I am using a third-party API, but I don't think that matters much; everything works fine with other models.
Steps to Reproduce
1. Open the executable (.exe)
2. In Settings, select the gpt-4o model
3. Send any message
4. An error is shown
Expected Behavior
Messages can be sent and received normally.
Screenshots
Settings are unchanged except as shown in the screenshot.
Deployment Method
Desktop OS
Windows10
Desktop Browser
Edge, but I'm using the executable file
Desktop Browser Version
124.0.2478.97
Smartphone Device
No response
Smartphone OS
No response
Smartphone Browser
No response
Smartphone Browser Version
No response
Additional Logs
No response