Closed · goodls-cs closed this issue 5 months ago
Title: [Feature Request]: The output will be truncated when using Azure gpt-4 vision-preview
Currently we provide max_tokens by default; if the user actively deletes this option, max_tokens will indeed end up empty.
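A minimal sketch of the kind of guard that avoids sending an empty value (the helper name and the 1024 fallback are assumptions for illustration, not the project's actual code):

```typescript
// Hypothetical guard: only trust max_tokens when the user kept the option;
// otherwise fall back to a default so vision models are not truncated.
function resolveMaxTokens(userValue?: number, fallback = 1024): number {
  // `userValue` is undefined when the option was deleted from the config.
  return userValue && userValue > 0 ? userValue : fallback;
}
```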
Bot detected the issue body's language is not English, translate it automatically.
Thanks for the feedback, you can submit a PR directly~~
Thanks for the feedback, we will resolve this issue as soon as possible
Resolved by https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/pull/4462
I just solved the same problem; I hope this can help you guys.
Change the code of `isVisionModel` to make sure your model name is included.
Also change the condition `visionModel && modelConfig.model.includes("preview")` so that your model name passes it; a sketch of the first change follows.
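For reference, a hedged sketch of the kind of change meant above; the keyword list and the exact signature of `isVisionModel` in the project may differ between versions, and "my-azure-vision-model" is a placeholder for your own deployment name:

```typescript
// Sketch: keyword-based vision detection; extend the list with your model name.
const VISION_KEYWORDS = ["vision", "gpt-4-vision-preview", "my-azure-vision-model"];

export function isVisionModel(model: string): boolean {
  return VISION_KEYWORDS.some((keyword) => model.toLowerCase().includes(keyword));
}
```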
Problem Description
Output is truncated when using Azure gpt-4 vision-preview. According to the Azure GPT-4 Turbo with Vision documentation, the "max_tokens" value must be set; otherwise the returned output will be truncated.
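For illustration, a minimal request sketch with max_tokens set explicitly; the resource name, deployment name, and api-version are placeholders, not values from this issue:

```typescript
// Sketch of an Azure OpenAI chat completions call that sets max_tokens so
// the vision model's reply is not cut off. All names are placeholders.
const response = await fetch(
  "https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=2024-02-15-preview",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": process.env.AZURE_OPENAI_API_KEY ?? "",
    },
    body: JSON.stringify({
      messages: [{ role: "user", content: "Describe this image." }],
      max_tokens: 1024, // per the Azure GPT-4 Turbo with Vision docs
    }),
  },
);
```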
Solution Description
Check the model call and add max_tokens when the configured model is an Azure gpt-4 model. Modify app\client\platforms\openai.ts:

```typescript
// Define the RequestPayload type
interface RequestPayload {
  messages: {
    role: "system" | "user" | "assistant";
    content: string | MultimodalContent[];
  }[];
  stream?: boolean;
  model: string;
  temperature: number;
  presence_penalty: number;
  frequency_penalty: number;
  top_p: number;
  max_tokens?: number;
}
```

Modify the original code:

```typescript
const requestPayload: RequestPayload = {
  messages,
  stream: options.config.stream,
  model: modelConfig.model,
  temperature: modelConfig.temperature,
  presence_penalty: modelConfig.presence_penalty,
  frequency_penalty: modelConfig.frequency_penalty,
  top_p: modelConfig.top_p,
  // max_tokens: Math.max(modelConfig.max_tokens, 1024),
  // Please do not ask me why not send max_tokens, no reason, this param is just shit, I dont want to explain anymore.
};

// Add the max_tokens property when modelConfig.model is the Azure gpt-4 model
if (modelConfig.model === "azure-gpt-4") {
  requestPayload.max_tokens = Math.max(modelConfig.max_tokens, 1024);
}
```
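One design note on the snippet above: the equality check ties the fix to a single hard-coded deployment name. A broader guard (an assumption about your naming scheme, not code from the linked PR) could match any vision-preview deployment instead:

```typescript
// Hypothetical broader condition: apply max_tokens to any vision-preview model.
if (modelConfig.model.includes("vision-preview")) {
  requestPayload.max_tokens = Math.max(modelConfig.max_tokens, 1024);
}
```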
Result