Open nikhil-pandey opened 1 day ago
Thank you for raising an issue. We will look into the matter and get back to you as soon as possible. Please make sure you have given us as much context as possible.
Hey @nikhil-pandey! 👋 I'm here to help you with any bugs, questions, or contributions you have. Let's squash those bugs together while we wait for a human maintainer.
The error you're encountering is due to the use of the `max_tokens` parameter, which is not supported by the Azure OpenAI o1 models. Instead, you should use `max_completion_tokens`.

To resolve this issue, you need to modify the request construction in the codebase to replace `max_tokens` with `max_completion_tokens` for the Azure OpenAI o1 models. This change should be made in the part of the code where the request parameters are set for Azure OpenAI models, specifically in the `LobeAzureOpenAI` class or wherever the request payload is being constructed [1].

If you haven't explicitly set a max token limit, ensure that the default configuration or any conditional logic in the code does not automatically include `max_tokens` for these models. You might need to update the logic to conditionally set `max_completion_tokens` instead, based on the model being used.
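The conditional remapping described above can be sketched as follows. This is a minimal illustration, not LobeChat's actual implementation: the helper name `pruneO1Params`, the `ChatPayload` shape, and the `/^o1/` model check are assumptions for demonstration.

```typescript
// Minimal payload shape for illustration; the real request type differs.
interface ChatPayload {
  model: string;
  max_tokens?: number;
  max_completion_tokens?: number;
  [key: string]: unknown;
}

// Hypothetical helper: o1-series models reject `max_tokens`, so drop it
// and carry its value over to `max_completion_tokens` only if it was set.
function pruneO1Params(payload: ChatPayload): ChatPayload {
  if (!/^o1/.test(payload.model)) return payload; // other models are untouched
  const { max_tokens, ...rest } = payload;
  return max_tokens === undefined
    ? (rest as ChatPayload)
    : { ...rest, max_completion_tokens: max_tokens };
}
```

Note that when no limit was configured, the helper simply omits both fields rather than sending a default, which matches the reporter's expectation that the field be removed from the request body entirely.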
📦 Environment
Vercel
📌 Version
Latest main / v1.21.4
💻 Operating System
macOS
🌐 Browser
Safari
🐛 Bug Description
```json
{
  "error": {
    "message": "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.",
    "type": "invalid_request_error",
    "param": "max_tokens",
    "code": "unsupported_parameter"
  },
  "endpoint": "https://***.openai.azure.com/",
  "provider": "azure"
}
```
I haven't set any max token limit, so the max token field should be removed from the request body.
📷 Recurrence Steps
Use Azure OpenAI o1 models
🚦 Expected Behavior
Should work
📝 Additional Information
-