Library name and version
Azure.AI.OpenAI 2.1.0-beta.1
Describe the bug
Setting MaxOutputTokenCount causes the API call to fail with the o1 model:
System.ClientModel.ClientResultException: HTTP 400 (invalid_request_error: unsupported_parameter) Parameter: max_tokens
Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.
   at Azure.AI.OpenAI.ClientPipelineExtensions.ProcessMessageAsync(ClientPipeline pipeline, PipelineMessage message, RequestOptions options)
   at Azure.AI.OpenAI.Chat.AzureChatClient.CompleteChatAsync(BinaryContent content, RequestOptions options)
   at OpenAI.Chat.ChatClient.CompleteChatAsync(IEnumerable`1 messages, ChatCompletionOptions options, CancellationToken cancellationToken)
Expected behavior
MaxOutputTokenCount should map to the correct request parameter for the target model: max_completion_tokens for o1-series models, max_tokens otherwise.
Actual behavior
The request always sends max_tokens, which the o1 model rejects.
Reproduction Steps
Call an o1 model with MaxOutputTokenCount set to any value, as in the sketch below.
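A minimal repro sketch, assuming an Azure OpenAI resource with an o1 deployment named "o1"; the endpoint, key, and deployment name are placeholders:

```csharp
using System;
using System.ClientModel;
using Azure.AI.OpenAI;
using OpenAI.Chat;

// Placeholder endpoint and key; assumes an o1 deployment named "o1".
AzureOpenAIClient azureClient = new(
    new Uri("https://<resource-name>.openai.azure.com"),
    new ApiKeyCredential("<api-key>"));

ChatClient chatClient = azureClient.GetChatClient("o1");

ChatCompletionOptions options = new()
{
    // Setting this causes the request to carry `max_tokens`,
    // which the o1 model rejects with HTTP 400 (unsupported_parameter).
    MaxOutputTokenCount = 1000
};

var completion = await chatClient.CompleteChatAsync(
    new ChatMessage[] { new UserChatMessage("Hello") },
    options);
```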
Environment
No response