Open aeras3637 opened 6 days ago
This error is related to the Azure SDK and can be found here:
In case it helps with the issue above: it seems the way to avoid this error is to not set the max tokens in the execution settings, after which the request should execute. A minimal sketch is shown below.
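For reference, something along these lines (a minimal sketch only, assuming an already-configured `Kernel` instance named `kernel` and the standard `OpenAIPromptExecutionSettings` class from Microsoft.SemanticKernel.Connectors.OpenAI):

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Leave MaxTokens unset so the connector never serializes the
// unsupported max_tokens parameter when calling an o1 deployment.
var settings = new OpenAIPromptExecutionSettings
{
    // MaxTokens = 1000,   // omit this line when targeting o1
};

// "kernel" is assumed to be an already-configured Kernel instance.
var result = await kernel.InvokePromptAsync(
    "Say hello.", new KernelArguments(settings));
```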
Another approach you might want to use is to intercept the HTTP request with a custom HttpHandler and modify the content body of your request to send max_completion_tokens instead of max_tokens when targeting the o1 model.
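Something along these lines should work as that handler. This is a minimal sketch only: it assumes the request body is the plain JSON chat-completions payload, and the class name and JSON rewriting are illustrative rather than taken from any particular repository:

```csharp
using System.Net.Http;
using System.Text;
using System.Text.Json.Nodes;
using System.Threading;
using System.Threading.Tasks;

public class AzureOpenAIO1HttpClientHandler : HttpClientHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Content is not null)
        {
            // Read the outgoing chat-completions payload.
            var body = await request.Content.ReadAsStringAsync(cancellationToken);

            if (JsonNode.Parse(body) is JsonObject json && json.ContainsKey("max_tokens"))
            {
                // Move the value from max_tokens to max_completion_tokens.
                var maxTokens = json["max_tokens"];
                json.Remove("max_tokens");
                json["max_completion_tokens"] = maxTokens;

                // Replace the request content with the rewritten JSON.
                request.Content = new StringContent(
                    json.ToJsonString(), Encoding.UTF8, "application/json");
            }
        }

        return await base.SendAsync(request, cancellationToken);
    }
}
```

The idea is simply to rewrite the body before the request leaves the client, so nothing in the Semantic Kernel call sites has to change.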
Hi @aeras3637, here is a sample fix that can be done with a custom HttpHandler: https://github.com/mathieumack/MDev.Dotnet.SemanticKernel/blob/feature%2F4-net9/src%2FMDev.Dotnet.SemanticKernel.Connectors.AzureAIStudio.Gpt4o1%2FHttpClientHandlers%2FAzureOpenAIHttpClientHandler.cs Note: it's not just system messages but also temperature and tools that fail.
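For completeness, a rough sketch of how such a handler can be plugged in when building the kernel. This assumes the `AzureOpenAIO1HttpClientHandler` sketched above, placeholder deployment/endpoint/key values, and that your Semantic Kernel version's `AddAzureOpenAIChatCompletion` accepts an optional `HttpClient`:

```csharp
using System.Net.Http;
using Microsoft.SemanticKernel;

// Route all connector traffic through the custom handler.
var httpClient = new HttpClient(new AzureOpenAIO1HttpClientHandler());

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "o1-preview",                           // placeholder
        endpoint: "https://<your-resource>.openai.azure.com/",  // placeholder
        apiKey: "<your-api-key>",                               // placeholder
        httpClient: httpClient)
    .Build();
```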
Describe the bug
The latest SemanticKernel library does not support the Azure o1 series models. Reason for error: Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.
To Reproduce
Platform
Additional context
Microsoft.SemanticKernel.HttpOperationException HResult=0x80131500 Message=HTTP 400 (invalid_request_error: unsupported_parameter) Parameter: max_tokens
Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead. Source=Microsoft.SemanticKernel.Connectors.OpenAI
Stack trace:
   at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.d73`1.MoveNext()
   at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.d16.MoveNext()
   at Microsoft.SemanticKernel.KernelFunctionFromPrompt.d25.MoveNext()
   at Microsoft.SemanticKernel.KernelFunctionFromPrompt.d6.MoveNext()
   at System.Threading.Tasks.ValueTask`1.get_Result()
   at Microsoft.SemanticKernel.KernelFunction.<>c__DisplayClass27_0.<b0>d.MoveNext()
   at Microsoft.SemanticKernel.Kernel.d34.MoveNext()
   at Microsoft.SemanticKernel.Kernel.d33.MoveNext()
   at Microsoft.SemanticKernel.KernelFunction.d27.MoveNext()
   at AOAI.Program.d__0.MoveNext() in D:\sandbox\AOAI\Program.cs: line 35
This exception was originally thrown at this call stack: [External Code]
Inner Exception 1: ClientResultException: HTTP 400 (invalid_request_error: unsupported_parameter) Parameter: max_tokens
Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.