Closed jhakulin closed 5 months ago
@kristapratico thoughts?
@jhakulin and I discussed this a bit offline, but I'll add my thoughts here. The `openai` SDK has support for two different services - AOAI and OAI. Anywhere these two services diverge can bubble up as a difference in the library, although considerable effort was (and still is) made to minimize this as much as possible. Feedback like this is valuable, and I suggest you raise it at the service level as well.
One of the biggest differences between the services is that AOAI introduced the concept of model deployments, whereas OAI uses the model name. To minimize the delta between the code you need to write for OAI vs. AOAI, the Azure client accepts the deployment name as the `model`. This differs from the old library, where you could pass either `model` or `deployment_id`, but that same design would not work well in the redesigned v1.X. While conceptually it doesn't feel great to conflate model and deployment, one benefit is that it reduces the friction with cookbooks/example code: one can just swap client types to switch between OAI and AOAI (for the most part).
We try to document in all example code on MS Learn that the deployment should be passed as `model`, but it's very possible we can do better. As you point out, the OAI API reference is a great place to understand how to use the APIs, but it's not entirely accurate for AOAI. AOAI-specific docs can be found on MS Learn. Currently the OAI and AOAI docs are split, but perhaps one day we could have a more unified documentation story.
Smart! I bet OAI might consider a PR to their OAS that adds a note to the `model` param saying that for Azure, it's a deployment ID. WDYT?
Or actually, @RobertCraigie, I wonder if we can/should easily add this manually in the Python library?
Yeah, adding this manually in the Python SDK shouldn't be too tricky.
@rattrayalex @RobertCraigie I think that's a great idea. Clarity on the param would definitely help. :)
Great! To be clear, I'd recommend opening that PR on `openai-openapi` regardless of what we do in the SDK, since then it'd also show up in the reference docs. Let us know here if it doesn't get approved.
Got it, thanks! Opened the PR here: https://github.com/openai/openai-openapi/pull/186
@rattrayalex @kristapratico Thank you!
Confirm this is an issue with the Python library and not an underlying OpenAI API
Describe the bug
Following are examples of chat completions create and assistants create calls:

```python
response = client.chat.completions.create(
    model=model,
    messages=messages,
)

assistant = client.beta.assistants.create(
    name=name,
    instructions=instructions,
    tools=tools,
    model=model,
    file_ids=file_ids,
)
```
In both API methods, the `model` parameter is documented as: "ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API."
The problem is that if the client has been created with the AzureOpenAI() constructor, the required `model` parameter value is not actually a model ID but a model deployment name (from the Azure Portal).
This is problematic because calling `client.models.list()` returns model IDs that work fine with the OpenAI client, but not with the AzureOpenAI client (unless the deployment name is the same as the model ID). The problem can be avoided with AzureOpenAI if the model deployment name and the model ID are the same, but it is not always clear to the user that they should set it up that way.
To Reproduce
Use a model ID returned by `client.models.list()` with an AzureOpenAI client when calling the API.

Code snippets
No response
OS
Windows
Python version
Python v3.12.1
Library version
openai v1.7.0