gabrielfu opened 5 months ago
It appears that there's an issue with the construction of the URL when the azure_deployment parameter is specified. The URL should not include /deployments/my-deployment/models but rather simply /models.
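To illustrate with concrete values (the endpoint and deployment names below are placeholders, not from the original report), the two URLs differ only in the deployment segment:

```python
# Placeholder values for illustration only.
azure_endpoint = "https://example.openai.azure.com"
azure_deployment = "my-deployment"

# URL the client currently builds for models.list() when azure_deployment is set:
actual = f"{azure_endpoint}/openai/deployments/{azure_deployment}/models"

# URL the models endpoint actually expects (no deployment segment):
expected = f"{azure_endpoint}/openai/models"
```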
To fix this problem, you need to adjust the logic responsible for constructing the URL. Here's how you can modify it:
class AzureOpenAI:
    def __init__(self, azure_endpoint, azure_deployment, api_key, api_version):
        self.azure_endpoint = azure_endpoint
        self.azure_deployment = azure_deployment
        self.api_key = api_key
        self.api_version = api_version

    def list_models(self):
        # Listing models is not a deployment-scoped operation, so the
        # /deployments/{deployment} segment must never be added here,
        # even when azure_deployment is set on the client.
        base_url = f"{self.azure_endpoint}/openai"
        url = f"{base_url}/models?api-version={self.api_version}"
        # Make the request using the constructed URL
        # (code for making the request goes here)
With this adjustment, the /models endpoint is always used for listing models, regardless of whether azure_deployment is specified; the /deployments/my-deployment/... prefix is reserved for deployment-based calls. This should resolve the NotFoundError you encountered.
@kingofsoulss are you a maintainer of this package? I'm asking for this package bug to be fixed, not asking how to do the request myself. Thanks for your input though.
Thank you for the report @gabrielfu, we'll look into this soon!
With the current implementation this is somewhat intentional. The idea is that you should only be setting azure_deployment at the client level if you're making calls to the deployment endpoints, but I can see how this would be confusing.
For now you can work around this by not setting the deployment at the client level.
@RobertCraigie I'd like to look at this and see if we can be smarter in the Azure client by not appending the /deployments/{deployment_name} to the URL if a non-deployments endpoint is being called. We made a fix for this in the new node Azure client by adding the deployment in the build_request method instead of appending to the base_url, but I think we'll need to be a bit more careful in Python so as not to introduce a breaking change (i.e. to not change the value of client.base_url).
@RobertCraigie Thanks for the reply. Does that mean I need to instantiate two clients, one with azure_deployment and one without, if I need both endpoints? This sounds rather unintuitive and hard to manage. I think what @kristapratico mentioned provides a better developer experience.
@gabrielfu No need for 2 clients, just don't set azure_deployment at the client level if you plan to call non-deployment and deployment-based endpoints. You can pass your deployment per method using model, like:
client = openai.AzureOpenAI(
    azure_endpoint=...,
    api_key=...,
    api_version=...,
)
client.models.list()
client.chat.completions.create(
    model="<your-chat-deployment-name>",
    messages=[...],
)
Thanks @kristapratico, works for me now. Please feel free to close the issue if no change is planned
@kristapratico do you still plan to take a crack at this as outlined in https://github.com/openai/openai-python/issues/1397#issuecomment-2108683568 ?
@rattrayalex yes, I still intend to take a look at this (hopefully this week).
Confirm this is an issue with the Python library and not an underlying OpenAI API
Describe the bug
When using AzureOpenAI with azure_deployment specified and calling client.models.list(), the request fails with a NotFoundError. Upon investigation, this is because the request URL included /deployments/my-deployment/models, while the correct path is simply /models.
To Reproduce
Code snippets
No response
OS
macOS
Python version
Python v3.10.11
Library version
openai v1.26.0