woutkonings opened this issue 2 days ago
@woutkonings when passing azure_deployment into the AsyncAzureOpenAI constructor, the client will use that deployment for all requests; any value provided for model will be ignored.

For chat completions, you can try adjusting temperature or providing a seed to get more deterministic results, but determinism is not guaranteed. These docs explain both parameters: https://learn.microsoft.com/azure/ai-services/openai/reference-preview#createchatcompletionrequest
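For illustration, here is a rough sketch of that setup; the endpoint, key, API version, and deployment name below are placeholders, not values taken from this issue. Every request is routed to the deployment fixed in the constructor, while temperature and seed only nudge the output toward repeatability.

```python
import asyncio
from openai import AsyncAzureOpenAI

# All values below are placeholders.
client = AsyncAzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    azure_deployment="my-gpt-4o-deployment",  # fixed for every request made by this client
    api_key="...",
    api_version="2024-06-01",
)

async def main() -> None:
    completion = await client.chat.completions.create(
        # The SDK still requires `model`, but with azure_deployment set above
        # the request goes to that deployment regardless of the value passed here.
        model="gpt-4o",
        messages=[{"role": "user", "content": "Say hello."}],
        temperature=0,  # lower temperature reduces run-to-run variation
        seed=42,        # best-effort determinism, not guaranteed
    )
    print(completion.choices[0].message.content)

asyncio.run(main())
```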
Confirm this is an issue with the Python library and not an underlying OpenAI API
Describe the bug
When running against the Azure OpenAI service with an azure_deployment of gpt-4o (2024-05-13), I get very different completion results when I pass model='gpt-4' versus model='gpt-4o'. I would expect the model parameter not to matter in this case, as there is only one model behind the given deployment.
To Reproduce
Run a streaming chat completion against the deployment and observe the stream twice, once with model='gpt-4' and once with model='gpt-4o':
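The original code snippet was not captured in this report, so the following is only a minimal sketch of such a streaming call, assuming the async Azure client and placeholder credentials:

```python
import asyncio
from openai import AsyncAzureOpenAI

async def run(client: AsyncAzureOpenAI, model: str) -> None:
    stream = await client.chat.completions.create(
        model=model,  # 'gpt-4' vs 'gpt-4o' -- expected not to matter for a fixed deployment
        messages=[{"role": "user", "content": "Explain what a transformer model is."}],
        stream=True,
    )
    async for chunk in stream:
        # Skip empty chunks (e.g. content-filter-only chunks) and print streamed text.
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print("\n---")

async def main() -> None:
    # Placeholders: substitute your own endpoint, key, API version, and deployment name.
    client = AsyncAzureOpenAI(
        azure_endpoint="https://my-resource.openai.azure.com",
        azure_deployment="gpt-4o",  # deployment running gpt-4o 2024-05-13
        api_key="...",
        api_version="2024-06-01",
    )
    await run(client, "gpt-4")
    await run(client, "gpt-4o")

asyncio.run(main())
```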
The first run will give a markdown-heavy response, structured with bold titles etc., while the second will give solely plain paragraphs.
Code snippets
No response
OS
python:3.12-slim
Python version
python v.3.12
Library version
openai v.1.51.2