Closed · shepwalker closed this 2 months ago
@shepwalker any ideas on the payload - is it still gpt-4o under the hood?
yeah, largely the same under the hood:

https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages
https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#request-body-2

Biggest difference on my initial scan is that OpenAI expects a `model` in the payload, whereas Azure derives it from the request path (a model deployment maps to a specific model under the hood).
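To illustrate the difference, here's a minimal sketch (function names and values are illustrative, not from either SDK): OpenAI names the model in the request body, while Azure's body omits it because the deployment in the URL path already pins the model.

```python
def openai_chat_payload(model: str, messages: list) -> dict:
    # OpenAI: the model is named in the request body.
    return {"model": model, "messages": messages}

def azure_chat_payload(messages: list) -> dict:
    # Azure: the model is implied by the deployment segment of the URL,
    # so the body carries only messages (plus any sampling params).
    return {"messages": messages}

msgs = [{"role": "user", "content": "hi"}]
print("model" in openai_chat_payload("gpt-4o", msgs))  # True
print("model" in azure_chat_payload(msgs))             # False
```

The message list itself has the same shape in both cases, which is why the rest of the payload can be shared.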
nice, yeah exactly. Looks like the same structure for the message spec and function calling, though (which makes sense; saw similar things with Databricks/Bedrock).
Sadly, it's not as simple as overriding OPENAI_HOST (or at least I haven't figured out how to make it that simple). The API is fairly different; URLs look like:

{instance specific url}/openai/deployments/{specific model deployment}/chat/completions?api-version={required version id}

I have a working version in my fork; it still needs tests and could probably be cleaner.
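The URL template above can be sketched as a small helper (the instance URL, deployment name, and api-version below are made-up examples, not real endpoints):

```python
def azure_chat_url(instance_url: str, deployment: str, api_version: str) -> str:
    # Azure routes by model deployment in the path and requires the
    # api-version as a query parameter on every request.
    return (f"{instance_url}/openai/deployments/{deployment}"
            f"/chat/completions?api-version={api_version}")

print(azure_chat_url("https://myres.openai.azure.com", "my-gpt4o-deploy", "2024-02-01"))
```

This is why a plain host override isn't enough: the deployment and api-version have to be threaded into the path and query string, not just the hostname.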