Closed: rambalachandran closed this issue 3 weeks ago
This has been a work-in-progress. So, not yet.
The main blocker is that Azure handles models differently. You get a separate endpoint for each model. So, some code refactoring is needed (including the UI). I'll try to get this finished off in the next month or so.
If you urgently want to use another service, you can use one of the "OpenAI proxies" available on GitHub, which let you use any service (Anthropic, Azure, etc.); the proxy converts API requests between the OpenAI format and the Azure/Anthropic/etc. format. In this extension you would change the `apiBaseUrl` config setting to point to your personal proxy server. You would have to host that yourself, though, and may run into bugs I probably can't help with.
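For illustration, pointing the extension at a self-hosted proxy might look like this in VS Code's `settings.json`. The setting name `apiBaseUrl` is taken from the comment above; the exact key prefix depends on the extension, and the proxy URL is a placeholder:

```json
{
  "apiBaseUrl": "https://my-proxy.example.com/v1"
}
```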
Thank you @Christopher-Hayes. I will give the OpenAI proxies a try for now. Happy to test the Azure implementation once you think it is ready.
All that's really needed here is the ability to:

- send the key in an `api-key` header instead of `Authorization: Bearer <key>`
- append `?api-version={{api-version}}` to the request URL

The extension should at least then work for `/chat/completions` use cases... which I imagine is pretty much everything at this point? Even Copilot is apparently using GPT-4 for its code completions now.
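To make the difference concrete, here is a minimal sketch of the two auth shapes described above. The function names and request structure are illustrative, not the extension's actual code:

```typescript
// OpenAI: bearer token in the Authorization header.
function openAIHeaders(apiKey: string): Record<string, string> {
  return { Authorization: `Bearer ${apiKey}` };
}

// Azure OpenAI: key goes in an `api-key` header instead.
function azureHeaders(apiKey: string): Record<string, string> {
  return { "api-key": apiKey };
}

// Azure additionally expects the API version as a query parameter.
function withApiVersion(url: string, apiVersion: string): string {
  return `${url}?api-version=${encodeURIComponent(apiVersion)}`;
}
```

Swapping these two pieces in is the core of the refactor; everything else about the `/chat/completions` payload stays in the same OpenAI format.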
Azure OpenAI API support was added in v3.24.0. For Azure, you need to specify the "endpoint", the "deployment ID", and the "API version".
More info in the release discussion: #74
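As a sketch of how those three settings combine, Azure routes each model through its own deployment-specific URL. The function and parameter names here are illustrative assumptions, not the extension's actual implementation:

```typescript
// Build the Azure OpenAI chat-completions URL from the three settings
// the release notes mention: endpoint, deployment ID, and API version.
function azureChatCompletionsUrl(
  endpoint: string,     // e.g. "https://my-resource.openai.azure.com" (placeholder)
  deploymentId: string, // the per-model deployment name (placeholder)
  apiVersion: string    // e.g. "2023-05-15" (placeholder)
): string {
  return `${endpoint}/openai/deployments/${deploymentId}/chat/completions?api-version=${apiVersion}`;
}
```

This per-deployment URL is why Azure needed the refactor mentioned earlier: unlike OpenAI, there is no single shared endpoint for all models.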
Describe the feature
Given the current turmoil at OpenAI, can we change the `Api Base Url` in settings to an Azure endpoint, and will it work? I would like to get a confirmation before creating an Azure account.