rohanthacker opened this issue 1 week ago
Thanks for the issue! I think supporting the Azure AI Model Inference API would be great, and it makes sense to have it as a separate model client that also implements the ChatCompletionClient protocol.
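As a rough sketch of what "a separate client implementing the protocol" could look like: the snippet below uses a simplified stand-in for the protocol and a hypothetical `AzureAIInferenceChatCompletionClient` class. The class name, the protocol shape, and the endpoint path are assumptions for illustration, not autogen's actual interfaces (which are async and considerably richer).

```python
from typing import List, Protocol


class ChatCompletionClient(Protocol):
    """Simplified stand-in for autogen's ChatCompletionClient protocol.

    The real interface is async and has more methods; this only shows
    the structural relationship between protocol and client.
    """

    def create(self, messages: List[dict]) -> str: ...


class AzureAIInferenceChatCompletionClient:
    """Hypothetical client for the Azure AI Model Inference API.

    A real implementation would wrap the azure-ai-inference SDK; this stub
    only shows where the endpoint and key would plug in.
    """

    def __init__(self, endpoint: str, api_key: str):
        self.endpoint = endpoint.rstrip("/")
        self.api_key = api_key

    def create(self, messages: List[dict]) -> str:
        # A real client would POST the messages to
        # f"{self.endpoint}/chat/completions" and parse the response.
        return f"(would call {self.endpoint}/chat/completions)"


def use_client(client: ChatCompletionClient) -> str:
    # Any object satisfying the protocol is interchangeable here,
    # which is the point of adding the new client alongside existing ones.
    return client.create([{"role": "user", "content": "hello"}])
```

Because both clients would satisfy the same protocol, existing agent code should not need to change to use an Azure AI Studio deployment.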
We'd love it if you're interested in helping build this!
@jackgerrits I'd be happy to implement these changes. Could this task be assigned to me? I have already started work on it in my fork of this repository, and I'll raise a draft pull request in a day or so for us to discuss.
@rohanthacker - this is supported in dotnet now with https://github.com/microsoft/autogen/pull/3790
What feature would you like to be added?
I would like the ability to use a model that is deployed on Azure AI Studio and uses the Azure AI Model Inference API.
If needed, I would also like to assist in building the feature. However, I have a few questions and need some guidance on the best way to implement it.
Questions:
Can this be done with the existing AzureOpenAIChatCompletionClient? I have already tried this, however the client produces an invalid URL and the API responds with a 404 error, because the endpoint created by Azure AI Studio and the endpoint format expected by the client are not the same.
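The 404 is consistent with the two services using different URL shapes. The sketch below contrasts the two formats; the exact paths are assumptions based on each service's documented URL conventions, not an inspection of the client's internals.

```python
# Illustration of the endpoint mismatch that can produce a 404 when an
# Azure OpenAI-style client is pointed at an Azure AI Studio endpoint.
# Both URL formats below are assumptions based on documented conventions.


def azure_openai_url(resource: str, deployment: str, api_version: str) -> str:
    # Azure OpenAI routes requests per deployment under /openai/deployments/.
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )


def azure_ai_inference_url(endpoint: str) -> str:
    # Serverless endpoints from Azure AI Studio serve the Model Inference
    # API directly at the endpoint root, with no deployments segment.
    return f"{endpoint.rstrip('/')}/chat/completions"


print(azure_openai_url("my-resource", "my-deployment", "2024-06-01"))
print(azure_ai_inference_url("https://my-model.eastus2.models.ai.azure.com"))
```

If the deployments-style path is sent to a serverless endpoint, no route matches and the service returns 404, which would explain the behavior described above.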
Looking forward to discussing this further.
Why is this needed?
Azure AI Studio provides a large catalog of models along with various deployment options that make it easy for developers to access a wide variety of models. Given the nature of this project, integrating this diverse set of models out of the box will drive more adoption and let developers bring their own model without having to code a new client for each one.