Closed: kirk-marple closed this issue 3 months ago.
The Vision API is already supported. See https://github.com/openai/openai-dotnet/blob/main/examples/Assistants/Example05_AssistantsWithVision.cs. Also, the image URL syntax is supported in Azure.AI.OpenAI v1.0.0-beta.17 and later. See the changelog entry at https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/openai/Azure.AI.OpenAI/CHANGELOG.md#features-added-3.
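For reference, a minimal sketch of the content-item syntax added in Azure.AI.OpenAI v1.0.0-beta.17, where `ChatRequestUserMessage` accepts a mix of text and image content items instead of a single string. The endpoint, key, and deployment name below are placeholders.

```csharp
using Azure;
using Azure.AI.OpenAI;

var client = new OpenAIClient(
    new Uri("https://<your-resource>.openai.azure.com/"), // placeholder endpoint
    new AzureKeyCredential("<your-key>"));                // placeholder key

var options = new ChatCompletionsOptions
{
    DeploymentName = "gpt-4-vision-preview", // placeholder deployment name
    Messages =
    {
        // A user message can now carry multiple content items,
        // including an image referenced by URL.
        new ChatRequestUserMessage(
            new ChatMessageTextContentItem("Describe this image."),
            new ChatMessageImageContentItem(new Uri("https://example.com/image.png"))),
    },
};

Response<ChatCompletions> response = await client.GetChatCompletionsAsync(options);
Console.WriteLine(response.Value.Choices[0].Message.Content);
```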
Hi @kirk-marple. Thank you for opening this issue and giving us the opportunity to assist. We believe that this has been addressed. If you feel that further discussion is needed, please add a comment with the text "/unresolve" to remove the "issue-addressed" label and continue the conversation.
This issue is from 8 months ago :) We are using the API for this now; we appreciate the team getting it supported.
Library name
Azure.AI.OpenAI
Please describe the feature.
Saw that 1.0.0-beta.9 just dropped, and was looking for support for GPT-4 Vision.
I searched issues, and don't see anything else tracking this.
Do you expect to support the new 'content' syntax for providing image URLs to the model?
It appears that ChatMessage only accepts a string for Content right now.
https://platform.openai.com/docs/guides/vision
For example:
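A sketch of the 'content' array syntax from the linked vision guide, where the message content is a list of typed parts rather than a plain string (model name and image URL are placeholders):

```json
{
  "model": "gpt-4-vision-preview",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "What's in this image?" },
        {
          "type": "image_url",
          "image_url": { "url": "https://example.com/image.png" }
        }
      ]
    }
  ]
}
```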