Closed: marcusschiesser closed this issue 4 months ago
Here's an example using GPT-4 Vision that demonstrates this issue: https://github.com/marcusschiesser/ai-chatbot
That would be great if it could be implemented.
+1, I recommend this be included in the template. LlamaIndex's template already has multimodal support integrated, but the Vercel AI SDK has more useful features from an AI web dev perspective.
You can refer to https://github.com/copilot-is/copilot.is/commit/ca8ff208976bd99af0e27cea8ce12105a1b24b37 for vision and image support.
+1, similar ask for useAssistant with image support. At a minimum, it would be great to send the message as a `data` type if it's an `image_file` type, from here: https://github.com/vercel/ai/blob/main/packages/core/streams/assistant-response.ts#L90-L94
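Until something like that lands in the SDK, here is a minimal sketch of the mapping being asked for: converting `image_file` content parts into plain data objects a client can render. The `ContentPart` shape below mirrors the OpenAI Assistants message content, but the `kind` field and the `toDataParts` helper are hypothetical names, not SDK API.

```typescript
// Sketch only: the ContentPart union mirrors the Assistants message content
// shape; toDataParts and the "kind" field are made-up names for illustration.
type ContentPart =
  | { type: "text"; text: { value: string } }
  | { type: "image_file"; image_file: { file_id: string } };

function toDataParts(parts: ContentPart[]) {
  return parts.map((p) =>
    p.type === "image_file"
      ? // Forward image parts as a data object the UI can turn into an <img>.
        { kind: "image", fileId: p.image_file.file_id }
      : // Text parts pass through as plain strings.
        { kind: "text", text: p.text.value }
  );
}
```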
+1, I'm working on an AI chatbot right now and needed this feature. I had to settle for using gpt-4-turbo to transcribe the image and then pass that transcription as input to the chatbot, which is pretty far from ideal.
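For reference, that workaround roughly amounts to building a vision request whose text answer is then fed back into the chat as plain input. A sketch, assuming the OpenAI chat completions request shape; the `buildTranscriptionRequest` helper and the prompt wording are my own, not part of any SDK:

```typescript
// Sketch of the transcription workaround: ask a vision-capable model to
// describe the image, then use the returned text as ordinary chat input.
// buildTranscriptionRequest is a hypothetical helper.
function buildTranscriptionRequest(imageUrl: string) {
  return {
    model: "gpt-4-turbo",
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Transcribe or describe this image in detail." },
          { type: "image_url", image_url: { url: imageUrl } },
        ],
      },
    ],
  };
}
```

The resulting object can be passed to `openai.chat.completions.create(...)`; the downside, as noted above, is that the original image never reaches the chat history.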
Hello everyone!
I wanted to share what I did here to support displaying images in the chat! You can look at my code here: I use annotations for `file_path` and `image_file` and then display them in the chat!
https://github.com/marcolivierbouch/OpenAssistantGPT/blob/main/lib/assistant-response.ts
The AI SDK now supports attachments on messages: https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot#attachments-experimental
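With that API, each message can carry an `experimental_attachments` array, so images from previously sent messages can be re-rendered from history (the original ask in this issue). A small sketch of pulling image URLs back out of a stored message; the types are paraphrased from the docs and `imageUrls` is a hypothetical helper:

```typescript
// Paraphrased attachment/message shapes from the AI SDK docs; imageUrls is
// a made-up helper that extracts renderable image URLs from one message.
type Attachment = { name?: string; contentType?: string; url: string };
type ChatMessage = {
  id: string;
  role: string;
  content: string;
  experimental_attachments?: Attachment[];
};

function imageUrls(message: ChatMessage): string[] {
  return (message.experimental_attachments ?? [])
    // Keep only attachments whose MIME type marks them as images.
    .filter((a) => a.contentType?.startsWith("image/"))
    .map((a) => a.url);
}
```

In a React component you would map over `messages` from `useChat` and render an `<img>` for each URL this returns.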
Feature Description
#712 added support for GPT-4 Vision, but the `messages` returned by `useChat` don't seem to include the images of an already sent message, see https://github.com/vercel/ai/blob/29f67bdf80a1fd8ea3a8f1c837e15d82f64b8be6/packages/core/shared/types.ts#L48-L65
Use Case
Display images from previous messages
Additional context
No response