rpostulart opened 12 hours ago
You're absolutely right, thanks for the feedback.
This work-in-progress PR should cover what you're looking for. We expect to get it onto the docs site shortly.
Specifically, this section addresses how to include images.
Customizing the message content

sendMessage() accepts an object with a content property that provides a flexible way to send different types of content to the AI assistant.

Image Content
Use image to send an image to the AI assistant. Supported image formats are png, gif, jpeg, and webp.

const { data: message, errors } = await chat.sendMessage({
  content: [
    {
      image: {
        format: 'png',
        source: {
          bytes: new Uint8Array([1, 2, 3]),
        },
      },
    },
  ],
});
Mixing text and image in a single message is supported.

const { data: message, errors } = await chat.sendMessage({
  content: [
    {
      text: 'describe the image in detail',
    },
    {
      image: {
        format: 'png',
        source: {
          bytes: new Uint8Array([1, 2, 3]),
        },
      },
    },
  ],
});
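In practice the image bytes usually come from a user-selected file rather than a hard-coded array. Here's a minimal sketch of that conversion; the fileToImageContent helper and selectedFile variable are just for illustration, not part of the API:

// Hypothetical helper: turn a browser File into the image content shape shown above.
async function fileToImageContent(file: File) {
  const bytes = new Uint8Array(await file.arrayBuffer());
  return {
    image: {
      format: 'png' as const, // assumes a PNG upload; match the actual file type
      source: { bytes },
    },
  };
}

const { data: message, errors } = await chat.sendMessage({
  content: [
    { text: 'describe the image in detail' },
    await fileToImageContent(selectedFile), // selectedFile: a File from an <input type="file">
  ],
});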
Conversation routes don't support sending documents. If that's something you'd like to see, please open a feature request in the https://github.com/aws-amplify/amplify-category-api repository. Thanks!
If you are using React, you can also use the hooks by themselves and build your own UI. We can probably make that clearer in the docs.
import { generateClient } from "aws-amplify/api";
import type { Schema } from "../amplify/data/resource";
import { createAIHooks } from "@aws-amplify/ui-react-ai";
const client = generateClient<Schema>({ authMode: "userPool" });
const { useAIConversation, useAIGeneration } = createAIHooks(client);
export default function App() {
const [
{
data: { messages },
isLoading,
},
handleSendMessage,
] = useAIConversation('chat');
// render your own UI here with messages, isLoading, and handleSendMessage
}
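To tie this back to the image examples above, here is a minimal sketch of sending mixed content from your own UI with the useAIConversation hook created above. It assumes handleSendMessage accepts the same { content: [...] } shape as chat.sendMessage(); please verify against the current @aws-amplify/ui-react-ai types.

export function CustomChat() {
  const [
    {
      data: { messages },
      isLoading,
    },
    handleSendMessage,
  ] = useAIConversation('chat');

  // Send a text prompt plus already-loaded image bytes (e.g. from a file input).
  const sendWithImage = (prompt: string, bytes: Uint8Array) => {
    handleSendMessage({
      content: [
        { text: prompt },
        { image: { format: 'png', source: { bytes } } }, // assumes a PNG
      ],
    });
  };

  return (
    <div>
      {messages.map((message, index) => (
        // index as key is fine for a sketch; use a stable message id in real code
        <pre key={index}>{JSON.stringify(message.content, null, 2)}</pre>
      ))}
      {isLoading && <p>Waiting for the assistant…</p>}
      <button
        onClick={() =>
          sendWithImage('describe the image in detail', new Uint8Array([1, 2, 3]))
        }
      >
        Send
      </button>
    </div>
  );
}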
Thanks, this is helpful.
I see that PDF is indeed not supported yet with Bedrock, but it is coming soon:
Supported platforms and models
PDF support is currently available on both Claude 3.5 Sonnet models (claude-3-5-sonnet-20241022, claude-3-5-sonnet-20240620) via direct API access. This functionality will be supported on Amazon Bedrock and Google Vertex AI soon.
I will go ahead and open a feature request.
Describe the content issue: I am missing clear examples / pages on how to use the AI kit in my own UI instead of only using the prebuilt components.
I have solved it partially, but I run into errors when attaching files such as images or PDFs.
URL page where content issue is: https://docs.amplify.aws/nextjs/ai/concepts/streaming/
@atierian are you able to share the docs here already?