Closed: mahimairaja closed this issue 1 year ago
Yes - take a look at https://sdk.vercel.ai/docs/api-reference/ai-stream.
Thanks :)
I have created a gist example:
Running AI models with FastAPI and Vercel AI SDK
2023-10-20.22-57-29.mp4
Very nice, well done. I am curious to see the same implementation using route.ts instead.
What do you mean by route.ts? Vercel's ai package already has server-side helpers that make it easier to stream data from any Node.js app.
You can have a look at
And also at this other example that I made in TypeScript, which uses Fastify and Vite-React.
From my point of view, Vercel's ai SDK can be used with any server, since it returns an SSE response of the generated text.
Here is an example of how to do that with route.ts and useChat:
// app/api/copilot/route.ts
import {
  AIStream,
  StreamingTextResponse,
  type AIStreamCallbacksAndOptions,
  type AIStreamParser,
} from 'ai';

export const runtime = 'edge';

// Strips the surrounding JSON quotes and unescapes newlines
// from each SSE data chunk emitted by the backend.
function parseMyStream(): AIStreamParser {
  return (data) => {
    return data.replace(/^"|"$/g, '').replace(/\\n/g, '\n');
  };
}

// Not exported: a route.ts file may only export route handlers and config.
function MyStream(res: Response, cb?: AIStreamCallbacksAndOptions): ReadableStream {
  return AIStream(res, parseMyStream(), cb);
}

export async function POST(req: Request) {
  const { messages } = await req.json();
  const currentMessageContent = messages[messages.length - 1].content;

  // Proxy the latest message to the local FastAPI server.
  const apiUrl = 'http://localhost:8000/rag-chroma-private/stream';
  const fetchResponse = await fetch(apiUrl, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Include other headers as required by your external API
    },
    body: JSON.stringify({
      input: currentMessageContent,
      config: {},
    }),
  });

  const myStream = MyStream(fetchResponse, {
    onStart: async () => {
      console.log('Stream started');
    },
    onCompletion: async (completion) => {
      console.log('Completion completed', completion);
    },
    onFinal: async (completion) => {
      console.log('Stream completed', completion);
    },
  });

  return new StreamingTextResponse(myStream);
}
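And the client side: a component that submits the prompt and renders the streamed messages with useChat.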
// Copilot.tsx
'use client';

import { useChat } from 'ai/react';
// Assuming Material UI components and a project-specific Iconify wrapper.
import { IconButton, TextField, Tooltip } from '@mui/material';
import { LoadingButton } from '@mui/lab';
import Iconify from './Iconify';

export default function Copilot() {
  const { messages, input, handleInputChange, handleSubmit, error, isLoading } = useChat({
    api: '/api/copilot',
  });

  return (
    <form
      onSubmit={handleSubmit}
      style={{
        padding: '2rem',
        height: '100%',
        backgroundColor: '#f5fcff',
      }}
    >
      <h2 className="">Ask me anything!</h2>
      <div>
        <TextField
          variant="outlined"
          disabled={isLoading}
          value={input}
          onChange={handleInputChange}
        />
        <LoadingButton loading={isLoading} type="submit">
          Submit
        </LoadingButton>
      </div>
      {messages
        .filter((f) => f.role === 'assistant')
        .reverse()
        .map((m) => {
          // Strip run metadata and unescape quotes before trying to parse JSON.
          const content = m.content
            .replace(/{"run_id.*}/g, '')
            .replace(/\\"/g, '"');
          let json: any = content;
          try {
            json = JSON.parse(content);
          } catch (error) {
            console.log(error);
          }
          return (
            <div key={m.id}>
              <div>{json.answer ? <>{json.answer}</> : <>{content}</>}</div>
              <div>
                <Tooltip title={json.quote ?? ''}>
                  <IconButton>
                    <Iconify icon="mdi:information-outline" />
                  </IconButton>
                </Tooltip>
              </div>
            </div>
          );
        })}
    </form>
  );
}
Thank you, Yassine. You highlighted well the importance of understanding the parser, which takes the response from your FastAPI request and converts it into an AIStream.
I have my fine-tuned LLM on my local machine and have hosted it with FastAPI on localhost. Is there any way to test the LLM with the Vercel AI SDK?
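The route.ts handler above already proxies a local FastAPI server, so the same setup should work for a locally hosted fine-tuned model. For illustration, here is a minimal sketch of what the FastAPI side could look like; the request model, the token generator, and the route path are assumptions, not code from this thread. AIStream parses Server-Sent Events, so each chunk is emitted as a data: line.

# main.py: a minimal sketch of the FastAPI side (all names and the route
# path here are illustrative assumptions, not code from this thread).
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    input: str
    config: dict = {}

def generate_tokens(prompt: str):
    # Replace with your fine-tuned model's streaming generation.
    for token in ["This", " is", " a", " streamed", " answer."]:
        # JSON-encode each chunk so it arrives as a quoted string,
        # which the quote-stripping parser in route.ts then cleans up.
        yield f"data: {json.dumps(token)}\n\n"

@app.post("/rag-chroma-private/stream")
async def stream(req: ChatRequest):
    # Server-Sent Events, which AIStream on the Next.js side can consume.
    return StreamingResponse(generate_tokens(req.input), media_type="text/event-stream")

With something like that running on localhost:8000, the route.ts and useChat code above should stream from the local model unchanged.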