Hi,
We need to update our SDK and examples to explain how to make them work with Next.js. In the meantime, here is an example that works well with the Next.js App Router and Lunary:
import OpenAI from "openai";
import lunary from "lunary";
import { monitorOpenAI } from "lunary/openai";

lunary.init({ appId: "..." });

const openai = new OpenAI({
  apiKey: "sk-...",
});
monitorOpenAI(openai);

export const runtime = "edge";

export async function GET() {
  const result = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    temperature: 0.9,
    stream: true,
    messages: [
      { role: "system", content: "You are a helpful assistant" },
      { role: "user", content: "Print a random string" },
    ],
  });

  const stream = iteratorToStream(result);
  return new Response(stream);
}
// Converts an async iterator (any object with a next() method yielding
// completion chunks) into a web ReadableStream of newline-delimited JSON.
function iteratorToStream(iterator: any) {
  const encoder = new TextEncoder();

  return new ReadableStream({
    async pull(controller) {
      try {
        const { value, done } = await iterator.next();
        if (done) return controller.close();

        const bytes = encoder.encode(JSON.stringify(value) + "\n");
        controller.enqueue(bytes);
      } catch (error) {
        controller.error(error);
      }
    },
  });
}
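In case it helps, here is roughly how a client could consume that newline-delimited JSON response. This is only a sketch: the /api/chat path, the readChatStream name, and the console.log handling are assumptions, not part of the example above.

// Hypothetical client-side consumer of the NDJSON stream returned by the route above.
async function readChatStream() {
  const res = await fetch("/api/chat"); // route path is an assumption
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();

  let buffer = "";
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next read

    for (const line of lines) {
      if (!line) continue;
      const chunk = JSON.parse(line); // one JSON-encoded ChatCompletionChunk per line
      console.log(chunk.choices?.[0]?.delta?.content ?? "");
    }
  }
}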
This is still an issue:
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
})

const res = await openai.chat.completions.create({
  model: 'gpt-4-0125-preview',
  user: userId,
  messages,
  temperature: 0.7,
  stream: true
})

res is of type Stream<OpenAI.Chat.Completions.ChatCompletionChunk>
Whereas when wrapped with monitorOpenAI:

const openai = monitorOpenAI(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY
  })
)

const res = await openai.chat.completions.create({
  model: 'gpt-4-0125-preview',
  user: userId,
  messages,
  temperature: 0.7,
  stream: true
})

res is of type OpenAI.Chat.Completions.ChatCompletion
As a workaround:
It seems to work in spite of the type error. Adding @ts-ignore allows the build stage to pass and the code then operates as expected, e.g.

// @ts-ignore
const stream = OpenAIStream(res, {
  async onCompletion(completion) {
    // ...
  },
})

It looks like there are some pretty gnarly type gymnastics being done inside the openai npm module.
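For context, a fuller version of that workaround inside an App Router route handler might look something like the sketch below. The POST handler shape, the onCompletion body, and the use of StreamingTextResponse are assumptions based on a typical setup with the ai package, not code from this thread.

// Sketch of the @ts-ignore workaround in a route handler (assumptions noted above).
import OpenAI from "openai";
import { OpenAIStream, StreamingTextResponse } from "ai";
import { monitorOpenAI } from "lunary/openai";

const openai = monitorOpenAI(
  new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
);

export async function POST(req: Request) {
  const { messages } = await req.json();

  const res = await openai.chat.completions.create({
    model: "gpt-4-0125-preview",
    messages,
    temperature: 0.7,
    stream: true,
  });

  // @ts-ignore -- typed as ChatCompletion when wrapped, but is a stream at runtime
  const stream = OpenAIStream(res, {
    async onCompletion(completion) {
      // persist or log the final completion here (placeholder)
    },
  });

  return new StreamingTextResponse(stream);
}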
Hi @BigStar-2024, yes you can
Regarding the types, everything seems to be working fine for me. I did update the code to use the tee function on the openai Stream object (from openai/streaming) where possible, so that the value returned by create is the same regardless of whether you wrap OpenAI with monitorOpenAI or not.
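To illustrate the idea, here is a rough sketch of that approach (hypothetical helper name, not the actual lunary code): tee() splits an openai Stream into two independent copies, so one can be handed back to the caller untouched while the other is consumed for monitoring.

// Sketch only: createChatCompletionWithMonitoring is a hypothetical helper.
import OpenAI from "openai";

async function createChatCompletionWithMonitoring(
  openai: OpenAI,
  params: OpenAI.Chat.Completions.ChatCompletionCreateParamsStreaming
) {
  const stream = await openai.chat.completions.create(params);

  // tee() returns two independent Stream<ChatCompletionChunk> copies
  const [forCaller, forMonitoring] = stream.tee();

  // consume one copy in the background for monitoring/reporting
  (async () => {
    for await (const chunk of forMonitoring) {
      // hypothetical: forward chunk data to the monitoring backend
    }
  })();

  // the caller gets the same Stream type an unwrapped client would return
  return forCaller;
}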
I'll write some tests for it (something like the sketch below), then submit a PR covering:
With Lunary and streaming
With Lunary and without streaming
Without Lunary but with streaming
Without Lunary and without streaming
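Roughly like this, assuming a vitest-style runner; the model, prompt, and env-based keys are placeholders:

// Sketch of the four-way test matrix listed above (assumptions noted).
import { describe, it, expect } from "vitest";
import OpenAI from "openai";
import { monitorOpenAI } from "lunary/openai";

const plain = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const monitored = monitorOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

function completionTests(label: string, client: OpenAI) {
  describe(label, () => {
    it("returns a completion without streaming", async () => {
      const res = await client.chat.completions.create({
        model: "gpt-3.5-turbo",
        messages: [{ role: "user", content: "Say hi" }],
      });
      expect(res.choices[0].message.content).toBeTruthy();
    });

    it("yields chunks with streaming", async () => {
      const stream = await client.chat.completions.create({
        model: "gpt-3.5-turbo",
        messages: [{ role: "user", content: "Say hi" }],
        stream: true,
      });
      let text = "";
      for await (const chunk of stream) {
        text += chunk.choices[0]?.delta?.content ?? "";
      }
      expect(text.length).toBeGreaterThan(0);
    });
  });
}

completionTests("without Lunary", plain);
// if the wrapped client's type differs (see the type issue above), a cast may be needed here
completionTests("with Lunary", monitored);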
Hi there, just reporting this in case it's an easy fix. It looks like Vercel's ai npm package wraps things up in a weird way that isn't compatible with Lunary. This can be reproduced by cloning the ai-chatbot template and wrapping the openai call with monitorOpenAI in ./app/api/chat/route.ts.
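For reference, the change that triggers it is just wrapping the client in that route file, roughly like this (sketched from memory of the template; the exact surrounding code varies by ai-chatbot version):

// ./app/api/chat/route.ts (sketch; only the client construction changes)
import OpenAI from "openai";
import { monitorOpenAI } from "lunary/openai";

// before: const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const openai = monitorOpenAI(
  new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
);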