Closed: hvri5h closed this issue 8 months ago
This feature would be ideal; I've tried many fixes on Vercel to no avail.
I have this issue but with a different architecture.
I have a FastAPI server with SSE hosted on Cloud Run. It streams to my edge function, which then streams to the client. This works fine locally, and even when it's hosted I can log the tokens being streamed into the edge function and see each token in the Vercel dashboard. However, on the client the stream starts with the first token and then stops... Super weird. This only happens when it's hosted.
I believe your issue is due to the Edge Runtime not being compatible with the Pinecone library. You may need to change the setup.
the Pinecone SDK is Node-only 😥 https://github.com/hwchase17/langchainjs/issues/1055
I have the same problem in prod
Hey @haritiruna! Have you managed to solve your problem? Also, could you please share index.ts's handleSubmit method?
@felipetodev - I think neither the Pinecone SDK nor the OpenAI client works on the Edge Runtime. You can create a replacement client pretty easily, though, using their REST APIs. I verified this works in production on Vercel. It might be their clients' use of Axios.
const searchEmbeddings = async (query: string, maxResponses = 2, minConfidence = 0.8) => {
  try {
    // Embed the query with the OpenAI embeddings endpoint.
    const embeddingResult = await openai.createEmbedding({
      model: 'text-embedding-ada-002',
      input: query,
    });
    const queryVector = embeddingResult.data[0].embedding;

    // Query Pinecone over its REST API instead of the Node-only SDK.
    const res = await fetch(
      `https://${pineconeIndexName}-${pineconeProjectID}.svc.${pineconeEnvironment}.pinecone.io/query`,
      {
        method: 'POST',
        headers: {
          'Api-Key': `${pineconeAPIKey}`,
          'Content-Type': 'application/json',
          Accept: 'application/json',
        },
        body: JSON.stringify({
          vector: queryVector,
          includeValues: false,
          includeMetadata: true,
          namespace: pineconeNamespace,
          topK: maxResponses,
        }),
      }
    );
    const data = await res.json();

    // Keep only sufficiently confident matches.
    return data.matches
      .filter((m) => m.score > minConfidence)
      .map((m) => ({
        text: m.metadata.text,
        score: m.score,
      }));
  } catch (err) {
    // log an error
    return [];
  }
};
const createChatCompletion = async (options: {
  model: string;
  messages: Array<{ content: string; role: ChatCompletionRequestMessageRoleEnum; name: string }>;
  max_tokens: number;
  temperature: number;
  stream: boolean;
}) => {
  // Call the chat completions REST endpoint directly so it runs on the Edge Runtime.
  return fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(options),
  });
};
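For completeness, here's a rough sketch of how the two helpers above could be wired together in an edge route. The handler shape, prompt construction, and model/parameter choices are my assumptions, not code from this thread:

export const config = { runtime: 'edge' };

export default async function handler(req: Request) {
  const { query } = await req.json();

  // Retrieve relevant context via the Pinecone REST API.
  const matches = await searchEmbeddings(query);
  const context = matches.map((m) => m.text).join('\n');

  // Ask for a streamed completion over the REST API.
  const completion = await createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: `Answer using this context:\n${context}`, name: 'system' },
      { role: 'user', content: query, name: 'user' },
    ],
    max_tokens: 500,
    temperature: 0,
    stream: true,
  });

  // Pass the SSE body straight through to the client.
  return new Response(completion.body, {
    headers: { 'Content-Type': 'text/event-stream' },
  });
}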
For anyone else coming in from Google: I'd been struggling with this problem for what seems like forever, and none of the answers from Google, Stack Overflow (or even this thread) worked.
I found success by adding the following headers to my response. If you want to understand why, look up the X-Content-Type-Options: nosniff header: without it, browsers and some proxies may buffer the start of a response to sniff its MIME type, which can hold back the first streamed tokens.
return new Response(stream, {
  headers: {
    'Content-Type': 'text/event-stream',
    'X-Content-Type-Options': 'nosniff',
  },
});
> For anyone else coming in from Google: I'd been struggling with this problem for what seems like forever, and none of the answers from Google, Stack Overflow (or even this thread) worked.
> I found success by adding the following headers to my response. If you want to understand why, look up the X-Content-Type-Options: nosniff header.
> return new Response(stream, { headers: { 'Content-Type': 'text/event-stream', 'X-Content-Type-Options': 'nosniff' } });
This didn't really work. I'm doing this right now, but no luck.
In my case, the edge-runtime function works locally, but in production it doesn't iterate over the res.
// Enable edge runtime
export const runtime = "edge";

export async function POST(req: Request) {
  const encoder = new TextEncoder();
  const decoder = new TextDecoder();

  const { messages, currentTerminal, user_id } = await req.json();
  console.log("Current Terminal: ", currentTerminal);

  const res = await fetch(URL_GOES_HERE_FOR_FASTAPI_SERVER, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Content-Type-Options": "nosniff",
    },
    body: JSON.stringify({
      // Body
    }),
  });
  console.log("Fetching data");

  // Decode and re-encode each chunk (effectively a pass-through).
  const transformStream = new TransformStream({
    async transform(chunk, controller) {
      const content = decoder.decode(chunk);
      controller.enqueue(encoder.encode(content));
    },
  });

  // Note: the ReadableStream constructor is synchronous, so no await is needed.
  const readableStream = new ReadableStream({
    async start(controller) {
      console.log("Starting streaming response");
      // It doesn't iterate over the body here.
      for await (const chunk of res.body as any) {
        console.log("Chunk: ", decoder.decode(chunk));
        controller.enqueue(chunk);
      }
      // controller.close();
    },
    async pull(controller) {
      controller.close();
    },
  });

  return new Response(readableStream.pipeThrough(transformStream), {
    headers: {
      "Content-Type": "text/event-stream",
      "X-Content-Type-Options": "nosniff",
    },
  });
}
See the readable stream above: where it should iterate over res.body, the for await loop never runs in production, and therefore there's an error.
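As a possible workaround, here is a minimal sketch under my own assumptions, not a verified fix for this repo (UPSTREAM_SSE_URL is a hypothetical placeholder): on the Edge Runtime, res.body is already a web ReadableStream, so it can be handed to the Response directly instead of re-wrapping it by hand. That also sidesteps the pull() handler above, which closes the controller and could end the stream early:

const UPSTREAM_SSE_URL = "https://example.com/stream"; // hypothetical upstream

export const runtime = "edge";

export async function POST(req: Request) {
  const body = await req.json();

  const res = await fetch(UPSTREAM_SSE_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });

  // Return the upstream body as-is; the runtime streams it through
  // without any manual iteration or intermediate ReadableStream.
  return new Response(res.body, {
    headers: {
      "Content-Type": "text/event-stream",
      "X-Content-Type-Options": "nosniff",
    },
  });
}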
Hi, @haritiruna. I'm Dosu, and I'm helping the gpt4-pdf-chatbot-langchain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, the issue is that streaming does not work when the project is deployed using Vercel edge functions. It seems that a Node.js API, specifically process.nextTick, is not supported in the Edge Runtime. Some suggestions have been made, such as changing the setup to use REST APIs instead of the Pinecone SDK. Additionally, one user shared a solution involving adding specific headers to the response. However, another user reported that this solution did not work for them and shared their code, where the iteration over the response body fails.
Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the gpt4-pdf-chatbot-langchain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your understanding and contribution to the project.
> For anyone else coming in from Google: I'd been struggling with this problem for what seems like forever, and none of the answers from Google, Stack Overflow (or even this thread) worked.
> I found success by adding the following headers to my response. If you want to understand why, look up the X-Content-Type-Options: nosniff header.
> return new Response(stream, { headers: { 'Content-Type': 'text/event-stream', 'X-Content-Type-Options': 'nosniff' } });
Running Vercel Edge locally, adding this did indeed do the trick for me. Thank you sir.
@mayooear Could you please help @haritiruna with this issue? They have indicated that the problem is still relevant and have shared a potential solution involving adding specific headers to the response. Thank you!
Hi, @haritiruna,
I'm helping the gpt4-pdf-chatbot-langchain team manage their backlog and am marking this issue as stale. It seems like you encountered an error when trying to initialize the Pinecone client, which is not supported in the Vercel Edge Runtime. There have been suggestions from other users to use REST APIs instead of the Pinecone SDK and to add specific headers to the response.
Could you please confirm if this issue is still relevant to the latest version of the gpt4-pdf-chatbot-langchain repository? If it is, please let the gpt4-pdf-chatbot-langchain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you!
I got streaming to work using an older commit of this repo, and everything works fine locally. However, when I deploy the app on Vercel, it doesn't stream the responses anymore.
I believe I have to use edge functions in order to do that, and I followed this tutorial to convert the current chat.ts API route into an edge function, but I'm getting the following error while initialising the Pinecone client:

Any ideas as to why this is happening? Or any other suggestions to deploy this code and make streaming work?
Here is my code:
chat.ts
stream.ts