quirrel-dev / quirrel

The Task Queueing Solution for Serverless.
https://quirrel.dev

Vercel Long Running task (edge function) #1151

Open Skn0tt opened 1 year ago

Skn0tt commented 1 year ago

Discussed in https://github.com/quirrel-dev/quirrel/discussions/1150

Originally posted by **nilooy**, June 25, 2023:

Is it possible to use a Quirrel queue with a Vercel Edge Function? I was looking specifically to run something like https://github.com/inngest/vercel-ai-sdk/blob/main/examples/next-openai/app/api/chat/route.ts as a background job via Quirrel. I tried the following approach:

```js
import { Queue as TestQueue } from "quirrel/next";
import { Configuration, OpenAIApi } from "openai-edge";
import { OpenAIStream, StreamingTextResponse } from "ai";

export const runtime = "edge";

const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(config);

// @ts-ignore
export default TestQueue("api/test", async (params) => {
  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    stream: true,
    messages: [{ role: "user", content: "explain the next js" }],
  });

  const stream = OpenAIStream(response);
  // Respond with the stream
  return new StreamingTextResponse(stream);
});
```

and enqueued it from another route:

```js
await TestQueue.enqueue({ test: 123 });
```

This results in the following error when the job runs:

```bash
👟Executing job queue: /api/test
id: 7f0226c0-4824-4671-9efa-e926484e95ae
body: {"test":123}
error - node_modules/quirrel/dist/esm/src/client/enhanced-json.js (13:0) @ Module.parse
error - Unexpected token o in JSON at position 1
null
```

> It worked perfectly without `edge`.
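For context, this error class is usually a string-coercion artifact: `JSON.parse` was handed something that is already an object, not a JSON string (my reading of the symptom, not confirmed against Quirrel's source). A minimal reproduction:

```js
// JSON.parse coerces a non-string argument to a string first.
const body = { test: 123 };
JSON.parse(body); // String(body) === "[object Object]"
// -> SyntaxError: Unexpected token o in JSON at position 1 (the "o" at index 1)
```
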
Skn0tt commented 1 year ago

In https://github.com/quirrel-dev/quirrel/discussions/1150#discussioncomment-6278830, @nilooy mentions that it works when switching to quirrel/next-app.

Skn0tt commented 1 year ago

Hi @nilooy! According to the Next.js docs, switching to `runtime = "edge"` changes the API of API Routes completely. You won't be able to use `runtime = "edge"` in conjunction with `quirrel/next`, as you already found out. Now you're mentioning that with `quirrel/next-app`, "streaming doesn't work". Can you go into more detail on that? What exactly doesn't work? Is it related to Quirrel? A reproduction case for that would be lovely.
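For illustration, the two handler shapes look roughly like this (simplified; the exact config syntax varies by Next.js version, and the two exports would live in separate files):

```js
// pages/api/node-example.js — Node runtime, what quirrel/next builds on:
export default function nodeHandler(req, res) {
  res.status(200).json({ ok: true });
}

// pages/api/edge-example.js — Edge runtime: web-standard Request in, Response out:
export const config = { runtime: "edge" };
export default function edgeHandler(request) {
  return new Response(JSON.stringify({ ok: true }), {
    headers: { "content-type": "application/json" },
  });
}
```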

nilooy commented 1 year ago

> In #1150 (comment), @nilooy mentions that it works when switching to `quirrel/next-app`.

Thanks for converting this to an issue. The error I mentioned went away after switching from `quirrel/next` to `quirrel/next-app`, but I'm not sure how to make the streaming work.
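For reference, the `quirrel/next-app` variant wires the queue up as an app-router route handler; a sketch along the lines of Quirrel's README (the route path and payload here are illustrative):

```js
// app/api/queues/test/route.js
import { Queue } from "quirrel/next-app";

export const testQueue = Queue("api/queues/test", async (payload) => {
  // long-running work goes here; the return value is not streamed back
  console.log("got payload", payload);
});

export const POST = testQueue;
```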

So, as far as I understand, Quirrel calls the API route back at the scheduled time (or right away, as a background task). That route is hosted on Vercel and has a timeout limit of 60 s, but I need a long-running task. In the code block below, I want to keep the process running until OpenAI finishes streaming its response. The main goal is to bypass Vercel's 60-second timeout, which can be extended by streaming the response.

```js
export default TestQueue("api/test", async (params) => {
  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    stream: true,
    messages: [{ role: "user", content: "explain the next js" }],
  });

  const stream = OpenAIStream(response);
  // Respond with the stream
  return new StreamingTextResponse(stream);
});
```

This is mentioned in the Vercel streaming docs: https://vercel.com/docs/concepts/functions/edge-functions/streaming

> Edge Functions must begin sending a response within 30 seconds to fall within the maximum initial response time. Once a reply is made, the function can continue to run. This means that you can use Edge Functions to stream data, and the response will be delivered as soon as the first chunk of data is available.
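For illustration, a minimal edge-streaming sketch (plain Web Streams API, independent of Quirrel) that flushes a first chunk early and keeps writing afterwards:

```js
// pages/api/stream-example.js — hypothetical route, not from this thread
export const config = { runtime: "edge" };

export default function handler() {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      // First chunk goes out right away, inside the 30s initial-response window.
      controller.enqueue(encoder.encode("started\n"));
      for (let i = 0; i < 5; i++) {
        await new Promise((resolve) => setTimeout(resolve, 1000)); // slow work
        controller.enqueue(encoder.encode(`chunk ${i}\n`));
      }
      controller.close();
    },
  });
  return new Response(stream, { headers: { "content-type": "text/plain" } });
}
```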

Skn0tt commented 1 year ago

I don't think that's currently possible with Quirrel. You're returning a `StreamingTextResponse`, but Quirrel doesn't access that return value in any way; it always returns "OK" as the response body:

https://github.com/quirrel-dev/quirrel/blob/139c74c42bdbe27f9b9f768dd0a13b55c1f0ab3d/src/client/index.ts#L720
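In other words (a simplified paraphrase of the linked line, not the actual Quirrel source):

```js
// The job handler's return value is discarded; the HTTP response body is
// always the literal string "OK".
async function runJob(userHandler, payload) {
  await userHandler(payload); // a returned StreamingTextResponse goes nowhere
  return { status: 200, body: "OK" };
}
```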

Before we think about solving this, please elaborate on your use case. What's the reason you're accessing OpenAI from a queue, where you can't send data to your frontend?

nilooy commented 1 year ago

The main use case is this: I'm making an OpenAI call with quite a large prompt and then parsing the response into JSON; in total the process can take up to 2 minutes. I can't make my users wait that long in the frontend, so I have to fan the work out into background jobs and notify users when it's done, instead of keeping them on the page for 2 minutes.
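The usual shape of that fan-out pattern looks roughly like this (a sketch with hypothetical helpers `callOpenAi`, `resultStore`, and `notifyUser`; note that the queue-invoked route itself would still be subject to Vercel's function timeout, which is the crux of this issue):

```js
import { Queue } from "quirrel/next-app";

export const openAiQueue = Queue("api/queues/openai", async ({ jobId, prompt }) => {
  // Runs in the background, outside any user-facing request.
  const completion = await callOpenAi(prompt); // hypothetical OpenAI helper
  const parsed = JSON.parse(completion);       // parse the model output into JSON
  await resultStore.save(jobId, parsed);       // hypothetical persistence layer
  await notifyUser(jobId);                     // e.g. websocket, push, or email
});

export const POST = openAiQueue;
```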

At the moment there are a few discussions around this, but no solid solution except Defer or building a custom one.

Some related discussions:
- https://www.reddit.com/r/nextjs/comments/uhhmga/best_way_to_deal_with_long_background_jobs_when/
- https://github.com/vercel/next.js/discussions/34266

Skn0tt commented 1 year ago

Makes sense, thank you!

I don't think Quirrel currently supports that, and I'd need to think a bit about the best way to implement support for these long-running jobs. Have you looked into https://www.inngest.com/? Would something like that solve your needs?

nilooy commented 1 year ago

Ok, perfect. Inngest solves this issue in a very different way, but with it I'd need to change my entire Next.js API workflow, and I'd end up with strong vendor lock-in. I have another approach I tested with a GCP function; I'll probably go with that, or build a Node.js server with BullMQ. I'll check back later in case you have any solutions in mind for Quirrel. By the way, Quirrel is really good. Love it.
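For completeness, the BullMQ route mentioned above would look roughly like this (assumes a long-lived Node.js process and a Redis instance; queue and job names are illustrative):

```js
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };

// Producer side: enqueue from e.g. a Next.js API route.
const jobs = new Queue("openai-jobs", { connection });
await jobs.add("completion", { prompt: "explain the next js" });

// Consumer side: a worker process with no serverless timeout.
new Worker(
  "openai-jobs",
  async (job) => {
    // ...call OpenAI with job.data.prompt, parse, persist, notify...
  },
  { connection }
);
```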

nilooy commented 1 year ago

@Skn0tt can you please rename the issue to "Vercel Long Running task (edge function)"? That might draw interest from other people who have the same needs.

It doesn't allow me to change it myself.