supabase / edge-runtime

A server based on Deno runtime, capable of running JavaScript, TypeScript, and WASM services.
MIT License

Edge function does not execute in parallel #121

Closed candymandev closed 1 year ago

candymandev commented 2 years ago

I have an edge function generate-image. When I invoke the function locally many times, each invocation runs in parallel as I expect.

[screenshot: local invocations]

When I deploy the function to production with the same code to invoke it many times, it seems to run sequentially. Something on the server side seems to be limiting the function to run only one execution at a time.

[screenshot: production invocations]

According to the function metrics in Supabase, the function executes in less than a second. The duration of the invocations is pretty consistent. So it seems that although the HTTP requests are initiated properly, something on the server side is causing each subsequent invocation to start only when the last one completes.

[screenshot: function metrics, 2022-06-11]

My expectation is that these functions would run completely in parallel up to the limits of my pro plan. Having these functions run sequentially misses out on one of the biggest benefits of serverless edge functions in my opinion. Is this a limitation of my plan? Is there a way to improve the performance of my function?
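A simple client-side probe can make the sequential-vs-parallel behavior visible. The helper below is a sketch (the function name, project URL, and key in the usage comment are placeholders, not from the reports above): start every invocation at once and record each completion offset from a shared start time.

```typescript
// Minimal concurrency probe: start all tasks at once and record when each
// finishes, measured from a common start time. If the server handles requests
// in parallel, completion offsets cluster together; if it serializes them,
// they step up (~1x, ~2x, ~3x the single-request duration).
export async function timeAll<T>(
  tasks: Array<() => Promise<T>>,
): Promise<number[]> {
  const started = Date.now();
  return Promise.all(
    tasks.map(async (task) => {
      await task();
      return Date.now() - started; // completion offset in ms
    }),
  );
}

// Hypothetical usage against a deployed function (placeholder URL and key):
// const offsets = await timeAll(
//   Array.from({ length: 5 }, () => () =>
//     fetch("https://YOUR-PROJECT.supabase.co/functions/v1/generate-image", {
//       method: "POST",
//       headers: { Authorization: "Bearer YOUR_ANON_KEY" },
//     })),
// );
// console.log(offsets); // parallel: all roughly equal; sequential: stepped
```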

candymandev commented 2 years ago

For comparison, here is the same edge function running on Deno Deploy. The invocations execute in parallel as I expect.

[screenshot: Deno Deploy invocations, 2022-06-13]

They take much longer and I'm hitting memory limits on Deno Deploy when I do a lot of these... but that's another issue altogether :joy:

laktek commented 2 years ago

@candymandev Sorry for the late reply on this issue. Looks like the wait times are in the function relay. We will need to investigate what's causing it.

julian-amaya commented 1 year ago

Do we have any update on this issue?

ChuckJonas commented 1 year ago

@laktek is there any update on this? I'm running into it in local dev (and was hoping it was just an issue with the lower environment), but if this is actually a production issue it's a major concern and will impact an upcoming production release targeting Supabase.

ChuckJonas commented 1 year ago

Update: Looks like this is just an issue with local dev now?

I created this simple function to test

import { serve } from "https://deno.land/std@0.168.0/http/server.ts";

console.log("Hello from Functions!");

serve(async (req) => {
  const { name } = await req.json();
  const data = {
    message: `Hello ${name}!`,
  };

  await new Promise((resolve) => setTimeout(resolve, 10000));

  return new Response(JSON.stringify(data), {
    headers: { "Content-Type": "application/json" },
  });
});

If I hit it three times in rapid succession locally, my response times are ~10s, ~20s, ~30s.

Ran the same test in production and the response times were all ~10s (thank god).

This is still an issue though, as it makes developing scenarios that require concurrent requests very challenging.
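The rapid-succession test above can be scripted. This is a sketch of a driver for it (the local URL and function name in the usage comment are the Supabase CLI defaults and may differ in your setup): fire `n` concurrent POSTs and return each response time relative to a shared start.

```typescript
// Drive a function with `n` concurrent requests and return each response
// time measured from a shared start. With a 10s sleep in the handler,
// sequential handling shows ~10s, ~20s, ~30s; parallel shows ~10s each.
export async function fireConcurrent(
  url: string,
  n: number,
): Promise<number[]> {
  const start = Date.now();
  return Promise.all(
    Array.from({ length: n }, async (_, i) => {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ name: `req-${i}` }),
      });
      await res.text(); // drain the body so the request fully completes
      return Date.now() - start;
    }),
  );
}

// Assumed local serve URL (CLI default port; adjust to your setup):
// const times = await fireConcurrent(
//   "http://localhost:54321/functions/v1/hello",
//   3,
// );
```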

laktek commented 1 year ago

@ChuckJonas Thanks for reporting it! Transferring to edge-runtime repo to further investigate.

github-actions[bot] commented 1 year ago

:tada: This issue has been resolved in version 1.5.1 :tada:

The release is available on GitHub release

Your semantic-release bot :package::rocket:

AndryHTC commented 3 months ago

I'm using version 1.5.5, and functions are not being executed in parallel. Only the first one is processed, while the others remain pending. If they are not executed within a few seconds, they receive a 503 error. The following log appears in the functions serve CLI output:

InvalidWorkerCreation: worker did not respond in time
    at async UserWorker.create (ext:sb_user_workers/user_workers.js:144:15)
    at async Object.handler (file:///root/index.ts:147:22)
    at async respond (ext:sb_core_main_js/js/http.js:163:14) {
  name: "InvalidWorkerCreation"
}

tipo122 commented 1 month ago

I encountered the same issue, and resolved it by changing the policy in config.toml under [edge_runtime] from oneshot to per_worker:

[edge_runtime]
policy = "per_worker"