supabase / cli

Supabase CLI. Manage Postgres migrations, run Supabase locally, deploy edge functions, back up Postgres, and generate types from your database schema.
https://supabase.com/docs/reference/cli/about
MIT License

'supabase start' frequently fails with 'service not healthy' #778

Closed · floitsch closed this 1 year ago

floitsch commented 1 year ago

Bug report

Describe the bug

Running supabase start on GitHub builders frequently fails with 'service not healthy'.

To Reproduce

Steps to reproduce the behavior, please provide code snippets or a repository:

  1. Create a GitHub Actions workflow with supabase/setup-cli@v1 and version: latest.
  2. In the workflow, start the Supabase setup with supabase start.
  3. Optionally, for good measure, do this with 2 other Supabase configurations (using different ports), as sketched below.
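
A minimal sketch of the workflow's run step, assuming the CLI was installed by supabase/setup-cli; project-a/b/c are hypothetical directories, each holding a config.toml with distinct ports:

  # each --workdir points at a separate Supabase project directory
  supabase start --workdir ./project-a
  supabase start --workdir ./project-b
  supabase start --workdir ./project-c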

Expected behavior

The builder starts cleanly without errors.

GitHub Actions log

Run supabase start
  supabase start
  supabase status
  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
Pulling images... (1/13)
Pulling images... (1/13)
Pulling images... (2/13)
Pulling images... (3/13)
Pulling images... (4/13)
Pulling images... (5/13)
Pulling images... (6/13)
Pulling images... (7/13)
Pulling images... (8/13)
Pulling images... (9/13)
Pulling images... (10/13)
Pulling images... (11/13)
Pulling images... (12/13)
Starting database...
Restoring branches...
Setting up initial schema...
Applying migration 20230105212858_initial.sql...
Seeding data supabase/seed.sql...
Starting containers...
Error: service not healthy: [supabase_storage_supabase_test supabase_pg_meta_supabase_test supabase_studio_supabase_test]
Try rerunning the command with --debug to troubleshoot the error.
Error: Process completed with exit code 1.

Unfortunately, I wasn't able to get a better log with --debug. With the debug flag, the action didn't exhibit the problem.

System information

GitHub builder, ubuntu-latest (22.04). See https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources

Additional context

Older versions don't show the error.

This was most likely introduced by the "fix" to https://github.com/supabase/cli/issues/146 (https://github.com/supabase/cli/pull/770).

sweatybridge commented 1 year ago

I can increase the wait time from 10s to 20s and see how it goes. You can also exclude services from starting if they are not needed, e.g.:

supabase start -x storage-api,postgres-meta,studio

If you are testing migrations only, just the database needs to be started: supabase db start
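
For example, a CI job that only exercises migrations could be trimmed to one of these (a sketch based on the flags above; adjust the exclude list to whatever your tests actually touch):

  # database only, enough for migration tests
  supabase db start

  # or keep the API but skip the dashboard-related services
  supabase start -x storage-api,postgres-meta,studio,imgproxy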

kesdigital commented 1 year ago

I'm experiencing a similar error, albeit on my local machine.

System info

  System:
    OS: Linux 6.1 Manjaro Linux
    CPU: (4) x64 Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz
    Memory: 1.22 GB / 3.73 GB
    Container: Yes
    Shell: 5.9 - /usr/bin/zsh
  Binaries:
    Node: 18.13.0 - ~/.local/share/pnpm/node
    npm: 8.19.3 - ~/.local/share/pnpm/npm
    pnpm: 7.25.0 - ~/.local/share/pnpm/pnpm
  Browsers:
    Brave Browser: 109.1.47.171
    Firefox: 108.0.1
  npmPackages:
    supabase: ^1.33.0 => 1.33.0 
  Docker version 20.10.22

When I run docker ps while supabase start is running, it shows the container from image public.ecr.aws/supabase/storage-api:v0.26.1 as unhealthy. The actual error after supabase start exits is:

Error: service not healthy: [supabase_storage_supalist supabase_pg_meta_supalist supabase_studio_supalist]
Try rerunning the command with --debug to troubleshoot the error.
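
One way to see why Docker marks a particular container unhealthy is to inspect its health-check state, which records the last few probe results (a generic Docker command; the container name is taken from the error above):

  docker inspect --format '{{json .State.Health}}' supabase_storage_supalist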

Extra information

This is my first time running supabase locally.

(EDIT) I have tried this on a fresh Ubuntu Linode and got a similar error:

Error: service not healthy: [supabase_studio_supalist]
Try rerunning the command with --debug to troubleshoot the error.
System:
  OS: Linux 5.15 Ubuntu 22.04.1 LTS 22.04.1 LTS (Jammy Jellyfish)
  CPU: (1) x64 AMD EPYC 7542 32-Core Processor
  Memory: 658.89 MB / 969.45 MB
  Container: Yes
  Shell: 5.1.16 - /bin/bash
Binaries:
  Node: 18.13.0 - ~/.local/share/pnpm/node
  npm: 8.19.3 - ~/.local/share/pnpm/npm
  pnpm: 7.25.0 - ~/.local/share/pnpm/npm
npmPackages:
  supabase: ^1.33.0 => 1.33.0

fabiotisci commented 1 year ago

I am having the same issue. It started today for some reason. I get:

Error: service not healthy: [supabase_storage_XXX storage_imgproxy_XXX supabase_pg_meta_XXX supabase_studio_XXX]
Try rerunning the command with --debug to troubleshoot the error.

If I run docker ps:

CONTAINER ID   IMAGE                                                COMMAND                  CREATED              STATUS                             PORTS                                                                       NAMES
cedab4f6119b   public.ecr.aws/supabase/postgres-meta:v0.54.1        "docker-entrypoint.s…"   25 seconds ago       Up 11 seconds (health: starting)   8080/tcp                                                                    supabase_pg_meta_XXX
8113ed832671   public.ecr.aws/supabase/imgproxy:v3.8.0              "imgproxy"               35 seconds ago       Up 25 seconds (health: starting)   8080/tcp                                                                    storage_imgproxy_XXX
929285833d84   public.ecr.aws/supabase/storage-api:v0.26.1          "docker-entrypoint.s…"   44 seconds ago       Up 35 seconds (health: starting)   5000/tcp                                                                    supabase_storage_XXX
36d4778e25cd   public.ecr.aws/supabase/postgrest:v10.1.1.20221215   "/bin/postgrest"         53 seconds ago       Up 44 seconds                      3000/tcp                                                                    supabase_rest_XXX
0922b5de1e75   public.ecr.aws/supabase/realtime:v2.0.2              "/usr/bin/tini -s -g…"   59 seconds ago       Up 53 seconds (healthy)                                                                                        realtime-dev.supabase_realtime_XXX
4695c578baff   public.ecr.aws/supabase/inbucket:3.0.3               "/start-inbucket.sh …"   About a minute ago   Up 59 seconds (healthy)            0.0.0.0:54326->1100/tcp, 0.0.0.0:54325->2500/tcp, 0.0.0.0:54324->9000/tcp   supabase_inbucket_XXX
da06b56576ba   public.ecr.aws/supabase/gotrue:v2.40.1               "gotrue"                 About a minute ago   Up About a minute (healthy)                                                                                    supabase_auth_XXX
fc23b7c6b252   public.ecr.aws/supabase/kong:2.8.1                   "sh -c 'cat <<'EOF' …"   About a minute ago   Up About a minute (healthy)        8001/tcp, 8443-8444/tcp, 0.0.0.0:54321->8000/tcp                            supabase_kong_XXX
c89848b4d7fa   public.ecr.aws/supabase/postgres:15.1.0.21           "docker-entrypoint.s…"   About a minute ago   Up About a minute (healthy)        0.0.0.0:54322->5432/tcp                                                     supabase_db_XXX

Fabios-MacBook-Pro-2:XXX fabiotisci$ docker ps
CONTAINER ID   IMAGE                                                COMMAND                  CREATED              STATUS                             PORTS                                                                       NAMES
cfb45067ca0a   public.ecr.aws/supabase/studio:20221214-4eecc99      "docker-entrypoint.s…"   25 seconds ago       Up 11 seconds (health: starting)   0.0.0.0:54323->3000/tcp                                                     supabase_studio_XXX
cedab4f6119b   public.ecr.aws/supabase/postgres-meta:v0.54.1        "docker-entrypoint.s…"   39 seconds ago       Up 25 seconds (health: starting)   8080/tcp                                                                    supabase_pg_meta_XXX
8113ed832671   public.ecr.aws/supabase/imgproxy:v3.8.0              "imgproxy"               49 seconds ago       Up 38 seconds (health: starting)   8080/tcp                                                                    storage_imgproxy_XXX
929285833d84   public.ecr.aws/supabase/storage-api:v0.26.1          "docker-entrypoint.s…"   58 seconds ago       Up 49 seconds (unhealthy)          5000/tcp                                                                    supabase_storage_XXX
36d4778e25cd   public.ecr.aws/supabase/postgrest:v10.1.1.20221215   "/bin/postgrest"         About a minute ago   Up 58 seconds                      3000/tcp                                                                    supabase_rest_XXX
0922b5de1e75   public.ecr.aws/supabase/realtime:v2.0.2              "/usr/bin/tini -s -g…"   About a minute ago   Up About a minute (healthy)                                                                                    realtime-dev.supabase_realtime_XXX
4695c578baff   public.ecr.aws/supabase/inbucket:3.0.3               "/start-inbucket.sh …"   About a minute ago   Up About a minute (healthy)        0.0.0.0:54326->1100/tcp, 0.0.0.0:54325->2500/tcp, 0.0.0.0:54324->9000/tcp   supabase_inbucket_XXX
da06b56576ba   public.ecr.aws/supabase/gotrue:v2.40.1               "gotrue"                 About a minute ago   Up About a minute (healthy)                                                                                    supabase_auth_XXX
fc23b7c6b252   public.ecr.aws/supabase/kong:2.8.1                   "sh -c 'cat <<'EOF' …"   About a minute ago   Up About a minute (healthy)        8001/tcp, 8443-8444/tcp, 0.0.0.0:54321->8000/tcp                            supabase_kong_XXX
c89848b4d7fa   public.ecr.aws/supabase/postgres:15.1.0.21           "docker-entrypoint.s…"   About a minute ago   Up About a minute (unhealthy)      0.0.0.0:54322->5432/tcp  

Several of the containers are unhealthy.

sweatybridge commented 1 year ago

If a service container is unhealthy, it could be indicative of other issues with the container. I'm adding a flag to bypass these health checks so that you can dump the logs from those unhealthy containers. Without the logs, it will be quite difficult to figure out the root cause for this.

kesdigital commented 1 year ago

> If a service container is unhealthy, it could be indicative of other issues with the container. I'm adding a flag to bypass these health checks so that you can dump the logs from those unhealthy containers. Without the logs, it will be quite difficult to figure out the root cause for this.

Running supabase start --ignore-health-check works, but these services are still unhealthy

service not healthy: [supabase_storage_* supabase_pg_meta_* supabase_studio_*]
Started supabase local development setup.

docker ps shows the container from image public.ecr.aws/supabase/studio:20221214-4eecc99 as unhealthy, but after a few minutes it became healthy. What can I do to help pinpoint the problem?

sweatybridge commented 1 year ago

Could you help me get the logs of those containers which are unhealthy?

docker logs supabase_storage_*
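
To grab them all in one go, a small loop over whatever Docker currently reports as unhealthy works too (generic Docker commands):

  docker ps --filter health=unhealthy --format '{{.Names}}' | while read -r name; do
    echo "==== $name ===="
    docker logs "$name" 2>&1 | tail -n 50
  done
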
kesdigital commented 1 year ago

All containers are healthy now, but here are their logs.

docker logs supabase_studio_*

> studio@0.0.9 start
> next start

ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info  - Loaded env from /app/studio/.env

docker logs supabase_pg_meta_*

> @supabase/postgres-meta@0.0.0-automated start
> node dist/server/app.js

(node:244) ExperimentalWarning: Importing JSON modules is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
{"level":"info","time":"2023-01-20T08:16:06.585Z","pid":244,"hostname":"cb4d234efa4f","msg":"Server listening at http://0.0.0.0:8080"}
{"level":"info","time":"2023-01-20T08:16:06.585Z","pid":244,"hostname":"cb4d234efa4f","msg":"App started on port 8080"}
{"level":"info","time":"2023-01-20T08:16:06.899Z","pid":244,"hostname":"cb4d234efa4f","msg":"Server listening at http://0.0.0.0:8081"}
{"level":"info","time":"2023-01-20T08:16:06.900Z","pid":244,"hostname":"cb4d234efa4f","msg":"Admin App started on port 8081"}

docker logs storage_imgproxy_*

WARNING [2023-01-20T08:13:18Z] No keys defined, so signature checking is disabled 
WARNING [2023-01-20T08:13:18Z] No salts defined, so signature checking is disabled 
WARNING [2023-01-20T08:13:18Z] Exposing root via IMGPROXY_LOCAL_FILESYSTEM_ROOT is unsafe 
INFO    [2023-01-20T08:13:18Z] Starting server at :5001 
INFO    [2023-01-20T08:13:20Z] Started /health  request_id=2Kt7lVxv6PBlOYpj1y9Wt method=GET client_ip=127.0.0.1
INFO    [2023-01-20T08:13:20Z] Completed in 57.425µs /health  request_id=2Kt7lVxv6PBlOYpj1y9Wt method=GET status=200 client_ip=127.0.0.1
INFO    [2023-01-20T08:13:23Z] Started /health  request_id=hQD_zluzqcByMEC9u0jCw method=GET client_ip=127.0.0.1
INFO    [2023-01-20T08:13:23Z] Completed in 41.783113ms /health  request_id=hQD_zluzqcByMEC9u0jCw method=GET status=200 client_ip=127.0.0.1

...

docker logs supabase_storage_*

2023-01-20T08:13:21: PM2 log: Launching in no daemon mode
2023-01-20T08:13:25: PM2 log: App [server:0] starting in -fork mode-
2023-01-20T08:13:25: PM2 log: App [server:0] online
running migrations
finished migrations
{"level":"info","time":"2023-01-20T08:16:10.784Z","pid":60,"hostname":"dbf349e0ad6c","msg":"Server listening at http://0.0.0.0:5000"}
Server listening at http://0.0.0.0:5000
{"level":"info","time":"2023-01-20T08:17:29.153Z","pid":60,"hostname":"dbf349e0ad6c","reqId":"req-t","tenantId":"stub","project":"stub","results":[],"msg":"results"}
{"level":"info","time":"2023-01-20T08:17:29.160Z","pid":60,"hostname":"dbf349e0ad6c","reqId":"req-t","tenantId":"stub","project":"stub","req":{"method":"GET","url":"/bucket","headers":{"host":"supabase_storage_wavedj-ug:5000","x_forwarded_proto":"http","x_real_ip":"172.19.0.1","x_client_info":"supabase-js/2.1.1","user_agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36","accept":"*/*","referer":"http://localhost:54323/"},"hostname":"supabase_storage_wavedj-ug:5000","remoteAddress":"172.19.0.3","remotePort":54976},"res":{"statusCode":200},"responseTime":640.5188610004261,"msg":"GET | 200 | 172.19.0.3 | req-t | /bucket | Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"}
{"level":"info","time":"2023-01-20T08:27:25.980Z","pid":60,"hostname":"dbf349e0ad6c","reqId":"req-77","tenantId":"stub","project":"stub","results":[],"msg":"results"}

...

docker logs supabase_rest_*

20/Jan/2023:08:12:55 +0000: Attempting to connect to the database...
20/Jan/2023:08:12:57 +0000: Connection successful
20/Jan/2023:08:12:57 +0000: Listening on port 3000
20/Jan/2023:08:12:57 +0000: Listening for notifications on the pgrst channel
20/Jan/2023:08:12:57 +0000: Config reloaded
20/Jan/2023:08:12:58 +0000: Schema cache loaded
20/Jan/2023:08:13:03 +0000: Schema cache loaded

docker logs realtime-dev.supabase_realtime_*

08:13:02.075 [info] == Running 20230110180046 Realtime.Repo.Migrations.AddLimitsFieldsToTenants.change/0 forward
08:13:02.329 [info] alter table tenants
08:13:02.446 [info] == Migrated 20230110180046 in 0.0s
08:13:23.473 [debug] QUERY OK db=167.4ms queue=3148.3ms idle=0.0ms
begin []
08:13:24.470 [debug] QUERY OK source="tenants" db=108.4ms
SELECT t0."id", t0."name", t0."external_id", t0."jwt_secret", t0."postgres_cdc_default", t0."max_concurrent_users", t0."max_events_per_second", t0."max_bytes_per_second", t0."max_channels_per_client", t0."max_joins_per_second", t0."inserted_at", t0."updated_at" FROM "tenants" AS t0 WHERE (t0."external_id" = $1) ["realtime-dev"]
08:13:25.902 [debug] QUERY OK source="extensions" db=0.8ms
DELETE FROM "extensions" AS e0 WHERE (e0."tenant_external_id" = $1) ["realtime-dev"]
08:13:26.158 [debug] QUERY OK db=0.6ms
DELETE FROM "tenants" WHERE "id" = $1 [<<27, 74, 46, 155, 230, 156, 64, 220, 186, 91, 51, 124, 14, 237, 44, 136>>]
08:13:27.439 [debug] QUERY OK db=0.4ms
INSERT INTO "tenants" ("external_id","jwt_secret","max_bytes_per_second","max_channels_per_client","max_concurrent_users","max_events_per_second","max_joins_per_second","name","inserted_at","updated_at","id") VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11) ["realtime-dev", "iNjicxc4+llvc9wovDvqymwfnj9teWMlyOIbJ8Fh6j2WNU8CIJ2ZgjR6MUIKqSmeDmvpsKLsZ9jgXJmQPpwL8w==", 100000, 100, 200, 100, 500, "realtime-dev", ~N[2023-01-20 08:13:27], ~N[2023-01-20 08:13:27], <<172, 173, 127, 88, 125, 24, 64, 180, 151, 105, 18, 255, 75, 12, 16, 115>>]
08:13:27.892 [debug] QUERY OK db=419.1ms
INSERT INTO "extensions" ("settings","tenant_external_id","type","inserted_at","updated_at","id") VALUES ($1,$2,$3,$4,$5,$6) [%{"db_host" => "CGSMAJs4R39ttxGDbuRZQ1Lyh29dY6HKGWWO8Qn/mJg=", "db_name" => "sWBpZNdjggEPTQVlI52Zfw==", "db_password" => "sWBpZNdjggEPTQVlI52Zfw==", "db_port" => "+enMDFi1J/3IrrquHHwUmA==", "db_user" => "sWBpZNdjggEPTQVlI52Zfw==", "ip_version" => 4, "poll_interval_ms" => 100, "poll_max_changes" => 100, "poll_max_record_bytes" => 1048576, "publication" => "supabase_realtime", "region" => "us-east-1", "slot_name" => "supabase_realtime_replication_slot"}, "realtime-dev", "postgres_cdc_rls", ~N[2023-01-20 08:13:27], ~N[2023-01-20 08:13:27], <<47, 170, 70, 2, 21, 22, 64, 93, 178, 28, 221, 203, 241, 17, 177, 48>>]
08:13:27.990 [debug] QUERY OK db=98.2ms
commit []
08:13:56.852 [notice]     :alarm_handler: {:set, {:system_memory_high_watermark, []}}
08:13:57.599 [info] Elixir.Realtime.SignalHandler is being initialized...
08:13:57.600 [notice] SYN[realtime@127.0.0.1] Adding node to scope <users>
08:13:57.600 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <users>
08:13:57.600 [notice] SYN[realtime@127.0.0.1|registry<users>] Discovering the cluster
08:13:57.601 [notice] SYN[realtime@127.0.0.1|pg<users>] Discovering the cluster
08:13:57.601 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.RegionNodes>
08:13:57.601 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.RegionNodes>
08:13:57.601 [notice] SYN[realtime@127.0.0.1|registry<Elixir.RegionNodes>] Discovering the cluster
08:13:57.601 [notice] SYN[realtime@127.0.0.1|pg<Elixir.RegionNodes>] Discovering the cluster
08:13:57.621 [info] Running RealtimeWeb.Endpoint with cowboy 2.9.0 at :::4000 (http)
08:13:57.621 [info] Access RealtimeWeb.Endpoint at http://realtime.fly.dev
08:13:57.622 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.PostgresCdcStream>
08:13:57.622 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.PostgresCdcStream>
08:13:57.623 [notice] SYN[realtime@127.0.0.1|registry<Elixir.PostgresCdcStream>] Discovering the cluster
08:13:57.623 [notice] SYN[realtime@127.0.0.1|pg<Elixir.PostgresCdcStream>] Discovering the cluster
08:13:57.625 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.Extensions.PostgresCdcRls>
08:13:57.625 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.Extensions.PostgresCdcRls>
08:13:57.625 [notice] SYN[realtime@127.0.0.1|registry<Elixir.Extensions.PostgresCdcRls>] Discovering the cluster
08:13:57.625 [notice] SYN[realtime@127.0.0.1|pg<Elixir.Extensions.PostgresCdcRls>] Discovering the cluster
08:14:00.725 [debug] Tzdata polling for update.
08:14:03.054 [info] tzdata release in place is from a file last modified Fri, 22 Oct 2021 02:20:47 GMT. Release file on server was last modified Tue, 29 Nov 2022 17:25:53 GMT.
08:14:03.055 [debug] Tzdata downloading new data from https://data.iana.org/time-zones/tzdata-latest.tar.gz
08:14:04.541 [debug] Tzdata data downloaded. Release version 2022g.
08:14:05.548 [info] Tzdata has updated the release from 2021e to 2022g
08:14:05.548 [debug] Tzdata deleting ETS table for version 2021e
08:14:05.553 [debug] Tzdata deleting ETS table file for version 2021e

docker logs supabase_inbucket_*

Installing default greeting.html to /config
{"level":"info","phase":"startup","version":"v3.0.3","buildDate":"2022-08-08T02:52:31+00:00","time":"2023-01-20T08:12:45Z","message":"Inbucket starting"}
{"level":"info","phase":"startup","module":"storage","time":"2023-01-20T08:12:45Z","message":"Retention configured for 72h0m0s"}
{"level":"info","module":"web","phase":"startup","path":"ui","time":"2023-01-20T08:12:45Z","message":"Web UI content mapped"}
{"level":"info","module":"smtp","phase":"startup","addr":"0.0.0.0:2500","time":"2023-01-20T08:12:45Z","message":"SMTP listening on tcp4"}
{"level":"info","module":"web","phase":"startup","addr":"0.0.0.0:9000","time":"2023-01-20T08:12:45Z","message":"HTTP listening on tcp4"}
{"level":"info","module":"pop3","phase":"startup","addr":"0.0.0.0:1100","time":"2023-01-20T08:12:45Z","message":"POP3 listening on tcp4"}

docker logs supabase_auth_*

{"level":"info","msg":"Go runtime metrics collection started","time":"2023-01-20T08:12:44Z"}
{"component":"pop","level":"info","msg":"Migrations already up to date, nothing to apply","time":"2023-01-20T08:12:44Z"}
{"args":[0.028376076],"component":"pop","level":"info","msg":"%.4f seconds","time":"2023-01-20T08:12:44Z"}
{"level":"info","msg":"GoTrue migrations applied successfully","time":"2023-01-20T08:12:44Z"}
{"component":"api","level":"warning","msg":"DEPRECATION NOTICE: GOTRUE_JWT_ADMIN_GROUP_NAME not supported by Supabase's GoTrue, will be removed soon","time":"2023-01-20T08:12:44Z"}
{"component":"api","level":"warning","msg":"DEPRECATION NOTICE: GOTRUE_JWT_DEFAULT_GROUP_NAME not supported by Supabase's GoTrue, will be removed soon","time":"2023-01-20T08:12:44Z"}
{"level":"info","msg":"GoTrue API started on: 0.0.0.0:9999","time":"2023-01-20T08:12:44Z"}

docker logs supabase_kong_*

2023/01/20 08:12:35 [warn] 8#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
2023/01/20 08:12:44 [notice] 8#0: using the "epoll" event method
2023/01/20 08:12:44 [notice] 8#0: openresty/1.19.9.1
2023/01/20 08:12:44 [notice] 8#0: built by gcc 6.4.0 (Alpine 6.4.0) 
2023/01/20 08:12:44 [notice] 8#0: OS: Linux 6.1.1-1-MANJARO
2023/01/20 08:12:44 [notice] 8#0: getrlimit(RLIMIT_NOFILE): 1073741816:1073741816
2023/01/20 08:12:44 [notice] 8#0: start worker processes
2023/01/20 08:12:44 [notice] 8#0: start worker process 1123
2023/01/20 08:12:44 [notice] 8#0: start worker process 1124
2023/01/20 08:12:44 [notice] 8#0: start worker process 1125
2023/01/20 08:12:44 [notice] 8#0: start worker process 1126
2023/01/20 08:12:44 [notice] 1124#0: *2 [lua] init.lua:260: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
2023/01/20 08:12:44 [notice] 1124#0: *2 [lua] init.lua:260: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
2023/01/20 08:12:44 [notice] 1124#0: *2 [kong] init.lua:426 declarative config loaded from /home/kong/kong.yml, context: init_worker_by_lua*
2023/01/20 08:12:44 [notice] 1124#0: *2 [kong] init.lua:312 only worker #0 can manage, context: init_worker_by_lua*
2023/01/20 08:12:44 [notice] 1125#0: *3 [kong] init.lua:312 only worker #0 can manage, context: init_worker_by_lua*
2023/01/20 08:12:44 [notice] 1126#0: *4 [kong] init.lua:312 only worker #0 can manage, context: init_worker_by_lua*
172.19.0.1 - - [20/Jan/2023:08:14:04 +0000] "HEAD /rest/v1/ HTTP/1.1" 200 0 "-" "Go-http-client/1.1"
172.19.0.1 - - [20/Jan/2023:08:17:08 +0000] "OPTIONS /rest/v1/ HTTP/1.1" 200 0 "http://localhost:54323/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"
172.19.0.1 - - [20/Jan/2023:08:17:09 +0000] "HEAD /rest/v1/ HTTP/1.1" 200 0 "http://localhost:54323/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"

...

docker logs supabase_db_*

The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
    pg_ctl -D /var/lib/postgresql/data -l logfile start

waiting for server to start.... 2023-01-20 08:12:16.255 UTC [54] LOG:  pgaudit extension initialized
 2023-01-20 08:12:16.368 UTC [54] LOG:  pgsodium primary server secret key loaded
 2023-01-20 08:12:16.520 UTC [54] LOG:  redirecting log output to logging collector process
 2023-01-20 08:12:16.520 UTC [54] HINT:  Future log output will appear in directory "/var/log/postgresql".
. done
server started

/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/00-schema.sql
CREATE ROLE
REVOKE
CREATE SCHEMA
CREATE FUNCTION
REVOKE
GRANT

/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/01-extension.sql
CREATE SCHEMA
CREATE EXTENSION

/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/init-scripts

/usr/local/bin/docker-entrypoint.sh: sourcing /docker-entrypoint-initdb.d/migrate.sh

/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/migrations

waiting for server to shut down..... done
server stopped

PostgreSQL init process complete; ready for start up.

 2023-01-20 08:12:19.122 UTC [1] LOG:  pgaudit extension initialized
 2023-01-20 08:12:19.134 UTC [1] LOG:  pgsodium primary server secret key loaded
 2023-01-20 08:12:19.377 UTC [1] LOG:  redirecting log output to logging collector process
 2023-01-20 08:12:19.377 UTC [1] HINT:  Future log output will appear in directory "/var/log/postgresql".

kesdigital commented 1 year ago

I have noticed the containers are unhealthy at first, then become healthy after some time.

CONTAINER ID   IMAGE                                                COMMAND                  CREATED         STATUS                     PORTS                                                                                                                                   NAMES
60c069f0fc5e   public.ecr.aws/supabase/studio:20221214-4eecc99      "docker-entrypoint.s…"   3 minutes ago   Up 2 minutes (unhealthy)   0.0.0.0:54323->3000/tcp, :::54323->3000/tcp                                                                                             supabase_studio_*
da084eb8c92c   public.ecr.aws/supabase/postgres-meta:v0.58.0        "docker-entrypoint.s…"   4 minutes ago   Up 3 minutes (unhealthy)   8080/tcp                                                                                                                                supabase_pg_meta_*
cb401acf426a   public.ecr.aws/supabase/imgproxy:v3.8.0              "imgproxy"               4 minutes ago   Up 3 minutes (unhealthy)   8080/tcp                                                                                                                                storage_imgproxy_*
781e465f167b   public.ecr.aws/supabase/storage-api:v0.26.1          "docker-entrypoint.s…"   4 minutes ago   Up 4 minutes (unhealthy)   5000/tcp                                                                                                                                supabase_storage_*
6605005fe207   public.ecr.aws/supabase/postgrest:v10.1.1.20221215   "/bin/postgrest"         4 minutes ago   Up 4 minutes               3000/tcp                                                                                                                                supabase_rest_*
9dd2bccf12ab   public.ecr.aws/supabase/realtime:v2.1.0              "/usr/bin/tini -s -g…"   4 minutes ago   Up 2 minutes (healthy)                                                                                                                                             realtime-dev.supabase_realtime_*
0ca58a518e26   public.ecr.aws/supabase/inbucket:3.0.3               "/start-inbucket.sh …"   5 minutes ago   Up 4 minutes (healthy)     0.0.0.0:54326->1100/tcp, :::54326->1100/tcp, 0.0.0.0:54325->2500/tcp, :::54325->2500/tcp, 0.0.0.0:54324->9000/tcp, :::54324->9000/tcp   supabase_inbucket_*
a38ca7171307   public.ecr.aws/supabase/gotrue:v2.40.1               "gotrue"                 5 minutes ago   Up 5 minutes (healthy)                                                                                                                                             supabase_auth_*
1a1a6664fbd8   public.ecr.aws/supabase/kong:2.8.1                   "sh -c 'cat <<'EOF' …"   5 minutes ago   Up 5 minutes (healthy)     8001/tcp, 8443-8444/tcp, 0.0.0.0:54321->8000/tcp, :::54321->8000/tcp                                                                    supabase_kong_*
294eea43393f   public.ecr.aws/supabase/postgres:15.1.0.21           "docker-entrypoint.s…"   5 minutes ago   Up 5 minutes (healthy)     0.0.0.0:54322->5432/tcp, :::54322->5432/tcp                                                                                             supabase_db_*

After a couple of minutes:

CONTAINER ID   IMAGE                                                COMMAND                  CREATED          STATUS                    PORTS                                                                                                                                   NAMES
60c069f0fc5e   public.ecr.aws/supabase/studio:20221214-4eecc99      "docker-entrypoint.s…"   23 minutes ago   Up 22 minutes (healthy)   0.0.0.0:54323->3000/tcp, :::54323->3000/tcp                                                                                             supabase_studio_*
da084eb8c92c   public.ecr.aws/supabase/postgres-meta:v0.58.0        "docker-entrypoint.s…"   24 minutes ago   Up 23 minutes (healthy)   8080/tcp                                                                                                                                supabase_pg_meta_*
cb401acf426a   public.ecr.aws/supabase/imgproxy:v3.8.0              "imgproxy"               24 minutes ago   Up 24 minutes (healthy)   8080/tcp                                                                                                                                storage_imgproxy_*
781e465f167b   public.ecr.aws/supabase/storage-api:v0.26.1          "docker-entrypoint.s…"   24 minutes ago   Up 24 minutes (healthy)   5000/tcp                                                                                                                                supabase_storage_*
6605005fe207   public.ecr.aws/supabase/postgrest:v10.1.1.20221215   "/bin/postgrest"         24 minutes ago   Up 24 minutes             3000/tcp                                                                                                                                supabase_rest_*
9dd2bccf12ab   public.ecr.aws/supabase/realtime:v2.1.0              "/usr/bin/tini -s -g…"   25 minutes ago   Up 22 minutes (healthy)                                                                                                                                           realtime-dev.supabase_realtime_*
0ca58a518e26   public.ecr.aws/supabase/inbucket:3.0.3               "/start-inbucket.sh …"   25 minutes ago   Up 25 minutes (healthy)   0.0.0.0:54326->1100/tcp, :::54326->1100/tcp, 0.0.0.0:54325->2500/tcp, :::54325->2500/tcp, 0.0.0.0:54324->9000/tcp, :::54324->9000/tcp   supabase_inbucket_*
a38ca7171307   public.ecr.aws/supabase/gotrue:v2.40.1               "gotrue"                 25 minutes ago   Up 25 minutes (healthy)                                                                                                                                           supabase_auth_*
1a1a6664fbd8   public.ecr.aws/supabase/kong:2.8.1                   "sh -c 'cat <<'EOF' …"   25 minutes ago   Up 25 minutes (healthy)   8001/tcp, 8443-8444/tcp, 0.0.0.0:54321->8000/tcp, :::54321->8000/tcp                                                                    supabase_kong_*
294eea43393f   public.ecr.aws/supabase/postgres:15.1.0.21           "docker-entrypoint.s…"   25 minutes ago   Up 25 minutes (healthy)   0.0.0.0:54322->5432/tcp, :::54322->5432/tcp                                                                                             supabase_db_*

sweatybridge commented 1 year ago

Three minutes seems really long for a container to start; it usually takes less than 10 seconds.

Is this from a GitHub Actions runner? I suspect we are running into some resource constraints here.

kesdigital commented 1 year ago

> Three minutes seems really long for a container to start; it usually takes less than 10 seconds.
>
> Is this from a GitHub Actions runner? I suspect we are running into some resource constraints here.

Could be:

 Disk: 27G / 452G (7%)
 CPU: Intel Core i5-4200U @ 4x 2.6GHz [50.0°C]
 GPU: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev 09)
 RAM: 2952MiB / 3818MiB

sweatybridge commented 1 year ago

The standard GitHub Linux runner that we tested on has 7GB of RAM, which is more than your local machine or Linode. Are you able to run it on another instance with more RAM?

I suspect Docker will start paging containers to disk if it runs out of physical memory, significantly slowing down the start process.
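
To check what the host and Docker actually have available, something like this works (generic Linux/Docker commands):

  free -h
  docker info --format '{{.NCPU}} CPUs, {{.MemTotal}} bytes of RAM visible to Docker'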

kesdigital commented 1 year ago

> The standard GitHub Linux runner that we tested on has 7GB of RAM, which is more than your local machine or Linode. Are you able to run it on another instance with more RAM?
>
> I suspect Docker will start paging containers to disk if it runs out of physical memory, significantly slowing down the start process.

I just ran it on an 8GB RAM Linode, and it started fine with no errors, so I guess it is my machine's limited resources. Running the start command with --ignore-health-check fixes it for me, so I will keep using that. Thanks.

fabiotisci commented 1 year ago

> The standard GitHub Linux runner that we tested on has 7GB of RAM, which is more than your local machine or Linode. Are you able to run it on another instance with more RAM? I suspect Docker will start paging containers to disk if it runs out of physical memory, significantly slowing down the start process.
>
> I just ran it on an 8GB RAM Linode, and it started fine with no errors, so I guess it is my machine's limited resources. Running the start command with --ignore-health-check fixes it for me, so I will keep using that. Thanks.

When I run supabase start --help I get this:

  Start containers for Supabase local development

  Usage:
    supabase start [flags]

  Flags:
    -x, --exclude strings   Names of containers to not start. [gotrue, realtime, storage-api, imgproxy, kong, inbucket, postgrest, pgadmin-schema-diff, migra, postgres-meta, studio, deno-relay]
    -h, --help              help for start

  Global Flags:
        --debug             output debug logs to stderr
        --experimental      enable experimental features
        --workdir string    path to a Supabase project directory

This is with supabase CLI version 1.33.0, which does not list an --ignore-health-check flag.

kesdigital commented 1 year ago

> When I run supabase start --help I get this: Start containers for Supabase local development […]
>
> This is with supabase CLI version 1.33.0, which does not list an --ignore-health-check flag.

You have to update your CLI version; the version I am using is 1.34.5.

kouwasi commented 1 year ago

I have a similar issue. When I start the Supabase services while some heavy applications are running, I get the 'service not healthy' error.

I'm not familiar with the supabase/cli implementation, but the health check timeout threshold looks too short (20 sec?): https://github.com/supabase/cli/blob/main/internal/start/start.go#L483-L485

I guess this means you cannot run the Supabase services on a cheap computer. As an alternative to ignoring the health check, the timeout threshold could be computed with exponential backoff; that would be better than ignoring health checks entirely, and more convenient.
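
In the meantime, the retry-with-backoff idea can be scripted around the CLI from outside (a rough sketch; supabase stop --no-backup resets the stack between attempts):

  delay=10
  for attempt in 1 2 3 4; do
    supabase start && break
    echo "attempt $attempt failed; retrying in ${delay}s"
    supabase stop --no-backup || true
    sleep "$delay"
    delay=$((delay * 2))
  done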

richard-edwards commented 1 year ago

I'm having the same issue as well. If I run with --ignore-health-check, all services start except for realtime-dev.supabase_realtime_test.

docker ps | grep realtime gives me:

9c123f3ab76 public.ecr.aws/supabase/realtime:v2.1.0 "/usr/bin/tini -s -g…" 7 minutes ago Restarting (2) 48 seconds ago

Running supabase 1.34.5

MEMORY INFORMATION
Total memory: 32051 MB
Total swap: 2047 MB

sweatybridge commented 1 year ago

Is this on a new project or an existing one? Could you try to clean up with supabase stop before starting again?

If that fails, could you help me get the logs from realtime container for further investigation?

supabase start --ignore-health-check && docker logs -f realtime-dev.supabase_realtime_test

richard-edwards commented 1 year ago

@sweatybridge

This is a brand new project; I was just following the docs on local development setup.

I am running sudo supabase start. I tried to add my user to the docker group so I could run it without sudo, but it didn't seem to matter even though I logged out and back in. When I check my user groups, it does show I'm in the docker group.
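
For reference, the usual steps for running Docker without sudo are (standard Docker setup, not specific to Supabase):

  sudo usermod -aG docker "$USER"
  newgrp docker   # or fully log out and back in
  docker ps       # should now work without sudo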

Full output

sudo supabase start --ignore-health-check && docker logs -f realtime-dev.supabase_realtime_test
[sudo] password for richard:
Error restoring main: branch was not dumped.
Seeding data supabase/seed.sql...
service not healthy: [realtime-dev.supabase_realtime_test]
Started supabase local development setup.

     API URL: http://localhost:54321
      DB URL: postgresql://postgres:postgres@localhost:54322/postgres
  Studio URL: http://localhost:54323
Inbucket URL: http://localhost:54324
  JWT secret: super-secret-jwt-token-with-at-least-32-characters-long
    anon key: xxxxxxxxxx
service_role key: xxxxxxxxxx
/app/limits.sh: 4: ulimit: error setting limit (Operation not permitted)
/app/limits.sh: 4: ulimit: error setting limit (Operation not permitted)
/app/limits.sh: 4: ulimit: error setting limit (Operation not permitted)
/app/limits.sh: 4: ulimit: error setting limit (Operation not permitted)
/app/limits.sh: 4: ulimit: error setting limit (Operation not permitted)
/app/limits.sh: 4: ulimit: error setting limit (Operation not permitted)
/app/limits.sh: 4: ulimit: error setting limit (Operation not permitted)
/app/limits.sh: 4: ulimit: error setting limit (Operation not permitted)


The /app/limits.sh output above was what was in the log.

docker ps:

324cfe4c679b public.ecr.aws/supabase/realtime:v2.1.0 "/usr/bin/tini -s -g…" 14 minutes ago Restarting (2) 20 seconds ago realtime-dev.supabase_realtime_test

sweatybridge commented 1 year ago

Thanks for the detailed description. It seems like Realtime doesn't support running Docker in rootless mode yet. I will ping someone to take a closer look.
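
To confirm whether the Docker daemon is actually running rootless, a quick check (generic Docker command):

  docker info --format '{{.SecurityOptions}}'   # includes name=rootless under rootless Docker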

github-actions[bot] commented 1 year ago

🎉 This issue has been resolved in version 1.35.0 🎉

The release is available on:

Your semantic-release bot 📦🚀

floitsch commented 1 year ago

I'm still seeing this issue on GitHub.

We are running three supabase instances on the builder, and sometimes the third one fails.

In the following screenshot we are using version 1.35.0; the third instance fails after 33 seconds with the following output:

Error: service not healthy: [realtime-dev.supabase_realtime_supabase_test]

[screenshot of the failing GitHub Actions run]

w3b6x9 commented 1 year ago

@floitsch give this version a try when it's deployed: https://github.com/supabase/cli/pull/889.

floitsch commented 1 year ago

I only did one run so far, but that one worked without any problem. I will update here if I encounter it again. In the meantime, this seems to be fixed. Thanks!

bautrukevich commented 1 year ago

Hello! I still have the same issue. I'm running with --ignore-health-check, but looking at the logs I see:

07:29:34.378 [warning] [libcluster:fly6pn] unable to connect to :"realtime@202.3.218.138"
07:29:46.401 [warning] [libcluster:fly6pn] unable to connect to :"realtime@202.3.218.138"
07:29:58.420 [warning] [libcluster:fly6pn] unable to connect to :"realtime@202.3.218.138"
07:30:10.439 [warning] [libcluster:fly6pn] unable to connect to :"realtime@202.3.218.138"
07:30:22.479 [warning] [libcluster:fly6pn] unable to connect to :"realtime@202.3.218.138"
07:30:34.542 [warning] [libcluster:fly6pn] unable to connect to :"realtime@202.3.218.138"
07:30:46.581 [warning] [libcluster:fly6pn] unable to connect to :"realtime@202.3.218.138"
07:30:58.607 [warning] [libcluster:fly6pn] unable to connect to :"realtime@202.3.218.138"
07:31:15.305 [warning] [libcluster:fly6pn] unable to connect to :"realtime@202.3.218.138"

xHergz commented 1 year ago

I'm experiencing this issue with the supabase_studio_ container. I've tried all the solutions other people have found across the similar issues (destroying and re-downloading images/containers, increasing memory resources, and ignoring health checks). None of this is working, as the container is stuck in a restart loop.

There isn't anything in the container logs to help:

2023-07-21 11:50:42 info  - Loaded env from /app/studio/.env
2023-07-21 11:50:42 Listening on port 3000
2023-07-21 11:51:43 info  - Loaded env from /app/studio/.env
2023-07-21 11:51:43 Listening on port 3000
2023-07-21 11:52:44 info  - Loaded env from /app/studio/.env
2023-07-21 11:52:44 Listening on port 3000
2023-07-21 12:02:41 info  - Loaded env from /app/studio/.env
2023-07-21 12:02:41 Listening on port 3000
2023-07-21 12:03:42 info  - Loaded env from /app/studio/.env
2023-07-21 12:03:42 Listening on port 3000
2023-07-21 12:04:43 info  - Loaded env from /app/studio/.env
2023-07-21 12:04:43 Listening on port 3000
2023-07-21 12:04:44 No storage option exists to persist the session, which may result in unexpected behavior when using auth.
2023-07-21 12:04:44         If you want to set persistSession to true, please provide a storage option or you may set persistSession to false to disable this warning.

(The warning is there on my other setup which is working)

This is what I get when I run supabase start (the container logs the same output as above as well):

service not healthy: [supabase_studio_supabase-test]
Try rerunning the command with --debug to troubleshoot the error.

Docker Engine: v24.0.2
Supabase CLI: v1.77.9
OS: macOS 13.4
Node: v18.16.1

I've tried on a brand new supabase project and an existing project with the same results.
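
To confirm the restart loop and capture the container's exit status, something like this can help (generic Docker commands; the container name comes from the error above):

  docker ps -a --filter name=supabase_studio --format '{{.Names}}: {{.Status}}'
  docker inspect --format '{{.RestartCount}} restarts; last exit code {{.State.ExitCode}}' supabase_studio_supabase-test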

Nnanyielugo commented 1 year ago

> I'm experiencing this issue with the supabase_studio_ container. I've tried all the solutions other people have found across the similar issues […] I've tried on a brand new supabase project and an existing project with the same results.

@xHergz, same issue as well. Various 'fixes' tried, and no luck with any of them.

malachif-jpg commented 1 year ago

> I'm experiencing this issue with the supabase_studio_ container. I've tried all the solutions other people have found across the similar issues […] I've tried on a brand new supabase project and an existing project with the same results.
>
> @xHergz, same issue as well. Various 'fixes' tried, and no luck with any of them.

@xHergz @Nnanyielugo Did you ever figure this out? Having the same issue.

bllchmbrs commented 1 year ago

Same issue here.

HendrikRunte commented 1 year ago

Same here.

malachif-jpg commented 1 year ago

@bllchmbrs @HendrikRunte

If you do not need to use Supabase Studio (I just needed to test/deploy edge functions, so this worked fine for me), you can run supabase start with the --exclude/-x flag to exclude it from startup and avoid this error. E.g., supabase start -x studio should get things running, though you won't have access to the local Studio GUI.
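
For example, a sketch of an edge-functions-only session (my-function is an illustrative name; depending on CLI version, supabase functions serve may also take a single function name):

  supabase start -x studio                # everything except the Studio dashboard
  supabase functions serve                # serve edge functions locally for testing
  supabase functions deploy my-function   # deploy a single function to the hosted project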

prodkt commented 1 year ago

Same issue; unable to run Supabase locally on multiple machine environments. It appears to be a widespread issue, at least for us.

toddsampson commented 1 year ago

I had the same issue and was already above the recommended 7GB for Docker. I did get it working by updating Studio. I saw in another post, related to running Supabase self-hosted (not local development), that there is an issue with supabase/studio:0.23.06. As such, if you fork https://github.com/supabase/cli and change line 23 in internal/utils/misc.go from StudioImage = "supabase/studio:v0.23.06" to StudioImage = "supabase/studio" (dropping the pinned tag), run go build -o supabase . to build the binary, and then copy it over the existing node_modules/supabase/bin/supabase in your project, you should be good to go. Be sure to run npx supabase stop --no-backup before you run npx supabase start again.

I didn't want to put in a PR for this since I just removed the version number and couldn't figure out the correct version to use that matched the latest Docker build. That said, feel free to grab the updated binary or source from my fork to build it yourself: https://github.com/toddsampson/cli
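
The steps from that comment, collected as commands (a sketch; the misc.go edit is done by hand, and the copy path assumes an npm-installed CLI in your-project):

  git clone https://github.com/supabase/cli
  cd cli
  # edit internal/utils/misc.go: drop the pinned tag from StudioImage
  go build -o supabase .
  cp supabase ../your-project/node_modules/supabase/bin/supabase
  cd ../your-project
  npx supabase stop --no-backup
  npx supabase start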

thomergil commented 1 year ago

https://github.com/supabase/cli/issues/1083#issuecomment-1691431279

huilensolis commented 2 months ago

Hi, same issue and no error logs. This happened in a fresh project.

kouwasi commented 2 months ago

@Huilensolis Have you tried this? supabase start --ignore-health-check

huilensolis commented 2 months ago

@kouwasi yes, and when I then ran supabase status, the database was the only service that was up; everything else broke.