supabase / cli

Supabase CLI. Manage Postgres migrations, run Supabase locally, deploy Edge Functions, back up Postgres, and generate types from your database schema.
https://supabase.com/docs/reference/cli/about
MIT License

Running Supabase locally: `realtime` schema tables not created by the CLI after `supabase db reset`, but created after a hard restart #1073

Closed ghost closed 1 year ago

ghost commented 1 year ago

[Edit: I wasn't sure whether this was better reported here or under the CLI project, and am happy to move it if this is an issue with the CLI rather than with the base Supabase instance itself.]

I was having issues with Realtime when running Supabase locally that did not exist in my clone running in Supabase's cloud. I made sure that React's strict mode was disabled, that replication was enabled, that I was using the latest versions of both the CLI and the JavaScript client, and so on.

I was able to narrow down the issue to this:

  1. Running supabase db reset clears the tables in the realtime schema.
  2. However, running supabase stop and then immediately supabase start re-generates the tables in the realtime schema.
  3. But even after running supabase stop and supabase start (which applies my seed.sql and migrations, both attached), if I subsequently run supabase db reset, those tables are deleted again.

It seems, from the CLI output, that supabase start itself runs supabase db reset, so it's strange to me that the tables are only deleted when I run supabase db reset manually (leading me to believe this is not an issue with my migrations or seed).

[Edit: the reason this is an issue for me is that subscriptions to Realtime are failing silently. They report in the browser as successfully subscribed, but when I change data in the public schema, nothing happens on the frontend. Once the tables in the realtime schema do exist, such as after hard-restarting Supabase, everything works perfectly. Attempting to subscribe from the frontend/browser does not create the tables in the realtime schema.]

I'll put a GIF below of me re-creating this error, as well as all of my migration and seed files. As seen in the video, I take the exact steps documented above: restarting local Supabase applies all of my migrations and seed.sql without disrupting the realtime schema, but once I manually run supabase db reset, the tables in the realtime schema disappear.

[GIF: screen recording from 2023-05-03 reproducing the steps above]

migrations.zip seed.sql.zip

ghost commented 1 year ago

@gregnr do you think this could be related to the function you helped me write to automatically enable Realtime on all tables in the public schema here? I didn't think so, since the function runs perfectly without disrupting anything on supabase start, but I thought I'd ask since you helped me out before.

DanielSidhion commented 1 year ago

I'm also affected by this. To give some more details:

After creating a subscription with supabase-js, the callback to RealtimeChannel.subscribe() receives a status with value SUBSCRIBED. However, inspecting the WebSocket that gets created, I see multiple error messages from the realtime server like the following:

Subscribing to PostgreSQL failed: {:error, "Subscription insert failed with error: ERROR 42P01 (undefined_table) relation \"realtime.subscription\" does not exist\n\n    query: with sub_tables as (\n\t\t    select\n\t\t\t    rr.entity\n\t\t    from\n\t\t\t    pg_publication_tables pub,\n\t\t\t    lateral (\n\t\t\t\t    select\n\t\t\t\t\t    format('%I.%I', pub.schemaname, pub.tablename)::regclass entity\n\t\t\t    ) rr\n\t\t    where\n\t\t\t    pub.pubname = $1\n\t\t\t  and pub.schemaname like (case $2 when '*' then '%' else $2 end)\n\t\t\t  and pub.tablename like (case $3 when '*' then '%' else $3 end)\n\t    )\n\t    insert into realtime.subscription as x(\n\t\t    subscription_id,\n\t\t    entity,\n        filters,\n        claims\n      )\n      select\n        $4::text::uuid,\n        sub_tables.entity,\n        $6,\n        $5\n      from\n        sub_tables\n        on conflict\n        (subscription_id, entity, filters)\n        do update set\n        claims = excluded.claims,\n        created_at = now()\n      returning\n         id. Check that tables are part of publication supabase_realtime and subscription params are correct: %{\"event\" => \"INSERT\", \"schema\" => \"public\", \"table\" => \"github_repositories\"}"}
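The key detail is buried in the escaped error string: SQLSTATE 42P01 (undefined_table) on realtime.subscription, i.e. the realtime service's own bookkeeping table is gone. A minimal, illustrative sketch (not part of supabase or supabase-js; the helper name is made up) for pulling those fields out of such a message:

```python
import re

# Hypothetical triage helper: extract the SQLSTATE code, the condition name,
# and the missing relation from a realtime "Subscribing to PostgreSQL failed"
# message like the one quoted above.
def diagnose_realtime_error(message: str) -> dict:
    code = re.search(r"ERROR (\w{5}) \((\w+)\)", message)
    # Accept both plain and backslash-escaped quotes around the relation name.
    relation = re.search(r'relation \\?"([\w.]+)\\?" does not exist', message)
    return {
        "sqlstate": code.group(1) if code else None,
        "condition": code.group(2) if code else None,
        "missing_relation": relation.group(1) if relation else None,
    }

sample = ('Subscription insert failed with error: ERROR 42P01 '
          '(undefined_table) relation "realtime.subscription" does not exist')
result = diagnose_realtime_error(sample)
print(result["sqlstate"], result["missing_relation"])
# 42P01 realtime.subscription
```

42P01 is Postgres's standard "undefined_table" error code, which is why the subscription insert fails even though the client-side callback still reports SUBSCRIBED.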

Running supabase stop and then supabase start creates the tables for realtime to work, and then I start getting updates normally.

ghost commented 1 year ago

I can confirm the same behavior that @DanielSidhion is seeing, including that I am also getting the SUBSCRIBED status from these subscriptions that are actually failing.

ghost commented 1 year ago

Upon running supabase db reset, I can see the following errors thrown in the Docker container realtime-dev.supabase_realtime_frontend:

2023-05-07 21:32:26 01:32:26.833 [error] Postgrex.Protocol (#PID<0.2802.0>) disconnected: ** (Postgrex.Error) FATAL 57P01 (admin_shutdown) terminating connection due to administrator command
2023-05-07 21:32:26 01:32:26.834 [error] Postgrex.Protocol (#PID<0.2803.0>) disconnected: ** (Postgrex.Error) FATAL 57P01 (admin_shutdown) terminating connection due to administrator command
2023-05-07 21:32:26 01:32:26.834 [error] Postgrex.Protocol (#PID<0.2798.0>) disconnected: ** (Postgrex.Error) FATAL 57P01 (admin_shutdown) terminating connection due to administrator command
2023-05-07 21:32:26 01:32:26.834 [error] Postgrex.Protocol (#PID<0.2801.0>) disconnected: ** (Postgrex.Error) FATAL 57P01 (admin_shutdown) terminating connection due to administrator command
2023-05-07 21:32:26 01:32:26.834 [error] Postgrex.Protocol (#PID<0.2797.0>) disconnected: ** (Postgrex.Error) FATAL 57P01 (admin_shutdown) terminating connection due to administrator command
2023-05-07 21:32:36 01:32:36.845 [error] Postgrex.Protocol (#PID<0.2801.0>) disconnected: ** (DBConnection.ConnectionError) tcp recv (idle): closed
2023-05-07 21:32:36 01:32:36.845 [error] Postgrex.Protocol (#PID<0.2802.0>) disconnected: ** (DBConnection.ConnectionError) tcp recv (idle): closed
2023-05-07 21:32:36 01:32:36.846 [error] Postgrex.Protocol (#PID<0.2798.0>) disconnected: ** (DBConnection.ConnectionError) tcp recv (idle): closed
2023-05-07 21:32:36 01:32:36.846 [error] Postgrex.Protocol (#PID<0.2797.0>) disconnected: ** (DBConnection.ConnectionError) tcp recv (idle): closed
2023-05-07 21:32:36 01:32:36.846 [error] Postgrex.Protocol (#PID<0.2803.0>) disconnected: ** (DBConnection.ConnectionError) tcp recv (idle): closed
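The FATAL 57P01 (admin_shutdown) lines are the server side of the story: recreating the database during supabase db reset terminates every pooled connection the realtime service holds, and nothing appears to tell realtime to re-run its own migrations afterwards. A small illustrative sketch (not supabase code) for summarizing such log dumps by disconnect reason:

```python
import re
from collections import Counter

# Hypothetical log-triage helper: group Postgrex disconnect lines by the
# Elixir exception that caused them, e.g. Postgrex.Error (server-side FATAL)
# vs DBConnection.ConnectionError (socket closed while idle).
def disconnect_reasons(lines):
    counts = Counter()
    for line in lines:
        match = re.search(r"\*\* \(([\w.]+)\)", line)
        if match:
            counts[match.group(1)] += 1
    return counts

sample = [
    "01:32:26.833 [error] Postgrex.Protocol (#PID<0.2802.0>) disconnected: "
    "** (Postgrex.Error) FATAL 57P01 (admin_shutdown) terminating connection "
    "due to administrator command",
    "01:32:36.845 [error] Postgrex.Protocol (#PID<0.2802.0>) disconnected: "
    "** (DBConnection.ConnectionError) tcp recv (idle): closed",
]
print(disconnect_reasons(sample))
```

In the dump above, all five pooled connections fail the same way: first killed by an administrator command (57P01, consistent with the reset terminating backends), then failing to reconnect.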

ghost commented 1 year ago

More log dumps:

Here are the realtime-dev.supabase_realtime_frontend logs right after calling supabase start (which seems to run as expected):

2023-05-07 21:38:11 01:38:11.109 [info] == Running 20210706140551 Realtime.Repo.Migrations.CreateTenants.change/0 forward
2023-05-07 21:38:11 01:38:11.111 [info] create table tenants
2023-05-07 21:38:11 01:38:11.118 [info] create index tenants_external_id_index
2023-05-07 21:38:11 01:38:11.124 [info] == Migrated 20210706140551 in 0.0s
2023-05-07 21:38:11 01:38:11.154 [info] == Running 20220329161857 Realtime.Repo.Migrations.AddExtensionsTable.change/0 forward
2023-05-07 21:38:11 01:38:11.155 [info] create table extensions
2023-05-07 21:38:11 01:38:11.161 [info] create index extensions_tenant_external_id_type_index
2023-05-07 21:38:11 01:38:11.164 [info] == Migrated 20220329161857 in 0.0s
2023-05-07 21:38:11 01:38:11.169 [info] == Running 20220410212326 Realtime.Repo.Migrations.AddTenantMaxEps.up/0 forward
2023-05-07 21:38:11 01:38:11.169 [info] alter table tenants
2023-05-07 21:38:11 01:38:11.184 [info] trigger "secrets_encrypt_secret_trigger_secret" for relation "vault.secrets" does not exist, skipping
2023-05-07 21:38:11 01:38:11.184 [info] drop cascades to trigger secrets_encrypt_secret_trigger_secret on table vault.secrets
2023-05-07 21:38:11 01:38:11.184 [info] == Migrated 20220410212326 in 0.0s
2023-05-07 21:38:11 01:38:11.187 [info] == Running 20220506102948 Realtime.Repo.Migrations.RenamePollIntervalToPollIntervalMs.up/0 forward
2023-05-07 21:38:11 01:38:11.204 [debug] QUERY OK source="extensions" db=0.2ms
2023-05-07 21:38:11 SELECT e0."id", e0."type", e0."settings", e0."tenant_external_id", e0."inserted_at", e0."updated_at" FROM "extensions" AS e0 WHERE (e0."type" = $1) ["postgres_cdc_rls"]
2023-05-07 21:38:11 01:38:11.204 [info] == Migrated 20220506102948 in 0.0s
2023-05-07 21:38:11 01:38:11.212 [info] == Running 20220527210857 Realtime.Repo.Migrations.AddExternalIdUniqIndex.change/0 forward
2023-05-07 21:38:11 01:38:11.212 [info] execute "alter table tenants add constraint uniq_external_id unique (external_id)"
2023-05-07 21:38:11 01:38:11.221 [info] trigger "secrets_encrypt_secret_trigger_secret" for relation "vault.secrets" does not exist, skipping
2023-05-07 21:38:11 01:38:11.221 [info] drop cascades to trigger secrets_encrypt_secret_trigger_secret on table vault.secrets
2023-05-07 21:38:11 01:38:11.221 [info] == Migrated 20220527210857 in 0.0s
2023-05-07 21:38:11 01:38:11.224 [info] == Running 20220815211129 Realtime.Repo.Migrations.NewMaxEventsPerSecondDefault.change/0 forward
2023-05-07 21:38:11 01:38:11.224 [info] alter table tenants
2023-05-07 21:38:11 01:38:11.233 [info] trigger "secrets_encrypt_secret_trigger_secret" for relation "vault.secrets" does not exist, skipping
2023-05-07 21:38:11 01:38:11.233 [info] drop cascades to trigger secrets_encrypt_secret_trigger_secret on table vault.secrets
2023-05-07 21:38:11 01:38:11.233 [info] == Migrated 20220815211129 in 0.0s
2023-05-07 21:38:11 01:38:11.245 [info] == Running 20220815215024 Realtime.Repo.Migrations.SetCurrentMaxEventsPerSecond.change/0 forward
2023-05-07 21:38:11 01:38:11.245 [info] execute "update tenants set max_events_per_second = 1000"
2023-05-07 21:38:11 01:38:11.247 [info] == Migrated 20220815215024 in 0.0s
2023-05-07 21:38:11 01:38:11.249 [info] == Running 20220818141501 Realtime.Repo.Migrations.ChangeLimitsDefaults.change/0 forward
2023-05-07 21:38:11 01:38:11.249 [info] alter table tenants
2023-05-07 21:38:11 01:38:11.257 [info] trigger "secrets_encrypt_secret_trigger_secret" for relation "vault.secrets" does not exist, skipping
2023-05-07 21:38:11 01:38:11.257 [info] drop cascades to trigger secrets_encrypt_secret_trigger_secret on table vault.secrets
2023-05-07 21:38:11 01:38:11.257 [info] == Migrated 20220818141501 in 0.0s
2023-05-07 21:38:11 01:38:11.262 [info] == Running 20221018173709 Realtime.Repo.Migrations.AddCdcDefault.up/0 forward
2023-05-07 21:38:11 01:38:11.263 [info] alter table tenants
2023-05-07 21:38:11 01:38:11.270 [info] trigger "secrets_encrypt_secret_trigger_secret" for relation "vault.secrets" does not exist, skipping
2023-05-07 21:38:11 01:38:11.270 [info] drop cascades to trigger secrets_encrypt_secret_trigger_secret on table vault.secrets
2023-05-07 21:38:11 01:38:11.270 [info] == Migrated 20221018173709 in 0.0s
2023-05-07 21:38:11 01:38:11.273 [info] == Running 20221102172703 Realtime.Repo.Migrations.RenamePgType.up/0 forward
2023-05-07 21:38:11 01:38:11.273 [info] execute "update extensions set type = 'postgres_cdc_rls'"
2023-05-07 21:38:11 01:38:11.274 [info] == Migrated 20221102172703 in 0.0s
2023-05-07 21:38:11 01:38:11.283 [info] == Running 20221223010058 Realtime.Repo.Migrations.DropTenantsUniqExternalIdIndex.change/0 forward
2023-05-07 21:38:11 01:38:11.283 [info] execute "ALTER TABLE IF EXISTS tenants DROP CONSTRAINT IF EXISTS uniq_external_id"
2023-05-07 21:38:11 01:38:11.289 [info] trigger "secrets_encrypt_secret_trigger_secret" for relation "vault.secrets" does not exist, skipping
2023-05-07 21:38:11 01:38:11.289 [info] drop cascades to trigger secrets_encrypt_secret_trigger_secret on table vault.secrets
2023-05-07 21:38:11 01:38:11.289 [info] == Migrated 20221223010058 in 0.0s
2023-05-07 21:38:11 01:38:11.295 [info] == Running 20230110180046 Realtime.Repo.Migrations.AddLimitsFieldsToTenants.change/0 forward
2023-05-07 21:38:11 01:38:11.295 [info] alter table tenants
2023-05-07 21:38:11 01:38:11.301 [info] trigger "secrets_encrypt_secret_trigger_secret" for relation "vault.secrets" does not exist, skipping
2023-05-07 21:38:11 01:38:11.301 [info] drop cascades to trigger secrets_encrypt_secret_trigger_secret on table vault.secrets
2023-05-07 21:38:11 01:38:11.301 [info] == Migrated 20230110180046 in 0.0s
2023-05-07 21:38:12 01:38:12.021 [debug] QUERY OK db=1.5ms queue=119.3ms idle=0.0ms
2023-05-07 21:38:12 begin []
2023-05-07 21:38:12 01:38:12.066 [debug] QUERY OK source="tenants" db=1.4ms
2023-05-07 21:38:12 SELECT t0."id", t0."name", t0."external_id", t0."jwt_secret", t0."postgres_cdc_default", t0."max_concurrent_users", t0."max_events_per_second", t0."max_bytes_per_second", t0."max_channels_per_client", t0."max_joins_per_second", t0."inserted_at", t0."updated_at" FROM "tenants" AS t0 WHERE (t0."external_id" = $1) ["realtime-dev"]
2023-05-07 21:38:12 01:38:12.135 [debug] QUERY OK db=2.0ms
2023-05-07 21:38:12 INSERT INTO "tenants" ("external_id","jwt_secret","max_bytes_per_second","max_channels_per_client","max_concurrent_users","max_events_per_second","max_joins_per_second","name","inserted_at","updated_at","id") VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11) ["realtime-dev", "iNjicxc4+llvc9wovDvqymwfnj9teWMlyOIbJ8Fh6j2WNU8CIJ2ZgjR6MUIKqSmeDmvpsKLsZ9jgXJmQPpwL8w==", 100000, 100, 200, 100, 500, "realtime-dev", ~N[2023-05-08 01:38:12], ~N[2023-05-08 01:38:12], <<199, 83, 124, 121, 92, 208, 73, 101, 135, 171, 171, 81, 92, 118, 41, 91>>]
2023-05-07 21:38:12 01:38:12.151 [debug] QUERY OK db=8.8ms
2023-05-07 21:38:12 INSERT INTO "extensions" ("settings","tenant_external_id","type","inserted_at","updated_at","id") VALUES ($1,$2,$3,$4,$5,$6) [%{"db_host" => "fzgJuVLRPRRf232hDLqGnzPJYQsx9JoW0zfW41rVfj0=", "db_name" => "sWBpZNdjggEPTQVlI52Zfw==", "db_password" => "sWBpZNdjggEPTQVlI52Zfw==", "db_port" => "+enMDFi1J/3IrrquHHwUmA==", "db_user" => "uxbEq/zz8DXVD53TOI1zmw==", "ip_version" => 4, "poll_interval_ms" => 100, "poll_max_changes" => 100, "poll_max_record_bytes" => 1048576, "publication" => "supabase_realtime", "region" => "us-east-1", "slot_name" => "supabase_realtime_replication_slot"}, "realtime-dev", "postgres_cdc_rls", ~N[2023-05-08 01:38:12], ~N[2023-05-08 01:38:12], <<99, 6, 45, 0, 140, 60, 64, 107, 168, 157, 10, 69, 115, 84, 13, 142>>]
2023-05-07 21:38:12 01:38:12.161 [debug] QUERY OK db=9.3ms
2023-05-07 21:38:12 commit []
2023-05-07 21:38:13 01:38:13.760 [notice]     :alarm_handler: {:set, {:system_memory_high_watermark, []}}
2023-05-07 21:38:13 01:38:13.809 [info] Elixir.Realtime.SignalHandler is being initialized...
2023-05-07 21:38:13 01:38:13.810 [notice] SYN[realtime@127.0.0.1] Adding node to scope <users>
2023-05-07 21:38:13 01:38:13.810 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <users>
2023-05-07 21:38:13 01:38:13.812 [notice] SYN[realtime@127.0.0.1|registry<users>] Discovering the cluster
2023-05-07 21:38:13 01:38:13.813 [notice] SYN[realtime@127.0.0.1|pg<users>] Discovering the cluster
2023-05-07 21:38:13 01:38:13.813 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.RegionNodes>
2023-05-07 21:38:13 01:38:13.813 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.RegionNodes>
2023-05-07 21:38:13 01:38:13.813 [notice] SYN[realtime@127.0.0.1|registry<Elixir.RegionNodes>] Discovering the cluster
2023-05-07 21:38:13 01:38:13.813 [notice] SYN[realtime@127.0.0.1|pg<Elixir.RegionNodes>] Discovering the cluster
2023-05-07 21:38:13 01:38:13.828 [info] Running RealtimeWeb.Endpoint with cowboy 2.9.0 at :::4000 (http)
2023-05-07 21:38:13 01:38:13.829 [info] Access RealtimeWeb.Endpoint at http://realtime.fly.dev
2023-05-07 21:38:13 01:38:13.829 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.PostgresCdcStream>
2023-05-07 21:38:13 01:38:13.830 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.PostgresCdcStream>
2023-05-07 21:38:13 01:38:13.830 [notice] SYN[realtime@127.0.0.1|registry<Elixir.PostgresCdcStream>] Discovering the cluster
2023-05-07 21:38:13 01:38:13.830 [notice] SYN[realtime@127.0.0.1|pg<Elixir.PostgresCdcStream>] Discovering the cluster
2023-05-07 21:38:13 01:38:13.831 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.Extensions.PostgresCdcRls>
2023-05-07 21:38:13 01:38:13.831 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.Extensions.PostgresCdcRls>
2023-05-07 21:38:13 01:38:13.831 [notice] SYN[realtime@127.0.0.1|registry<Elixir.Extensions.PostgresCdcRls>] Discovering the cluster
2023-05-07 21:38:13 01:38:13.831 [notice] SYN[realtime@127.0.0.1|pg<Elixir.Extensions.PostgresCdcRls>] Discovering the cluster
2023-05-07 21:38:16 01:38:16.814 [debug] Tzdata polling for update.
2023-05-07 21:38:22 01:38:22.047 [info] tzdata release in place is from a file last modified Fri, 22 Oct 2021 02:20:47 GMT. Release file on server was last modified Tue, 28 Mar 2023 20:25:39 GMT.
2023-05-07 21:38:22 01:38:22.047 [debug] Tzdata downloading new data from https://data.iana.org/time-zones/tzdata-latest.tar.gz
2023-05-07 21:38:22 01:38:22.174 [debug] Tzdata data downloaded. Release version 2023c.
2023-05-07 21:38:22 01:38:22.544 [info] Tzdata has updated the release from 2021e to 2023c
2023-05-07 21:38:22 01:38:22.544 [debug] Tzdata deleting ETS table for version 2021e
2023-05-07 21:38:22 01:38:22.545 [debug] Tzdata deleting ETS table file for version 2021e

ghost commented 1 year ago

@DanielSidhion I did notice that manually restarting realtime-dev.supabase_realtime_frontend from Docker Desktop (which is much faster than running supabase stop && supabase start) seems to clear the issue after running supabase db reset. However, my success with this has been intermittent and I'm not sure it works 100% of the time; I thought I'd mention it in case it helps you in the meantime while Supabase investigates this bug.

gregnr commented 1 year ago

@nbarrow-inspire-labs @DanielSidhion sorry to hear you're experiencing this. After a bit of digging:

I'm going to transfer this issue to the CLI repo to debug further.

sweatybridge commented 1 year ago

Yup, I will also add realtime to the list of services to be restarted after reset. Thanks for reporting this issue.

ghost commented 1 year ago

@gregnr @sweatybridge appreciate the quick turnaround; thank you very much!