coollabsio / coolify

An open-source & self-hostable Heroku / Netlify / Vercel alternative.
https://coolify.io
Apache License 2.0

[Bug]: Supabase containers keep restarting due to authentication-related error #2696

Open JohnGeek-dev opened 1 week ago

JohnGeek-dev commented 1 week ago

Description

When attempting to deploy Supabase using Coolify v4.0.0-beta.306, the process fails, and the logs of the containers indicate an authentication-related error.

Note that it works fine on v4.0.0-beta.297, except that:

  1. The Minio Createbucket container fails to run and exits.
  2. Supabase Rest and Realtime Dev show as running (unhealthy).

Minimal Reproduction (if possible, example repository)

  1. Upgrade to Coolify v4.0.0-beta.306.
  2. Attempt to deploy Supabase.
  3. Observe the failure in the deployment process. Several containers would keep restarting.
  4. Check logs of the failed containers.

Exception or Error

No response

Version

v4.0.0-beta.306

MedLeon commented 1 week ago

I have the same error, and a user on Discord (Moritz) also seems to have it.

olsoda commented 1 week ago

Same here. It’s been one thing or another with Supabase on the last few beta releases.

Mortalife commented 1 week ago

There was only one small change to the template since 297, which I wouldn't have expected to cause the issue: https://github.com/coollabsio/coolify/compare/v4.0.0-beta.297...v4.0.0-beta.306 (search for "supabase").

So I can only presume there's some issue with parsing/injecting env variables?

MauruschatM commented 1 week ago

Same for me: Supabase doesn't work due to the supabase-db service. The supabase_admin role won't be created, I think.

Mortalife commented 1 week ago

For me the supabase-db boots, but supabase-analytics doesn't, and most of the containers depend on supabase-analytics. The logs say the password for supabase_admin is incorrect, which causes supabase-analytics to crash because the migrations can't run. That was my experience yesterday evening, at least.

Mortalife commented 1 week ago

If I were a betting man, I'd say it was this commit: https://github.com/coollabsio/coolify/commit/1266810c4d8edfd2522ba8a7ab703f522c0e34cd

MauruschatM commented 1 week ago

If I were a betting man, I'd say it was this commit: 1266810

No, it already didn't work on Monday.

MauruschatM commented 1 week ago

For me the supabase-db boots, but supabase-analytics doesn't, and most of the containers depend on supabase-analytics. The logs say the password for supabase_admin is incorrect, which causes supabase-analytics to crash because the migrations can't run. That was my experience yesterday evening, at least.

Yes, because the supabase_admin user won't be created. You can see this in the supabase-db logs.

Mortalife commented 1 week ago

If I were a betting man, I'd say it was this commit: 1266810

No, it already didn't work on Monday.

Fair. I read through it in more detail, and if I were a betting man, I'd have lost money! Haha. I double-checked the envs passed to the containers and they're correct, so my hypothesis was wrong.

Skeyelab commented 1 week ago

I am also experiencing this.

Mortalife commented 1 week ago

I've figured out the issue and can replicate and mitigate it.

Coolify is overriding the container's POSTGRES_HOST with the POSTGRES_HOST environment variable from the stack, even though the template sets a hard-coded value for it.

You can resolve the issue by renaming POSTGRES_HOST to some other name like POSTGRES_HOSTNAME: change all instances of the POSTGRES_HOST parameter inside the docker-compose, then delete POSTGRES_HOST after saving.
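In shell form, the compose-side rename might look like the sketch below. The sample file and path are illustrative only (in Coolify you would paste the result into the compose editor); note that the literal socket-path assignment on supabase-db is deliberately left untouched, and only variable references are renamed.

```shell
# Build a minimal sample compose fragment (illustrative; not the real template).
cat > /tmp/sample-compose.yml <<'EOF'
services:
  supabase-db:
    environment:
      - POSTGRES_HOST=/var/run/postgresql
  supabase-analytics:
    environment:
      - 'DB_HOSTNAME=${POSTGRES_HOST:-supabase-db}'
EOF
# Rename only the ${POSTGRES_HOST...} variable *references*; the literal
# POSTGRES_HOST=/var/run/postgresql line on supabase-db must stay as-is.
sed -i 's/${POSTGRES_HOST\([:}]\)/${POSTGRES_HOSTNAME\1/g' /tmp/sample-compose.yml
cat /tmp/sample-compose.yml
```

After the rename, deleting POSTGRES_HOST from the stack's environment variables lets the `:-` defaults take over.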

Issue: Postgres runs the init scripts before the network connection is ready, connecting directly to the Unix socket; that's why the POSTGRES_HOST env variable on supabase-db is set to a path (POSTGRES_HOST=/var/run/postgresql).

When the env is incorrectly overridden, the value becomes supabase-db, which resolves via the Docker network; that network isn't initialised yet, and the connection also can't use the trusted local socket access.
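A side note on why deleting the stack-level variable works: Docker Compose's `${VAR:-default}` interpolation behaves like POSIX shell parameter expansion, so once the injected POSTGRES_HOST is gone, the hard-coded defaults take over. A minimal shell illustration (the values are just examples):

```shell
# ${VAR:-default}: if VAR is unset or empty, the default after ":-" is used;
# otherwise the environment value wins.
unset POSTGRES_HOST
echo "${POSTGRES_HOST:-supabase-db}"      # unset -> the default is used

POSTGRES_HOST=/var/run/postgresql
echo "${POSTGRES_HOST:-supabase-db}"      # set -> the env value wins
```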

I might still be on to win my bet. 😂

Refs:

https://github.com/docker-library/postgres/issues/941
https://raw.githubusercontent.com/docker-library/postgres/master/15/bullseye/docker-entrypoint.sh
https://github.com/supabase/postgres/blob/develop/migrations/db/migrate.sh

MMTE commented 1 week ago

I've figured out the issue and can replicate and mitigate it.

@Mortalife, it seems you are right. I was guessing that maybe other services were loading sooner than the DB, but I believe it's an environment conflict, as you mentioned. I hope Supabase makes its deployment process more robust in the future; it's a little tricky right now.

MauruschatM commented 1 week ago

I've figured out the issue and can replicate and mitigate it.

@Mortalife, it seems you are right. I was guessing that maybe other services were loading sooner than the DB, but I believe it's an environment conflict, as you mentioned. I hope Supabase makes its deployment process more robust in the future; it's a little tricky right now.

How would they sell their cloud services if self-hosting were that easy? It's just marketing, and it has to be possible somehow. But they don't want the masses to self-host Supabase...

yyassif commented 1 week ago

I am having the same issue too, even after removing the analytics service: Error: FATAL: 28P01: password authentication failed for user "supabase_admin"

Torwent commented 1 week ago

@Mortalife solution works! Thank you!

agalev commented 1 week ago

@Mortalife This worked for me as well, thank you for the fix!

MedLeon commented 1 week ago

The fix works for me as well, but "Minio Createbucket" does not start. Did that work for you with this fix, @Torwent & @agalev?

Mortalife commented 1 week ago

The fix works for me as well, but "Minio Createbucket" does not start. Did that work for you with this fix, @Torwent & @agalev?

It didn't start before this issue.

To clarify, it shouldn't be running. It runs once to ensure the MinIO server has the default bucket that's used by the storage server.

https://github.com/coollabsio/coolify/blob/main/templates/compose/supabase.yaml#L1067-L1071

It creates the stub bucket and then exits. Its restart policy is set to not restart.

The stub bucket is used by the storage server here: https://github.com/coollabsio/coolify/blob/main/templates/compose/supabase.yaml#L1104

olsoda commented 1 week ago

That workaround seemed to work for me; however, supabase-rest is still unhealthy, and the API Docs page of the dashboard says the public schema isn't accessible...

[Screenshot: dashboard API Docs error, 2024-06-30]

Mortalife commented 1 week ago

That workaround seemed to work for me; however, supabase-rest is still unhealthy, and the API Docs page of the dashboard says the public schema isn't accessible...

I don't experience that problem. I would double-check that you've replaced all of the POSTGRES_HOST variable instances and that there aren't any extra spaces etc. where there shouldn't be. If it still persists, it might be worth removing the supabase-db volume and restarting.

MMTE commented 1 week ago

@Mortalife, do you mind making that a pull request? I mean, is there any other configuration that must be considered, or was this hard-coded POSTGRES_HOST in PostgreSQL the only problem? If so, maybe we can make a PR and mark this issue as fixed?

Mortalife commented 1 week ago

I'd rather the env variables were parsed correctly than put up a PR for this workaround. PRs don't seem to get approved with much velocity, so it wouldn't change things immediately regardless.

MMTE commented 1 week ago

I understand. Personally, I had a lot of difficulty deploying Supabase instances as separate projects; Coolify at least made it easy. On the other hand, Supabase is also under active development, so we may have a lot of breaking changes ahead.

Torwent commented 1 week ago

The fix works for me as well, but "Minio Createbucket" does not start. Did that work for you with this fix, @Torwent & @agalev?

I'm pretty sure that's not meant to be running. It runs once, the very first time you start things up, to create the MinIO bucket, and never runs again AFAIK.

deozza commented 1 week ago

Hello @Mortalife and sorry to bother you.

I just ran into this issue and found out about your solution.

Could you please clarify what needs to be changed? I don't understand which values are causing the issue, or where.

As I understand it, in the .env file I need to add a new parameter called POSTGRES_HOSTNAME with supabase-db as its value, and replace all instances of POSTGRES_HOST in the docker-compose.yml file with POSTGRES_HOSTNAME? Am I right, or have I missed the point?

Mortalife commented 1 week ago

Hello @Mortalife and sorry to bother you.

I just ran into this issue and found out about your solution.

Could you please clarify what needs to be changed? I don't understand which values are causing the issue, or where.

As I understand it, in the .env file I need to add a new parameter called POSTGRES_HOSTNAME with supabase-db as its value, and replace all instances of POSTGRES_HOST in the docker-compose.yml file with POSTGRES_HOSTNAME? Am I right, or have I missed the point?

Correct, and then once you've done that, remove POSTGRES_HOST from the .env then restart.

deozza commented 1 week ago

Sorry again, this is surely an error between the chair and the keyboard, but my analytics service is still failing to start, due to that password authentication failed for user "supabase_admin" error from the supabase-analytics service.

Here is my docker-compose.yml file:

services:
  supabase-kong:
    image: 'kong:2.8.1'
    entrypoint: 'bash -c ''eval "echo \"$$(cat ~/temp.yml)\"" > ~/kong.yml && /docker-entrypoint.sh kong docker-start'''
    depends_on:
      supabase-analytics:
        condition: service_healthy
    environment:
      - SERVICE_FQDN_SUPABASEKONG
      - 'JWT_SECRET=${SERVICE_PASSWORD_JWT}'
      - KONG_DATABASE=off
      - KONG_DECLARATIVE_CONFIG=/home/kong/kong.yml
      - 'KONG_DNS_ORDER=LAST,A,CNAME'
      - 'KONG_PLUGINS=request-transformer,cors,key-auth,acl,basic-auth'
      - KONG_NGINX_PROXY_PROXY_BUFFER_SIZE=160k
      - 'KONG_NGINX_PROXY_PROXY_BUFFERS=64 160k'
      - 'SUPABASE_ANON_KEY=${SERVICE_SUPABASEANON_KEY}'
      - 'SUPABASE_SERVICE_KEY=${SERVICE_SUPABASESERVICE_KEY}'
      - 'DASHBOARD_USERNAME=${SERVICE_USER_ADMIN}'
      - 'DASHBOARD_PASSWORD=${SERVICE_PASSWORD_ADMIN}'
    volumes:
      -
        type: bind
        source: ./volumes/api/kong.yml
        target: /home/kong/temp.yml
  supabase-studio:
    image: 'supabase/studio:20240514-6f5cabd'
    healthcheck:
      test:
        - CMD
        - node
        - '-e'
        - "require('http').get('http://127.0.0.1:3000/api/profile', (r) => {if (r.statusCode !== 200) process.exit(1); else process.exit(0); }).on('error', () => process.exit(1))"
      timeout: 5s
      interval: 5s
      retries: 3
    depends_on:
      supabase-analytics:
        condition: service_healthy
    environment:
      - HOSTNAME=0.0.0.0
      - 'STUDIO_PG_META_URL=http://supabase-meta:8080'
      - 'POSTGRES_PASSWORD=${SERVICE_PASSWORD_POSTGRES}'
      - 'DEFAULT_ORGANIZATION_NAME=${STUDIO_DEFAULT_ORGANIZATION:-Default Organization}'
      - 'DEFAULT_PROJECT_NAME=${STUDIO_DEFAULT_PROJECT:-Default Project}'
      - 'SUPABASE_URL=http://supabase-kong:8000'
      - 'SUPABASE_PUBLIC_URL=${SERVICE_FQDN_SUPABASEKONG}'
      - 'SUPABASE_ANON_KEY=${SERVICE_SUPABASEANON_KEY}'
      - 'SUPABASE_SERVICE_KEY=${SERVICE_SUPABASESERVICE_KEY}'
      - 'AUTH_JWT_SECRET=${SERVICE_PASSWORD_JWT}'
      - 'LOGFLARE_API_KEY=${SERVICE_PASSWORD_LOGFLARE}'
      - 'LOGFLARE_URL=http://supabase-analytics:4000'
      - NEXT_PUBLIC_ENABLE_LOGS=true
      - NEXT_ANALYTICS_BACKEND_PROVIDER=postgres
  supabase-db:
    image: 'supabase/postgres:15.1.1.41'
    healthcheck:
      test: 'pg_isready -U postgres -h 127.0.0.1'
      interval: 5s
      timeout: 5s
      retries: 10
    depends_on:
      supabase-vector:
        condition: service_healthy
    command:
      - postgres
      - '-c'
      - config_file=/etc/postgresql/postgresql.conf
      - '-c'
      - log_min_messages=fatal
    restart: unless-stopped
    environment:
      - POSTGRES_HOST=/var/run/postgresql
      - 'PGPORT=${POSTGRES_PORT:-5432}'
      - 'POSTGRES_PORT=${POSTGRES_PORT:-5432}'
      - 'PGPASSWORD=${SERVICE_PASSWORD_POSTGRES}'
      - 'POSTGRES_PASSWORD=${SERVICE_PASSWORD_POSTGRES}'
      - 'PGDATABASE=${POSTGRES_DB:-postgres}'
      - 'POSTGRES_DB=${POSTGRES_DB:-postgres}'
      - 'JWT_SECRET=${SERVICE_PASSWORD_JWT}'
      - 'JWT_EXP=${JWT_EXPIRY:-3600}'
    volumes:
      - 'supabase-db-data:/var/lib/postgresql/data'
      -
        type: bind
        source: ./volumes/db/realtime.sql
        target: /docker-entrypoint-initdb.d/migrations/99-realtime.sql
      -
        type: bind
        source: ./volumes/db/webhooks.sql
        target: /docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql
      -
        type: bind
        source: ./volumes/db/roles.sql
        target: /docker-entrypoint-initdb.d/init-scripts/99-roles.sql
      -
        type: bind
        source: ./volumes/db/jwt.sql
        target: /docker-entrypoint-initdb.d/init-scripts/99-jwt.sql
      -
        type: bind
        source: ./volumes/db/logs.sql
        target: /docker-entrypoint-initdb.d/migrations/99-logs.sql
      - 'supabase-db-config:/etc/postgresql-custom'
  supabase-analytics:
    image: 'supabase/logflare:1.4.0'
    healthcheck:
      test:
        - CMD
        - curl
        - 'http://127.0.0.1:4000/health'
      timeout: 5s
      interval: 5s
      retries: 10
    restart: unless-stopped
    depends_on:
      supabase-db:
        condition: service_healthy
    environment:
      - LOGFLARE_NODE_HOST=127.0.0.1
      - DB_USERNAME=supabase_admin
      - 'DB_DATABASE=${POSTGRES_DB:-postgres}'
      - 'DB_HOSTNAME=${POSTGRES_HOSTNAME:-supabase-db}'
      - 'DB_PORT=${POSTGRES_PORT:-5432}'
      - 'DB_PASSWORD=${SERVICE_PASSWORD_POSTGRES}'
      - DB_SCHEMA=_analytics
      - 'LOGFLARE_API_KEY=${SERVICE_PASSWORD_LOGFLARE}'
      - LOGFLARE_SINGLE_TENANT=true
      - LOGFLARE_SINGLE_TENANT_MODE=true
      - LOGFLARE_SUPABASE_MODE=true
      - LOGFLARE_MIN_CLUSTER_SIZE=1
      - 'POSTGRES_BACKEND_URL=postgresql://supabase_admin:${SERVICE_PASSWORD_POSTGRES}@${POSTGRES_HOSTNAME:-supabase-db}:${POSTGRES_PORT:-5432}/${POSTGRES_DB:-postgres}'
      - POSTGRES_BACKEND_SCHEMA=_analytics
      - LOGFLARE_FEATURE_FLAG_OVERRIDE=multibackend=true

And here is my .env file:

ADDITIONAL_REDIRECT_URLS=
API_EXTERNAL_URL=http://supabase-kong:8000
DISABLE_SIGNUP=false
ENABLE_ANONYMOUS_USERS=false
ENABLE_EMAIL_AUTOCONFIRM=false
ENABLE_EMAIL_SIGNUP=true
ENABLE_PHONE_AUTOCONFIRM=true
ENABLE_PHONE_SIGNUP=true
FUNCTIONS_VERIFY_JWT=false
IMGPROXY_ENABLE_WEBP_DETECTION=true
JWT_EXPIRY=3600
MAILER_SUBJECTS_CONFIRMATION=
MAILER_SUBJECTS_EMAIL_CHANGE=
MAILER_SUBJECTS_INVITE=
MAILER_SUBJECTS_MAGIC_LINK=
MAILER_SUBJECTS_RECOVERY=
MAILER_TEMPLATES_CONFIRMATION=
MAILER_TEMPLATES_EMAIL_CHANGE=
MAILER_TEMPLATES_INVITE=
MAILER_TEMPLATES_MAGIC_LINK=
MAILER_TEMPLATES_RECOVERY=
MAILER_URLPATHS_CONFIRMATION=/auth/v1/verify
MAILER_URLPATHS_EMAIL_CHANGE=/auth/v1/verify
MAILER_URLPATHS_INVITE=/auth/v1/verify
MAILER_URLPATHS_RECOVERY=/auth/v1/verify
PGRST_DB_SCHEMAS=public
POSTGRES_DB=postgres
POSTGRES_HOSTNAME=supabase-db
POSTGRES_PORT=5432
SECRET_PASSWORD_REALTIME=
SERVICE_FQDN_SUPABASEKONG=http://supabasekong-d4kgsgk.xxx.xxx.xxx.xxx.sslip.io/
SMTP_ADMIN_EMAIL=
SMTP_HOST=
SMTP_PASS=
SMTP_PORT=587
SMTP_SENDER_NAME=
SMTP_USER=
STUDIO_DEFAULT_ORGANIZATION=Default Organization
STUDIO_DEFAULT_PROJECT=Default Project

As you recommended, I removed POSTGRES_HOST from the .env file and added POSTGRES_HOSTNAME, and I changed the uses of POSTGRES_HOST in docker-compose.yml to POSTGRES_HOSTNAME.

Also, here is what I got when I tried to manually log into Postgres inside the supabase-db service:

$ psql -U supabase_admin -W
Password: 
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  password authentication failed for user "supabase_admin"

Mortalife commented 1 week ago

@deozza Try stopping the stack, removing the associated _supabase-db-data volume, and restarting the stack.

You can find the volume by running docker volume ls and looking for the one named <the_random_stack_string>_supabase-db-data, then remove it by running docker volume rm <name>.

For example, my random stack string (the one that's in front of my URL etc.) is rwkg84s, so my volume is rwkg84s_supabase-db-data and I would run docker volume rm rwkg84s_supabase-db-data.

Once you've done that you should be able to start the service again and hopefully the migrations will run correctly.
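The lookup above can be scripted; a small sketch, using the example prefix rwkg84s from this comment (substitute your own, and only remove the volume while the stack is stopped):

```shell
# Build the volume name from the stack's random prefix and show the removal
# command. The prefix is the example one from above; find the real one with
# `docker volume ls`.
STACK_PREFIX=rwkg84s                        # hypothetical example prefix
DB_VOLUME="${STACK_PREFIX}_supabase-db-data"
echo "docker volume rm ${DB_VOLUME}"        # printed only; run it once verified
```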

deozza commented 1 week ago

This worked perfectly for me. For future reference, here are the steps I took to resolve it:

  1. First deploy the stack via Coolify.
  2. Wait for the deployment to fail.
  3. Stop all containers.
  4. In the environment variable panel, or directly in the .env file on the server, replace the POSTGRES_HOST variable with POSTGRES_HOSTNAME.
  5. In the service stack panel, click "Edit Compose File" (or edit the docker-compose.yml file directly on the server) and replace all uses of the POSTGRES_HOST variable with POSTGRES_HOSTNAME.
  6. On the server, run docker compose down --volumes to remove the old db config.
  7. Deploy the stack again.
  8. It should work.

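
Step 4 (the .env rename) can also be sketched as a one-liner; shown here against a throwaway sample file, since the real values live on your server or in Coolify's environment panel:

```shell
# Rename POSTGRES_HOST to POSTGRES_HOSTNAME in a sample .env file.
# The anchored match 's/^POSTGRES_HOST=/' cannot touch a POSTGRES_HOSTNAME=
# line, because there is no '=' directly after HOST in that name.
cat > /tmp/sample.env <<'EOF'
POSTGRES_DB=postgres
POSTGRES_HOST=supabase-db
POSTGRES_PORT=5432
EOF
sed -i 's/^POSTGRES_HOST=/POSTGRES_HOSTNAME=/' /tmp/sample.env
cat /tmp/sample.env
```
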
olsoda commented 6 days ago

That workaround seemed to work for me; however, supabase-rest is still unhealthy, and the API Docs page of the dashboard says the public schema isn't accessible...

I don't experience that problem. I would double-check that you've replaced all of the POSTGRES_HOST variable instances and that there aren't any extra spaces etc. where there shouldn't be. If it still persists, it might be worth removing the supabase-db volume and restarting.

I tried removing the volumes after double-checking the host values... Rest is still listed as unhealthy, and it still says the public schema is not available for me.

diegofino15 commented 4 days ago

Hello, I'm facing the exact same problem, but none of the solutions provided worked for me. After replacing all the POSTGRES_HOST instances with POSTGRES_HOSTNAME and removing the volumes, upon restart the supabase_db successfully creates the supabase_admin role, but no password is assigned to it?

Here are the logs of the supabase_db:

The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with this locale configuration:
  provider:    libc
  LC_COLLATE:  C.UTF-8
  LC_CTYPE:    C.UTF-8
  LC_MESSAGES: en_US.UTF-8
  LC_MONETARY: en_US.UTF-8
  LC_NUMERIC:  en_US.UTF-8
  LC_TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
syncing data to disk ... ok
Success. You can now start the database server using:
    pg_ctl -D /var/lib/postgresql/data -l logfile start
waiting for server to start.... done
server started
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/init-scripts
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/migrate.sh
/docker-entrypoint-initdb.d/migrate.sh: running /docker-entrypoint-initdb.d/init-scripts/00-schema.sql
CREATE ROLE
REVOKE
CREATE SCHEMA
CREATE FUNCTION
REVOKE
GRANT
/docker-entrypoint-initdb.d/migrate.sh: running /docker-entrypoint-initdb.d/init-scripts/00000000000000-initial-schema.sql
CREATE PUBLICATION
CREATE ROLE
ALTER ROLE
CREATE ROLE
CREATE ROLE
GRANT ROLE
CREATE SCHEMA
CREATE EXTENSION
CREATE EXTENSION
CREATE EXTENSION
CREATE ROLE
CREATE ROLE
CREATE ROLE
CREATE ROLE
GRANT ROLE
GRANT ROLE
GRANT ROLE
GRANT ROLE
GRANT
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
GRANT
ALTER ROLE
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
ALTER ROLE
ALTER ROLE
/docker-entrypoint-initdb.d/migrate.sh: running /docker-entrypoint-initdb.d/init-scripts/00000000000001-auth-schema.sql
CREATE SCHEMA

CREATE TABLE

CREATE INDEX

CREATE INDEX
COMMENT

CREATE TABLE

CREATE INDEX

CREATE INDEX

CREATE INDEX
COMMENT

CREATE TABLE
COMMENT

CREATE TABLE

CREATE INDEX
COMMENT

CREATE TABLE
COMMENT
INSERT 0 7
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
GRANT
CREATE ROLE
GRANT
GRANT
GRANT
ALTER ROLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
/docker-entrypoint-initdb.d/migrate.sh: running /docker-entrypoint-initdb.d/init-scripts/00000000000002-storage-schema.sql
CREATE SCHEMA
GRANT
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES

CREATE TABLE

CREATE INDEX

CREATE TABLE

CREATE INDEX

CREATE INDEX
ALTER TABLE
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION

CREATE TABLE
CREATE ROLE
GRANT
GRANT
GRANT
ALTER ROLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER FUNCTION
ALTER FUNCTION
ALTER FUNCTION
ALTER FUNCTION
/docker-entrypoint-initdb.d/migrate.sh: running /docker-entrypoint-initdb.d/init-scripts/00000000000003-post-setup.sql
ALTER ROLE
ALTER ROLE
CREATE FUNCTION
CREATE EVENT TRIGGER
COMMENT
CREATE FUNCTION
COMMENT
DO
CREATE ROLE
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
/docker-entrypoint-initdb.d/migrate.sh: running /docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql
psql: error: /docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql: Permission denied
PostgreSQL Database directory appears to contain a database; Skipping initialization
172.31.0.7 2024-07-04 09:06:44.563 UTC [48] supabase_admin@postgres FATAL:  password authentication failed for user "supabase_admin"
172.31.0.7 2024-07-04 09:06:44.563 UTC [48] supabase_admin@postgres DETAIL:  User "supabase_admin" has no password assigned.
    Connection matched pg_hba.conf line 89: "host  all  all  172.16.0.0/12  scram-sha-256"

In the logs it says that the role doesn't have a password, so I ran alter role supabase_admin with password [password] on the supabase_db. Now supabase_analytics connects to it but throws a new error:

(These are the logs of the supabase_analytics)

08:54:24.570 [notice] Application logflare exited: Logflare.Application.start(:normal, []) returned an error: shutdown: failed to start child: Logflare.SystemMetricsSup
    ** (EXIT) shutdown: failed to start child: Logflare.SystemMetrics.AllLogsLogged
        ** (EXIT) an exception was raised:
            ** (Postgrex.Error) ERROR 42P01 (undefined_table) relation "system_metrics" does not exist
    query: SELECT s0."id", s0."all_logs_logged", s0."node", s0."inserted_at", s0."updated_at" FROM "system_metrics" AS s0 WHERE (s0."node" = $1)
                (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
                (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:828: Ecto.Adapters.SQL.execute/6
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:151: Ecto.Repo.Queryable.one/3
                (logflare 1.4.0) lib/logflare/system_metrics/all_logs_logged/all_logs_logged.ex:20: Logflare.SystemMetrics.AllLogsLogged.init/1
                (stdlib 4.3.1) gen_server.erl:851: :gen_server.init_it/2
                (stdlib 4.3.1) gen_server.erl:814: :gen_server.init_it/6
{"Kernel pid terminated",application_controller,"{application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetricsSup',{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetrics.AllLogsLogged',{#{'__exception__' => true,'__struct__' => 'Elixir.Postgrex.Error',connection_id => 132,message => nil,postgres => #{code => undefined_table,file => <<\"parse_relation.c\">>,line => <<\"1392\">>,message => <<\"relation \\"system_metrics\\" does not exist\">>,pg_code => <<\"42P01\">>,position => <<\"89\">>,routine => <<\"parserOpenTable\">>,severity => <<\"ERROR\">>,unknown => <<\"ERROR\">>},query => <<\"SELECT s0.\\"id\\", s0.\\"all_logs_logged\\", s0.\\"node\\", s0.\\"inserted_at\\", s0.\\"updated_at\\" FROM \\"system_metrics\\" AS s0 WHERE (s0.\\"node\\" = $1)\">>},[{'Elixir.Ecto.Adapters.SQL',raise_sql_call_error,1,[{file,\"lib/ecto/adapters/sql.ex\"},{line,913},{error_info,#{module => 'Elixir.Exception'}}]},{'Elixir.Ecto.Adapters.SQL',execute,6,[{file,\"lib/ecto/adapters/sql.ex\"},{line,828}]},{'Elixir.Ecto.Repo.Queryable',execute,4,[{file,\"lib/ecto/repo/queryable.ex\"},{line,229}]},{'Elixir.Ecto.Repo.Queryable',all,3,[{file,\"lib/ecto/repo/queryable.ex\"},{line,19}]},{'Elixir.Ecto.Repo.Queryable',one,3,[{file,\"lib/ecto/repo/queryable.ex\"},{line,151}]},{'Elixir.Logflare.SystemMetrics.AllLogsLogged',init,1,[{file,\"lib/logflare/system_metrics/all_logs_logged/all_logs_logged.ex\"},{line,20}]},{gen_server,init_it,2,[{file,\"gen_server.erl\"},{line,851}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,814}]}]}}}}},{'Elixir.Logflare.Application',start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetricsSup',{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetrics.AllLogsLogged',{#{'__exception__' => true,'__struct__' => 'Elixir.Postgrex.Error',connection_id => 132,message => nil,postgres => #{code => undefined_table,file => <<"parse_relation.c">>,line => <<"1392">>,message => <<"relation \"system_metrics\" does not exist">>,pg_code => <<"42P01">>,position => <<"89">>,routine => <<"parserOpenTable">>,severity => <<"ERROR">>,unknown => <<"ERROR">>},query => <<"SELECT s0.\"id\", s0.\"all_logs_logged\", s0.\"node\", s0.\"inserted_at\", s0.\"updated_at\" FROM \"system_metrics\" AS s0 WHERE (s0.\"node\" = $1)">>},[{'Elixir.Ecto.Adapters.SQL',raise_sql_call_error,1,[{file,"lib/ecto/adapters/sql.ex"},{line,913},{error_info,#{module => 'Elixir.Exception'}}]},{'Elixir.Ecto.Adapters.SQL',execute,6,[{file,"lib/ecto/adapters/sql.ex"},{line,828}]},{'Elixir.Ecto.Repo.Q
Crash dump is being written to: erl_crash.dump...done
LOGFLARE_NODE_HOST is: 127.0.0.1
08:54:27.231 [info] Starting migration
08:54:27.547 [error] Could not create schema migrations table. This error usually happens due to the following:
  * The database does not exist
  * The "schema_migrations" table, which Ecto uses for managing
    migrations, was defined by another library
  * There is a deadlock while migrating (such as using concurrent
    indexes with a migration_lock)
To fix the first issue, run "mix ecto.create" for the desired MIX_ENV.
To address the second, you can run "mix ecto.drop" followed by
"mix ecto.create", both for the desired MIX_ENV. Alternatively you may
configure Ecto to use another table and/or repository for managing
migrations:
    config :logflare, Logflare.Repo,
      migration_source: "some_other_table_for_schema_migrations",
      migration_repo: AnotherRepoForSchemaMigrations
The full error report is shown below.
** (Postgrex.Error) ERROR 3F000 (invalid_schema_name) no schema has been selected to create in
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
    (elixir 1.14.4) lib/enum.ex:1658: Enum."-map/2-lists^map/1-0-"/2
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:1005: Ecto.Adapters.SQL.execute_ddl/4
    (ecto_sql 3.10.1) lib/ecto/migrator.ex:738: Ecto.Migrator.verbose_schema_migration/3
    (ecto_sql 3.10.1) lib/ecto/migrator.ex:552: Ecto.Migrator.lock_for_migrations/4
    (ecto_sql 3.10.1) lib/ecto/migrator.ex:428: Ecto.Migrator.run/4
    (ecto_sql 3.10.1) lib/ecto/migrator.ex:170: Ecto.Migrator.with_repo/3
    nofile:1: (file)

I really tried everything that was said in this discussion and in others online, but could not get it to work...

gBusato commented 3 days ago

Also having the issue

roddutra commented 2 days ago

@diegofino15 and @gBusato, I can confirm that @Mortalife (here) and @deozza (here) instructions worked for me (thank you both).

To reiterate, make sure to:

  1. stop the services first
  2. delete the <uuid>_supabase-db-data volume as per @Mortalife's instructions
  3. in Coolify's Environment Variables section, rename the POSTGRES_HOST variable to POSTGRES_HOSTNAME
  4. ⚠️ in Coolify's Service Stack > Edit Compose File, rename all instances of the POSTGRES_HOST variable to POSTGRES_HOSTNAME and save (this might make step 3 redundant but I didn't test it)
  5. restart the service

I had missed Step 4 and the stack just recreated that POSTGRES_HOST Environment Variable.