Closed: Andres6936 closed this issue 1 year ago.
I checked out the tag v0.23.06 and it works.
gitpod /workspace/supabase/docker ((v0.23.06)) $ ls
deploy dev docker-compose-logging.yml docker-compose.yml README.md volumes
gitpod /workspace/supabase/docker ((v0.23.06)) $ docker compose pull
WARN[0000] The "MFA_ENABLED" variable is not set. Defaulting to a blank string.
[+] Pulling 92/13
✔ storage 9 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 1.6s
✔ kong 4 layers [⣿⣿⣿⣿] 0B/0B Pulled 1.5s
✔ db 23 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 24.9s
✔ realtime 12 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 1.5s
✔ auth 5 layers [⣿⣿⣿⣿⣿] 0B/0B Pulled 12.5s
✔ meta 9 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 15.7s
✔ studio 10 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 20.7s
✔ imgproxy 6 layers [⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 1.6s
✔ functions 3 layers [⣿⣿⣿] 0B/0B Pulled 13.4s
✔ rest 1 layers [⣿] 0B/0B Pulled 6.1s
gitpod /workspace/supabase/docker ((v0.23.06)) $ docker compose up
WARN[0000] The "MFA_ENABLED" variable is not set. Defaulting to a blank string.
WARN[0000] Found orphan containers ([supabase-analytics supabase-vector]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
[+] Running 10/10
✔ Container supabase-imgproxy Created 0.0s
✔ Container supabase-studio Recreated 0.7s
✔ Container supabase-edge-functions Recreated 0.0s
✔ Container supabase-kong Recreated 0.7s
✔ Container supabase-db Recreated 1.2s
✔ Container realtime-dev.supabase-realtime Recreated 0.5s
✔ Container supabase-auth Recreated 0.1s
✔ Container supabase-meta Recreated 0.5s
✔ Container supabase-rest Recreated 0.1s
✔ Container supabase-storage Recreated 0.5s
Attaching to realtime-dev.supabase-realtime, supabase-auth, supabase-db, supabase-edge-functions, supabase-imgproxy, supabase-kong, supabase-meta, supabase-rest, supabase-storage, supabase-studio
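As an aside, the orphan-container warning above can be cleared by re-running the command with the flag the warning itself suggests; a minimal sketch:
docker compose up --remove-orphans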
I ran into this issue, too.
I'm not sure what the problem with vector was, but I did nothing special, just the typical Microsoft method: switch it off and on, and off and on.
I also have this problem and wasn't able to fix it.
Some of the logs from docker:
supabase-analytics | ** (DBConnection.ConnectionError) connection not available and request was dropped from queue after 10976ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:
supabase-analytics | 1. Ensuring your database is available and that you can connect to it
supabase-analytics | 2. Tracking down slow queries and making sure they are running fast enough
supabase-analytics | 3. Increasing the pool_size (although this increases resource consumption)
supabase-analytics | 4. Allowing requests to wait longer by increasing :queue_target and :queue_interval
supabase-analytics | See DBConnection.start_link/2 for more information
supabase-analytics | (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
supabase-analytics | (elixir 1.14.4) lib/enum.ex:1658: Enum."-map/2-lists^map/1-0-"/2
supabase-analytics | (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:1005: Ecto.Adapters.SQL.execute_ddl/4
supabase-analytics | (ecto_sql 3.10.1) lib/ecto/migrator.ex:738: Ecto.Migrator.verbose_schema_migration/3
supabase-analytics | (ecto_sql 3.10.1) lib/ecto/migrator.ex:552: Ecto.Migrator.lock_for_migrations/4
supabase-analytics | (ecto_sql 3.10.1) lib/ecto/migrator.ex:428: Ecto.Migrator.run/4
supabase-analytics | (ecto_sql 3.10.1) lib/ecto/migrator.ex:170: Ecto.Migrator.with_repo/3
supabase-analytics | nofile:1: (file)
supabase-db | 172.20.0.5 2023-08-31 14:47:15.421 UTC [74] supabase_admin@postgres FATAL: password authentication failed for user "supabase_admin"
supabase-db | 172.20.0.5 2023-08-31 14:47:15.421 UTC [74] supabase_admin@postgres DETAIL: User "supabase_admin" has no password assigned.
supabase-db | Connection matched pg_hba.conf line 89: "host all all 172.16.0.0/12 scram-sha-256"
The only change I made to .env is changing POSTGRES_PORT, since I already have a Postgres instance running locally.
After checking out the changes from #17122, I was able to get it running by adding VECTOR_API_PORT=9001 to my .env file.
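For reference, the relevant .env lines might look like this (the port values here are examples and assumptions, not verified defaults):
POSTGRES_PORT=54322      # any free port, to avoid clashing with a local Postgres on 5432
VECTOR_API_PORT=9001     # the variable introduced with #17122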
That's great - thanks for the update @pierreavizou!
same error
➜ docker git:(master) docker compose up -d
WARN[0000] The "MFA_ENABLED" variable is not set. Defaulting to a blank string.
[+] Building 0.0s (0/0)
[+] Running 13/13
✔ Network docker_default C... 0.0s
✔ Container supabase-imgproxy Started 0.5s
✘ Container supabase-vector Error 1.0s
✔ Container supabase-db Cr... 0.0s
✔ Container supabase-analytics Created 0.0s
✔ Container supabase-edge-functions Created 0.0s
✔ Container supabase-rest Created 0.0s
✔ Container supabase-auth Created 0.0s
✔ Container supabase-studio Created 0.0s
✔ Container supabase-kong Created 0.0s
✔ Container supabase-meta Created 0.0s
✔ Container realtime-dev.supabase-realtime Created 0.0s
✔ Container supabase-storage Created 0.0s
same
Hey, I'm also facing the same issue!
same error
I had the same error and took a look in the console. I found this error message:
I searched for the file via Docker Desktop, and the file that caused the problem was /etc/vector/vector.yml; there I saw that the file was bind mounted. To edit the file I clicked on the Bind mounts tab in Docker Desktop, where I could edit it. I added the missing ", err" as suggested in the error message and it fixed the problem for me.
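A quick way to sanity-check an edited config before restarting is Vector's built-in validator; a sketch, assuming the vector binary is available inside the container:
docker exec supabase-vector vector validate /etc/vector/vector.yml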
I managed to fix /etc/vector/vector.yml like this:
#Postgres logs some messages to stderr which we map to warning severity level
db_logs:
type: remap
inputs:
- router.db
source: |-
.metadata.host = "db-default"
parsed, err = parse_regex(.event_message, r'.*(?P<level>INFO|NOTICE|WARNING|ERROR|LOG|FATAL|PANIC?):.*', numeric_groups: true)
if err != null || parsed == null {
.metadata.parsed.error_severity = "info"
} else {
.metadata.parsed.error_severity = upcase!(parsed.level)
}
.metadata.parsed.timestamp, err = from_unix_timestamp(.timestamp)
But it didn't work, so I did a clean install and created my docker-compose.yml like this:
# Usage
# Start: docker compose up
# With helpers: docker compose -f docker-compose.yml -f ./dev/docker-compose.dev.yml up
# Stop: docker compose down
# Destroy: docker compose -f docker-compose.yml -f ./dev/docker-compose.dev.yml down -v --remove-orphans
version: "3.8"
services:
studio:
container_name: supabase-studio
image: supabase/studio:20230912-748fd33
restart: unless-stopped
healthcheck:
test:
[
"CMD",
"node",
"-e",
"require('http').get('http://localhost:3000/api/profile', (r) => {if (r.statusCode !== 200) throw new Error(r.statusCode)})"
]
timeout: 5s
interval: 5s
retries: 3
depends_on:
analytics:
condition: service_healthy
environment:
STUDIO_PG_META_URL: http://meta:8080
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
DEFAULT_ORGANIZATION_NAME: ${STUDIO_DEFAULT_ORGANIZATION}
DEFAULT_PROJECT_NAME: ${STUDIO_DEFAULT_PROJECT}
SUPABASE_URL: http://kong:8000
SUPABASE_PUBLIC_URL: ${SUPABASE_PUBLIC_URL}
SUPABASE_ANON_KEY: ${ANON_KEY}
SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
LOGFLARE_URL: http://analytics:4000
NEXT_PUBLIC_ENABLE_LOGS: true
# Comment to use Big Query backend for analytics
NEXT_ANALYTICS_BACKEND_PROVIDER: postgres
# Uncomment to use Big Query backend for analytics
# NEXT_ANALYTICS_BACKEND_PROVIDER: bigquery
kong:
container_name: supabase-kong
image: kong:2.8.1
restart: unless-stopped
# https://unix.stackexchange.com/a/294837
entrypoint: bash -c 'eval "echo \"$$(cat ~/temp.yml)\"" > ~/kong.yml && /docker-entrypoint.sh kong docker-start'
ports:
- ${KONG_HTTP_PORT}:8000/tcp
- ${KONG_HTTPS_PORT}:8443/tcp
depends_on:
analytics:
condition: service_healthy
environment:
KONG_DATABASE: "off"
KONG_DECLARATIVE_CONFIG: /home/kong/kong.yml
# https://github.com/supabase/cli/issues/14
KONG_DNS_ORDER: LAST,A,CNAME
KONG_PLUGINS: request-transformer,cors,key-auth,acl,basic-auth
KONG_NGINX_PROXY_PROXY_BUFFER_SIZE: 160k
KONG_NGINX_PROXY_PROXY_BUFFERS: 64 160k
SUPABASE_ANON_KEY: ${ANON_KEY}
SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
DASHBOARD_USERNAME: ${DASHBOARD_USERNAME}
DASHBOARD_PASSWORD: ${DASHBOARD_PASSWORD}
volumes:
# https://github.com/supabase/supabase/issues/12661
- ./volumes/api/kong.yml:/home/kong/temp.yml:ro
auth:
container_name: supabase-auth
image: supabase/gotrue:v2.82.4
depends_on:
db:
# Disable this if you are using an external Postgres database
condition: service_healthy
analytics:
condition: service_healthy
healthcheck:
test:
[
"CMD",
"wget",
"--no-verbose",
"--tries=1",
"--spider",
"http://localhost:9999/health"
]
timeout: 5s
interval: 5s
retries: 3
restart: unless-stopped
environment:
GOTRUE_API_HOST: 0.0.0.0
GOTRUE_API_PORT: 9999
API_EXTERNAL_URL: ${API_EXTERNAL_URL}
GOTRUE_DB_DRIVER: postgres
GOTRUE_DB_DATABASE_URL: postgres://supabase_auth_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
GOTRUE_SITE_URL: ${SITE_URL}
GOTRUE_URI_ALLOW_LIST: ${ADDITIONAL_REDIRECT_URLS}
GOTRUE_DISABLE_SIGNUP: ${DISABLE_SIGNUP}
GOTRUE_JWT_ADMIN_ROLES: service_role
GOTRUE_JWT_AUD: authenticated
GOTRUE_JWT_DEFAULT_GROUP_NAME: authenticated
GOTRUE_JWT_EXP: ${JWT_EXPIRY}
GOTRUE_JWT_SECRET: ${JWT_SECRET}
GOTRUE_EXTERNAL_EMAIL_ENABLED: ${ENABLE_EMAIL_SIGNUP}
GOTRUE_MAILER_AUTOCONFIRM: ${ENABLE_EMAIL_AUTOCONFIRM}
# GOTRUE_MAILER_SECURE_EMAIL_CHANGE_ENABLED: true
# GOTRUE_SMTP_MAX_FREQUENCY: 1s
GOTRUE_SMTP_ADMIN_EMAIL: ${SMTP_ADMIN_EMAIL}
GOTRUE_SMTP_HOST: ${SMTP_HOST}
GOTRUE_SMTP_PORT: ${SMTP_PORT}
GOTRUE_SMTP_USER: ${SMTP_USER}
GOTRUE_SMTP_PASS: ${SMTP_PASS}
GOTRUE_SMTP_SENDER_NAME: ${SMTP_SENDER_NAME}
GOTRUE_MAILER_URLPATHS_INVITE: ${MAILER_URLPATHS_INVITE}
GOTRUE_MAILER_URLPATHS_CONFIRMATION: ${MAILER_URLPATHS_CONFIRMATION}
GOTRUE_MAILER_URLPATHS_RECOVERY: ${MAILER_URLPATHS_RECOVERY}
GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE: ${MAILER_URLPATHS_EMAIL_CHANGE}
GOTRUE_EXTERNAL_PHONE_ENABLED: ${ENABLE_PHONE_SIGNUP}
GOTRUE_SMS_AUTOCONFIRM: ${ENABLE_PHONE_AUTOCONFIRM}
MFA_ENABLED: ${MFA_ENABLED}
rest:
container_name: supabase-rest
image: postgrest/postgrest:latest
depends_on:
db:
# Disable this if you are using an external Postgres database
condition: service_healthy
analytics:
condition: service_healthy
restart: unless-stopped
environment:
PGRST_DB_URI: postgres://authenticator:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
PGRST_DB_SCHEMAS: ${PGRST_DB_SCHEMAS}
PGRST_DB_ANON_ROLE: anon
PGRST_JWT_SECRET: ${JWT_SECRET}
PGRST_DB_USE_LEGACY_GUCS: "false"
PGRST_APP_SETTINGS_JWT_SECRET: ${JWT_SECRET}
PGRST_APP_SETTINGS_JWT_EXP: ${JWT_EXPIRY}
command: "postgrest"
realtime:
container_name: realtime-dev.supabase-realtime
image: supabase/realtime:v2.22.15
depends_on:
db:
# Disable this if you are using an external Postgres database
condition: service_healthy
analytics:
condition: service_healthy
healthcheck:
test:
[
"CMD",
"bash",
"-c",
"printf \\0 > /dev/tcp/localhost/4000"
]
timeout: 5s
interval: 5s
retries: 3
restart: unless-stopped
environment:
PORT: 4000
DB_HOST: ${POSTGRES_HOST}
DB_PORT: ${POSTGRES_PORT}
DB_USER: supabase_admin
DB_PASSWORD: ${POSTGRES_PASSWORD}
DB_NAME: ${POSTGRES_DB}
DB_AFTER_CONNECT_QUERY: 'SET search_path TO _realtime'
DB_ENC_KEY: supabaserealtime
API_JWT_SECRET: ${JWT_SECRET}
FLY_ALLOC_ID: fly123
FLY_APP_NAME: realtime
SECRET_KEY_BASE: UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
ERL_AFLAGS: -proto_dist inet_tcp
ENABLE_TAILSCALE: "false"
DNS_NODES: "''"
command: >
sh -c "/app/bin/migrate && /app/bin/realtime eval 'Realtime.Release.seeds(Realtime.Repo)' && /app/bin/server"
storage:
container_name: supabase-storage
image: supabase/storage-api:v0.40.4
depends_on:
db:
# Disable this if you are using an external Postgres database
condition: service_healthy
rest:
condition: service_started
imgproxy:
condition: service_started
healthcheck:
test:
[
"CMD",
"wget",
"--no-verbose",
"--tries=1",
"--spider",
"http://localhost:5000/status"
]
timeout: 5s
interval: 5s
retries: 3
restart: unless-stopped
environment:
ANON_KEY: ${ANON_KEY}
SERVICE_KEY: ${SERVICE_ROLE_KEY}
POSTGREST_URL: http://rest:3000
PGRST_JWT_SECRET: ${JWT_SECRET}
DATABASE_URL: postgres://supabase_storage_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
FILE_SIZE_LIMIT: 52428800
STORAGE_BACKEND: file
FILE_STORAGE_BACKEND_PATH: /var/lib/storage
TENANT_ID: stub
# TODO: https://github.com/supabase/storage-api/issues/55
REGION: stub
GLOBAL_S3_BUCKET: stub
ENABLE_IMAGE_TRANSFORMATION: "true"
IMGPROXY_URL: http://imgproxy:5001
volumes:
- ./volumes/storage:/var/lib/storage:z
imgproxy:
container_name: supabase-imgproxy
image: darthsim/imgproxy:v3.8.0
healthcheck:
test: [ "CMD", "imgproxy", "health" ]
timeout: 5s
interval: 5s
retries: 3
environment:
IMGPROXY_BIND: ":5001"
IMGPROXY_LOCAL_FILESYSTEM_ROOT: /
IMGPROXY_USE_ETAG: "true"
IMGPROXY_ENABLE_WEBP_DETECTION: ${IMGPROXY_ENABLE_WEBP_DETECTION}
volumes:
- ./volumes/storage:/var/lib/storage:z
meta:
container_name: supabase-meta
image: supabase/postgres-meta:v0.68.0
depends_on:
db:
# Disable this if you are using an external Postgres database
condition: service_healthy
analytics:
condition: service_healthy
restart: unless-stopped
environment:
PG_META_PORT: 8080
PG_META_DB_HOST: ${POSTGRES_HOST}
PG_META_DB_PORT: ${POSTGRES_PORT}
PG_META_DB_NAME: ${POSTGRES_DB}
PG_META_DB_USER: supabase_admin
PG_META_DB_PASSWORD: ${POSTGRES_PASSWORD}
functions:
container_name: supabase-edge-functions
image: supabase/edge-runtime:v1.16.0
restart: unless-stopped
depends_on:
analytics:
condition: service_healthy
environment:
JWT_SECRET: ${JWT_SECRET}
SUPABASE_URL: http://kong:8000
SUPABASE_ANON_KEY: ${ANON_KEY}
SUPABASE_SERVICE_ROLE_KEY: ${SERVICE_ROLE_KEY}
SUPABASE_DB_URL: postgresql://postgres:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
# TODO: Allow configuring VERIFY_JWT per function. This PR might help: https://github.com/supabase/cli/pull/786
VERIFY_JWT: "${FUNCTIONS_VERIFY_JWT}"
volumes:
- ./volumes/functions:/home/deno/functions:Z
command:
- start
- --main-service
- /home/deno/functions/main
analytics:
container_name: supabase-analytics
image: supabase/logflare:1.4.0
healthcheck:
test: [ "CMD", "curl", "http://localhost:4000/health" ]
timeout: 5s
interval: 5s
retries: 10
restart: unless-stopped
depends_on:
db:
# Disable this if you are using an external Postgres database
condition: service_healthy
# Uncomment to use Big Query backend for analytics
# volumes:
# - type: bind
# source: ${PWD}/gcloud.json
# target: /opt/app/rel/logflare/bin/gcloud.json
# read_only: true
environment:
LOGFLARE_NODE_HOST: 127.0.0.1
DB_USERNAME: supabase_admin
DB_DATABASE: ${POSTGRES_DB}
DB_HOSTNAME: ${POSTGRES_HOST}
DB_PORT: ${POSTGRES_PORT}
DB_PASSWORD: ${POSTGRES_PASSWORD}
DB_SCHEMA: _analytics
LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
LOGFLARE_SINGLE_TENANT: true
LOGFLARE_SUPABASE_MODE: true
LOGFLARE_MIN_CLUSTER_SIZE: 1
RELEASE_COOKIE: cookie
# Comment variables to use Big Query backend for analytics
POSTGRES_BACKEND_URL: postgresql://supabase_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
POSTGRES_BACKEND_SCHEMA: _analytics
LOGFLARE_FEATURE_FLAG_OVERRIDE: multibackend=true
# Uncomment to use Big Query backend for analytics
# GOOGLE_PROJECT_ID: ${GOOGLE_PROJECT_ID}
# GOOGLE_PROJECT_NUMBER: ${GOOGLE_PROJECT_NUMBER}
ports:
- 4000:4000
entrypoint: |
sh -c `cat <<'EOF' > run.sh && sh run.sh
./logflare eval Logflare.Release.migrate
./logflare start --sname logflare
EOF
`
# Comment out everything below this point if you are using an external Postgres database
db:
container_name: supabase-db
image: supabase/postgres:15.1.0.118
healthcheck:
test: pg_isready -U postgres -h localhost
interval: 5s
timeout: 5s
retries: 10
depends_on:
vector:
condition: service_healthy
command:
- postgres
- -c
- config_file=/etc/postgresql/postgresql.conf
- -c
- log_min_messages=fatal # prevents Realtime polling queries from appearing in logs
restart: unless-stopped
ports:
# Pass down internal port because it's set dynamically by other services
- 127.0.0.1:${POSTGRES_PORT}:${POSTGRES_PORT}
environment:
POSTGRES_HOST: /var/run/postgresql
PGPORT: ${POSTGRES_PORT}
POSTGRES_PORT: ${POSTGRES_PORT}
PGPASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
PGDATABASE: ${POSTGRES_DB}
POSTGRES_DB: ${POSTGRES_DB}
volumes:
- ./volumes/db/realtime.sql:/docker-entrypoint-initdb.d/migrations/99-realtime.sql:Z
# Must be superuser to create event trigger
- ./volumes/db/webhooks.sql:/docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql:Z
# Must be superuser to alter reserved role
- ./volumes/db/roles.sql:/docker-entrypoint-initdb.d/init-scripts/99-roles.sql:Z
# PGDATA directory is persisted between restarts
- ./volumes/db/data:/var/lib/postgresql/data:Z
# Changes required for Analytics support
- ./volumes/db/logs.sql:/docker-entrypoint-initdb.d/migrations/99-logs.sql:Z
vector:
container_name: supabase-vector
image: timberio/vector:0.28.1-alpine
healthcheck:
test:
[
"CMD",
"wget",
"--no-verbose",
"--tries=1",
"--spider",
"http://vector:9001/health"
]
timeout: 5s
interval: 5s
retries: 3
volumes:
- ./volumes/logs/vector.yml:/etc/vector/vector.yml:ro
- ${DOCKER_SOCKET_LOCATION}:/var/run/docker.sock:ro
command: [ "--config", "etc/vector/vector.yml" ]
I hope this helps.
For me, doing a sudo docker compose restart solved the issue.
> For me, doing a sudo docker compose restart solved the issue.
THIS! Got the same error and doing a compose restart was the only solution that worked.
> For me, doing a sudo docker compose restart solved the issue.
> THIS! Got the same error and doing a compose restart was the only solution that worked.
I can confirm the same on Windows 11; on my Ubuntu machine I didn't have this issue.
> For me, doing a sudo docker compose restart solved the issue.
> THIS! Got the same error and doing a compose restart was the only solution that worked.
docker compose restart works for me as well, for the following error:
root@base:~/supabase/docker# docker compose up -d
...
...
...
⠿ kong Pulled 38.7s
⠿ 213ec9aee27d Pull complete 15.3s
⠿ a70653f7a2d5 Pull complete 15.3s
⠿ 531e3bd93090 Pull complete 36.8s
⠿ 814dd06d26c7 Pull complete 36.8s
⠿ rest Pulled 10.2s
⠿ e20ce8189632 Pull complete 8.1s
WARN[0000] The "MFA_ENABLED" variable is not set. Defaulting to a blank string.
[+] Running 12/13
⠿ Network docker_default Created 0.2s
⠿ Container supabase-vector Waiting 1.6s
⠿ Container supabase-imgproxy Started 1.0s
⠿ Container supabase-db Created 0.0s
⠿ Container supabase-analytics Created 0.0s
⠿ Container supabase-edge-functions Created 0.1s
⠿ Container supabase-rest Created 0.1s
⠿ Container supabase-meta Created 0.1s
⠿ Container realtime-dev.supabase-realtime Created 0.1s
⠿ Container supabase-kong Created 0.1s
⠿ Container supabase-studio Created 0.1s
⠿ Container supabase-auth Created 0.1s
⠿ Container supabase-storage Created 0.0s
container for service "vector" is unhealthy
root@base:~/supabase/docker# docker compose ps
WARN[0000] The "MFA_ENABLED" variable is not set. Defaulting to a blank string.
NAME COMMAND SERVICE STATUS PORTS
realtime-dev.supabase-realtime "/usr/bin/tini -s -g…" realtime created
supabase-analytics "sh -c '`cat <<EOF >…" analytics created
supabase-auth "gotrue" auth created
supabase-db "docker-entrypoint.s…" db created
supabase-edge-functions "edge-runtime start …" functions created
supabase-imgproxy "imgproxy" imgproxy running (healthy) 8080/tcp
supabase-kong "bash -c 'eval \"echo…" kong created
supabase-meta "docker-entrypoint.s…" meta created
supabase-rest "postgrest" rest created
supabase-storage "docker-entrypoint.s…" storage created
supabase-studio "docker-entrypoint.s…" studio created
supabase-vector "/usr/local/bin/vect…" vector exited (78)
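Before retrying, the reason the vector container exited can be checked via its logs; a minimal sketch:
docker compose logs vector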
> For me, doing a sudo docker compose restart solved the issue.
But I realized that the vector container is still not running when checking docker ps after the restart. This is a real headache when someone is touching Supabase for the first time. I created several droplets to try different variants, and resized them, but eventually docker ps stops responding and the URL freezes. Sorry, it's not a good welcome. I'm going away to alternatives; best wishes to all.
> For me, doing a sudo docker compose restart solved the issue.
+1
I am facing the same issue for a self-hosted Supabase. Some more background info:
1) System: Ubuntu 22.04
2) Docker containers running in rootless mode
3) Using the default docker-compose and .env file
4) Image: 20230921-d657f29
When starting with docker compose up, startup fails:
[+] Running 13/13
✔ Network docker_default Created 0.2s
✔ Container supabase-imgproxy Created 0.9s
✔ Container supabase-vector Created 0.9s
✔ Container supabase-db Created 0.6s
✔ Container supabase-analytics Created 0.7s
✔ Container realtime-dev.supabase-realtime Created 0.8s
✔ Container supabase-rest Created 0.8s
✔ Container supabase-kong Created 0.8s
✔ Container supabase-meta Created 0.8s
✔ Container supabase-auth Created 0.8s
✔ Container supabase-studio Created 0.8s
✔ Container supabase-edge-functions Created 0.8s
✔ Container supabase-storage Created 0.6s
Attaching to realtime-dev.supabase-realtime, supabase-analytics, supabase-auth, supabase-db, supabase-edge-functions, supabase-imgproxy, supabase-kong, supabase-meta, supabase-rest, supabase-storage, supabase-studio, supabase-vector
supabase-imgproxy | WARNING [2023-10-17T16:01:58Z] No keys defined, so signature checking is disabled
supabase-imgproxy | WARNING [2023-10-17T16:01:58Z] No salts defined, so signature checking is disabled
supabase-imgproxy | WARNING [2023-10-17T16:01:58Z] Exposing root via IMGPROXY_LOCAL_FILESYSTEM_ROOT is unsafe
supabase-imgproxy | INFO [2023-10-17T16:01:58Z] Starting server at :5001
supabase-vector | 2023-10-17T16:01:59.059627Z INFO vector::app: Internal log rate limit configured. internal_log_rate_secs=10
supabase-vector | 2023-10-17T16:01:59.059834Z INFO vector::app: Log level is enabled. level="vector=info,codec=info,vrl=info,file_source=info,tower_limit=trace,rdkafka=info,buffers=info,lapin=info,kube=info"
supabase-vector | 2023-10-17T16:01:59.059907Z INFO vector::app: Loading configs. paths=["etc/vector/vector.yml"]
supabase-vector | 2023-10-17T16:01:59.102856Z WARN vector::config::loading: Transform "router._unmatched" has no consumers
supabase-vector | 2023-10-17T16:01:59.103118Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::sources::docker_logs: Capturing logs from now on. now=2023-10-17T16:01:59.103070781+00:00
supabase-vector | 2023-10-17T16:01:59.103191Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::sources::docker_logs: Listening to docker log events.
supabase-vector | 2023-10-17T16:01:59.242255Z INFO vector::topology::running: Running healthchecks.
supabase-vector | 2023-10-17T16:01:59.242316Z INFO vector::topology::builder: Healthcheck passed.
supabase-vector | 2023-10-17T16:01:59.242336Z INFO vector::topology::builder: Healthcheck passed.
supabase-vector | 2023-10-17T16:01:59.242347Z INFO vector::topology::builder: Healthcheck passed.
supabase-vector | 2023-10-17T16:01:59.242355Z INFO vector::topology::builder: Healthcheck passed.
supabase-vector | 2023-10-17T16:01:59.242362Z INFO vector::topology::builder: Healthcheck passed.
supabase-vector | 2023-10-17T16:01:59.242376Z INFO vector::topology::builder: Healthcheck passed.
supabase-vector | 2023-10-17T16:01:59.242384Z INFO vector::topology::builder: Healthcheck passed.
supabase-vector | 2023-10-17T16:01:59.242545Z INFO vector: Vector has started. debug="false" version="0.28.1" arch="x86_64" revision="ff15924 2023-03-06"
supabase-vector | 2023-10-17T16:01:59.242592Z ERROR source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::sources::docker_logs: Listing currently running containers failed. error=error trying to connect: Connection refused (os error 111)
supabase-vector | 2023-10-17T16:01:59.250513Z INFO vector::internal_events::api: API server running. address=0.0.0.0:9001 playground=http://0.0.0.0:9001/playground
supabase-vector | 2023-10-17T16:01:59.250532Z INFO vector::app: All sources have finished.
supabase-vector | 2023-10-17T16:01:59.250535Z INFO vector: Vector has stopped.
supabase-vector | 2023-10-17T16:01:59.250585Z INFO vector::topology::running: Shutting down... Waiting on running components. remaining_components="kong_err, db_logs, logflare_db, logflare_functions, router, logflare_kong, rest_logs, logflare_rest, auth_logs, storage_logs, logflare_auth, logflare_storage, realtime_logs, logflare_realtime, project_logs, kong_logs" time_remaining="59 seconds left"
dependency failed to start: container supabase-vector exited (0)
Despite this, supabase is running well except for logflare (which depends on supabase-vector, so this makes sense).
As for the others above, running docker compose restart seems to work, as there is no error shown:
[+] Restarting 12/12
✔ Container supabase-storage Started 2.1s
✔ Container supabase-studio Started 9.7s
✔ Container supabase-db Started 7.2s
✔ Container supabase-meta Started 4.0s
✔ Container supabase-analytics Started 10.0s
✔ Container realtime-dev.supabase-realtime Started 2.9s
✔ Container supabase-rest Started 2.4s
✔ Container supabase-edge-functions Started 6.0s
✔ Container supabase-kong Started 3.8s
✔ Container supabase-auth Started 4.7s
✔ Container supabase-imgproxy Started 11.3s
✔ Container supabase-vector Started 6.9s
But when actually running docker ps, it becomes obvious that supabase-vector is still not running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7da4155c4629 supabase/storage-api:v0.40.4 "docker-entrypoint.s…" 2 minutes ago Up 49 seconds (healthy) 5000/tcp supabase-storage
25c3560ecd8b supabase/gotrue:v2.99.0 "gotrue" 2 minutes ago Up 49 seconds (healthy) supabase-auth
ead938e4844d supabase/studio:20230921-d657f29 "docker-entrypoint.s…" 2 minutes ago Up 53 seconds (healthy) 3000/tcp supabase-studio
548a6ec46112 kong:2.8.1 "bash -c 'eval \"echo…" 2 minutes ago Up 58 seconds (healthy) 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 8001/tcp, 0.0.0.0:8443->8443/tcp, :::8443->8443/tcp, 8444/tcp supabase-kong
f16c60c00b24 supabase/postgres-meta:v0.68.0 "docker-entrypoint.s…" 2 minutes ago Up 58 seconds (healthy) 8080/tcp supabase-meta
2bc27c96d91d supabase/realtime:v2.10.1 "/usr/bin/tini -s -g…" 2 minutes ago Up 59 seconds (healthy) realtime-dev.supabase-realtime
bbd93453a9b3 supabase/edge-runtime:v1.18.1 "edge-runtime start …" 2 minutes ago Up 57 seconds supabase-edge-functions
9b6b80efe239 postgrest/postgrest:v11.2.0 "postgrest" 2 minutes ago Up About a minute 3000/tcp supabase-rest
4f2b4e4aabf9 supabase/logflare:1.4.0 "sh -c '`cat <<EOF >…" 2 minutes ago Up 52 seconds (healthy) 0.0.0.0:4000->4000/tcp, :::4000->4000/tcp supabase-analytics
5e2c81933f45 supabase/postgres:15.1.0.117 "docker-entrypoint.s…" 2 minutes ago Up 55 seconds (healthy) 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp supabase-db
fb5421fbe1ae darthsim/imgproxy:v3.8.0 "imgproxy" 3 minutes ago Up 51 seconds (healthy) 8080/tcp supabase-imgproxy
Does anybody have an idea where the problem is coming from?
I encountered the "dependency failed to start: container supabase-vector is unhealthy" error and discovered that the issue was related to incorrect volume path mappings in the docker-compose.yml file. Here's the step-by-step solution that worked for me; I hope it helps someone else:
Update the Volume Paths: In your docker-compose.yml, ensure that ALL the volume paths are mapped correctly to the actual paths on the host machine. Here is just one example; change:
volumes:
- ./volumes/logs/vector.yml:/etc/vector/vector.yml:ro
to:
volumes:
- /home/your_username/supabase/docker/volumes/logs/vector.yml:/etc/vector/vector.yml:ro
Create Directory Structure: Create the necessary directory structure referenced by these volumes. Make sure that all required files, such as vector.yml, kong.yml, etc., actually exist on your server. Simply running docker-compose.yml does not create these files for you; you are expected to have them in place before trying to deploy.
Clone the Repository (Optional): If you prefer not to create the directory and files manually, clone the Supabase repository to your local server with the following command:
git clone --depth 1 https://github.com/supabase/supabase
Deploy Using Portainer (Optional): With the correct volume mappings in your docker-compose.yml and the .env file you now have from cloning the repo (or from browsing it here on GitHub and saving the files), import the docker-compose.yml and .env files into Portainer and deploy the stack (if you are using Portainer like I am).
This approach resolved the unhealthy container issue for me, and I was able to deploy the stack successfully. I hope this helps anyone facing the same problem!
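As a quick pre-flight check along these lines, one might verify the bind-mounted files actually exist before deploying (a sketch; paths assume the default supabase/docker repo layout):
ls -l ./volumes/logs/vector.yml ./volumes/api/kong.yml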
Solution
Linux/macOS: sudo docker compose restart
Windows: run cmd as administrator (via right click)
I don't think it's a solution, but a hotfix.
Maybe this ticket should be reopened.
I have reverted the supabase GitHub repository back to its original state, which fixed the 'vectors' issue.
For those still facing the issue even after a docker compose restart, it's possible the DOCKER_SOCKET_LOCATION is incorrect for your OS (I'm on Fedora). You can confirm by running docker context inspect and looking at the Endpoints > docker > Host value:
[
{
"Name": "rootless",
"Metadata": {
"Description": "Rootless mode"
},
"Endpoints": {
"docker": {
"Host": "unix:///run/user/1000/docker.sock",
"SkipTLSVerify": false
}
},
"TLSMaterial": {},
"Storage": {
"MetadataPath": "/home/nbilal/.docker/contexts/meta/12b961af5feb3e9d39f93b2cefb9a1a944f18d02cca0cac2f04f5a982240605f",
"TLSPath": "/home/nbilal/.docker/contexts/tls/12b961af5feb3e9d39f93b2cefb9a1a944f18d02cca0cac2f04f5a982240605f"
}
}
]
For me that value was /run/user/1000/docker.sock, which is different from the /var/run/docker.sock value set for the DOCKER_SOCKET_LOCATION variable in .env.example. In other words, updating line 101 in my .env file fixed the issue:
OLD:
DOCKER_SOCKET_LOCATION=/var/run/docker.sock
NEW:
DOCKER_SOCKET_LOCATION=/run/user/1000/docker.sock
Then docker compose down followed by docker compose up -d resolved the issue for good!
I hope that helps!
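A sketch for pulling that value out directly, assuming the --format template matches the JSON structure shown above:
docker context inspect --format '{{ .Endpoints.docker.Host }}'
# prints e.g. unix:///run/user/1000/docker.sock; strip the unix:// prefix for .env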
Still having this issue; none of these work so far. The issue seems to be with Logflare: it wants an API key, but I don't know how to get it, and commenting it out doesn't seem to work either.
Workaround for me: I disabled analytics in the config.toml file.
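Assuming this refers to the Supabase CLI's config.toml, the toggle presumably looks like the following sketch (an assumption; check the config reference for your CLI version):
[analytics]
enabled = false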
Bug report
Describe the bug
Self-hosting fails to start when docker compose is used.
To Reproduce
Steps to reproduce the behavior, please provide code snippets or a repository:
Expected behavior
Self-hosting starts without problems.
Screenshots
System information
Additional context
The instance is new and uses the default configuration.