That's great, though personally I'm not looking to self-host, just running Supabase for local development.
Local development consumes a lot of resources! I don't know why, but there seems to be an issue with the "authentication" part: it appears to be executing in a loop.
Don't forget to tell us your experience with it!
I think the OP meant that when running a local dev instance using `supabase start`, the `cli` can use `podman` to spin up the containers instead of `docker`.

The main reason for this is that, AFAIK, on many Linux systems (including mine) running `docker` requires sudo privileges, so any command that interacts with the containers (e.g. `supabase start`, `supabase db remote commit`, etc.) must be run with `sudo`. This can cause a variety of confusing errors, such as being unable to find credentials because the developer ran `supabase init` and `supabase login` as non-root. Also, entering the root password every five minutes is probably not a pleasant experience.
I recently discovered that the `cli` has the capability to use `podman`. It seems like under the hood, the `cli` uses `docker compose` to start & orchestrate the containers, so all we have to do is let `docker compose` know that we want to run our containers using `podman`. On Linux, the steps are similar to:

```sh
systemctl --user enable podman.socket
systemctl --user start podman.socket
systemctl --user status podman.socket
export DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock
```
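To sanity-check that the socket is actually answering Docker-compatible API calls before pointing the `cli` at it, you can hit the `_ping` endpoint (socket path assumed to match the export above):

```sh
# should print "OK" if podman's Docker-compatible API socket is up
curl -s --unix-socket /run/user/$(id -u)/podman/podman.sock http://localhost/_ping
```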
Now the only problem I have is that when I run `supabase start`, it gives me `Error: unable to upgrade to tcp, received 409`. This seems like a permission issue on the `podman` side.

Instructions for running `docker compose` using `podman` are from here: https://fedoramagazine.org/use-docker-compose-with-podman-to-orchestrate-containers-on-fedora/
I think the biggest issue with running many Docker containers on a local machine is the possibility of running out of memory. Podman is much more lightweight. Is there any other reason to use it instead of Docker?
I'm getting this error running `supabase start` with Podman:

```
Seeding data supabase/seed.sql...
Error: Error response from daemon: network name supabase_network_app already used: network already exists
```

Running with `--debug` doesn't provide any useful information. Does anyone have any idea?

After the error above, running `podman ps` shows no running containers, and running `podman network ls` only shows the default (?) network:

```
NETWORK ID    NAME    DRIVER
2f259bab93aa  podman  bridge
```
The network errors are coming from here. Specifically, `errdefs.IsConflict` doesn't actually return `true` when a network with the same name already exists with Podman, which raises some questions:

**Why does this presumably work with Docker but not Podman?**

I'm not entirely sure, but I would guess that `errdefs.IsConflict` incidentally returns `true` when using Docker but not Podman. The `errdefs` package does warn users:

> Packages should not reference these interfaces directly, only implement them.

**Why don't I see any networks with `podman network ls`?**

Again, I'm not entirely sure, but `DockerNetworkCreateIfNotExists` is run once before the error occurs, with the same network ID, and doesn't run into any issues. It's only when it's called the second time that it reports an error. My guess is there's some teardown code that deletes the network if it fails to start, but I didn't bother looking for that code.
**How can this be fixed?**

Quite easily, actually. Just replace `errdefs.IsConflict` with a more suitable alternative, such as `NetworkInspect`, in `internal/utils/docker.go`:
```diff
 func DockerNetworkCreateIfNotExists(ctx context.Context, networkId string) error {
+	existing, err := Docker.NetworkInspect(
+		ctx,
+		networkId,
+		types.NetworkInspectOptions{},
+	)
+
+	// if network already exists, abort
+	if existing.ID != "" && err == nil {
+		return nil
+	}
+
-	_, err := Docker.NetworkCreate(
+	_, err = Docker.NetworkCreate(
 		ctx,
 		networkId,
 		types.NetworkCreate{
 			CheckDuplicate: true,
 			Labels: map[string]string{
 				"com.supabase.cli.project":   Config.ProjectId,
 				"com.docker.compose.project": Config.ProjectId,
 			},
 		},
 	)
-	// if error is network already exists, no need to propagate to user
-	if errdefs.IsConflict(err) {
-		return nil
-	}
 	return err
 }
```
The following paragraph suggests that `errdefs.IsConflict` *is* the intended use:

> To check if a particular error implements one of these interfaces, there are helper functions provided (e.g. `Is<SomeError>`) which can be used rather than asserting the interfaces directly.
I suspect the problem is with podman not returning the same HTTP status code as the Docker daemon when there's a network name conflict. Do you mind checking with podman upstream if this is indeed the case? If so, would they be accepting PRs to address it?
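One quick way to check this locally, assuming the rootless socket path used earlier in this thread and a throwaway network name, is to create the same network twice through the Docker-compatible API and compare status codes (Docker reports the duplicate as `409 Conflict`, which is what `errdefs.IsConflict` keys on):

```sh
SOCK=/run/user/$(id -u)/podman/podman.sock
BODY='{"Name":"conflict_test","CheckDuplicate":true}'

# first create should return 201; the duplicate is 409 on Docker,
# reportedly something else (e.g. 500) on older podman
for i in 1 2; do
  curl -s -o /dev/null -w '%{http_code}\n' --unix-socket "$SOCK" \
    -H 'Content-Type: application/json' -d "$BODY" \
    http://localhost/networks/create
done

# clean up the test network
curl -s -X DELETE --unix-socket "$SOCK" http://localhost/networks/conflict_test
```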
I've released a fix for the podman network issue. It's available on the beta release channel: `npx supabase@beta start`
Let me know if there are other incompatibilities with podman that I can help iron out.
I tried `npx supabase@beta start` but got this error:

```
node:internal/process/promises:288
          triggerUncaughtException(err, true /* fromPromise */);
          ^

Error: getaddrinfo EAI_AGAIN supabase_db_website
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:107:26) {
  errno: -3001,
  code: 'EAI_AGAIN',
  syscall: 'getaddrinfo',
  hostname: 'supabase_db_website'
}

Node.js v18.16.0
error running container: exit 1
Try rerunning the command with --debug to troubleshoot the error.
```
I'm trying to run the Supabase CLI on Replit, which uses Nix and doesn't give the `sudo` access required to run Docker, so I'd also like Podman support.
@sweatybridge It looks like your PR went out in the 1.71.1 release. The main Nix channel (`23.05`) only seems to have 1.62.3, but `unstable` has 1.75.6, so I will try that.
@Nezteb sure, let me know how it goes. The nix package is maintained by the community but we are more than happy to help.
For my particular use case I don't think I will be able to use Podman, because I'm in an unprivileged Replit container. I will have to try running the Supabase CLI with Podman on my host laptop to confirm that your PR fixed things, but that will take me a few days.
> I tried `npx supabase@beta start` but got this error:
>
> ```
> Error: getaddrinfo EAI_AGAIN supabase_db_website
> ```
I got this error too when calling `DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock supabase start`, using Podman on Arch. The `--debug` flag doesn't show any more messages.

EDIT: fixed the DNS error by installing `netavark` and `aardvark-dns`.
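For anyone else on Arch hitting the same DNS error, both packages are in the official repositories (package names assumed from the Arch repos):

```sh
# install the netavark network backend and its DNS resolver
sudo pacman -S netavark aardvark-dns
```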
@sweatybridge I am still experiencing the same issue with the beta release.

Details:

```
Need to install the following packages:
supabase@1.99.6
Ok to proceed? (y) y

Supabase CLI 1.99.6
Error response from daemon: error configuring network namespace for container 8f754093eeeea5228d757190ad56f343a9de3864746277dab1cbe97672550378: CNI network "supabase_network_MyProj" not found
```
I then try to create the network manually:

```sh
podman network create --label com.supabase.cli.project=MyProj --label com.docker.compose.project=MyProj supabase_network_MyProj
```

Then trying to run again:

```
Supabase CLI 1.99.6
Error response from daemon: the network name supabase_network_MyProj is already used
```

I am on Ubuntu with podman 3.4.4, which is a bit older at this point... so it is possible the podman version is a variable as well.
@addisonj I'm experiencing the same thing with 3.4.4. It seems it's been patched in recent versions. Depending on your setup, you could build it from source or install from Kubic's repo. The instructions for both are on podman's install page.
There were a few more errors after this. Something like the CNI bridge not working (sorry, I did not save the error) after upgrading to podman 4. Fixed by installing `containernetworking-plugins`.

Then there was `database is not healthy` when running `supabase start` (a more descriptive error message would be very helpful). `docker logs -f supabase_db_project` showed this was caused by `pgsodium_root.key: Permission denied`. I'm hoping there's another way to fix this, but for now supabase starts with podman.service running as root.
> pgsodium_root.key: Permission denied

The permission error could be due to named volumes being mounted from host to container. If you know the path to the podman volume directory, try giving the `postgres` user read/write permission on it.
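A sketch of that, assuming rootless podman and a volume name along the lines of the ones in this thread (check `podman volume ls` for the real name; the `999` uid is a guess for the container's postgres user):

```sh
# find the host path backing the db volume
podman volume inspect supabase_db_project --format '{{ .Mountpoint }}'

# rootless podman maps container uids into a user namespace, so chown
# from inside that namespace rather than directly on the host
podman unshare chown -R 999:999 "$(podman volume inspect supabase_db_project --format '{{ .Mountpoint }}')"
```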
I'm not even getting any errors when I attempt to start the containers. I just get:

```
service not healthy: [realtime-dev.supabase_realtime_sylvester supabase_pg_meta_sylvester supabase_studio_sylvester]
```

Debug logging shows nothing useful beyond the standard output. The container logs for those three services are dumped, but they don't really show anything either.

realtime-dev.supabase_realtime_sylvester container logs:

```
02:55:23.311 [info] == Running 20210706140551 Realtime.Repo.Migrations.CreateTenants.change/0 forward
02:55:23.315 [info] create table tenants
02:55:23.322 [info] create index tenants_external_id_index
02:55:23.328 [info] == Migrated 20210706140551 in 0.0s
02:55:23.390 [info] == Running 20220329161857 Realtime.Repo.Migrations.AddExtensionsTable.change/0 forward
02:55:23.390 [info] create table extensions
02:55:23.396 [info] create index extensions_tenant_external_id_type_index
02:55:23.401 [info] == Migrated 20220329161857 in 0.0s
02:55:23.408 [info] == Running 20220410212326 Realtime.Repo.Migrations.AddTenantMaxEps.up/0 forward
02:55:23.408 [info] alter table tenants
02:55:23.411 [info] == Migrated 20220410212326 in 0.0s
02:55:23.414 [info] == Running 20220506102948 Realtime.Repo.Migrations.RenamePollIntervalToPollIntervalMs.up/0 forward
02:55:23.420 [warning] Replica region not found, defaulting to Realtime.Repo
02:55:23.456 [debug] QUERY OK source="extensions" db=0.3ms
SELECT e0."id", e0."type", e0."settings", e0."tenant_external_id", e0."inserted_at", e0."updated_at" FROM "extensions" AS e0 WHERE (e0."type" = $1) ["postgres_cdc_rls"]
02:55:23.456 [info] == Migrated 20220506102948 in 0.0s
02:55:23.462 [info] == Running 20220527210857 Realtime.Repo.Migrations.AddExternalIdUniqIndex.change/0 forward
02:55:23.462 [info] execute "alter table tenants add constraint uniq_external_id unique (external_id)"
02:55:23.465 [info] == Migrated 20220527210857 in 0.0s
02:55:23.468 [info] == Running 20220815211129 Realtime.Repo.Migrations.NewMaxEventsPerSecondDefault.change/0 forward
02:55:23.469 [info] alter table tenants
02:55:23.471 [info] == Migrated 20220815211129 in 0.0s
02:55:23.475 [info] == Running 20220815215024 Realtime.Repo.Migrations.SetCurrentMaxEventsPerSecond.change/0 forward
02:55:23.475 [info] execute "update tenants set max_events_per_second = 1000"
02:55:23.481 [info] == Migrated 20220815215024 in 0.0s
02:55:23.490 [info] == Running 20220818141501 Realtime.Repo.Migrations.ChangeLimitsDefaults.change/0 forward
02:55:23.491 [info] alter table tenants
02:55:23.492 [info] == Migrated 20220818141501 in 0.0s
02:55:23.498 [info] == Running 20221018173709 Realtime.Repo.Migrations.AddCdcDefault.up/0 forward
02:55:23.498 [info] alter table tenants
02:55:23.499 [info] == Migrated 20221018173709 in 0.0s
02:55:23.502 [info] == Running 20221102172703 Realtime.Repo.Migrations.RenamePgType.up/0 forward
02:55:23.502 [info] execute "update extensions set type = 'postgres_cdc_rls'"
02:55:23.503 [info] == Migrated 20221102172703 in 0.0s
02:55:23.506 [info] == Running 20221223010058 Realtime.Repo.Migrations.DropTenantsUniqExternalIdIndex.change/0 forward
02:55:23.506 [info] execute "ALTER TABLE IF EXISTS tenants DROP CONSTRAINT IF EXISTS uniq_external_id"
02:55:23.508 [info] == Migrated 20221223010058 in 0.0s
02:55:23.513 [info] == Running 20230110180046 Realtime.Repo.Migrations.AddLimitsFieldsToTenants.change/0 forward
02:55:23.513 [info] alter table tenants
02:55:23.514 [info] == Migrated 20230110180046 in 0.0s
02:55:23.518 [info] == Running 20230810220907 Realtime.Repo.Migrations.AlterTenantsTableColumnsToText.change/0 forward
02:55:23.518 [info] alter table tenants
02:55:23.522 [info] == Migrated 20230810220907 in 0.0s
02:55:23.526 [info] == Running 20230810220924 Realtime.Repo.Migrations.AlterExtensionsTableColumnsToText.change/0 forward
02:55:23.526 [info] alter table extensions
02:55:23.529 [info] == Migrated 20230810220924 in 0.0s
02:55:23.532 [info] == Running 20231024094642 :"Elixir.Realtime.Repo.Migrations.Add-tenant-suspend-flag".change/0 forward
02:55:23.532 [info] alter table tenants
02:55:23.533 [info] == Migrated 20231024094642 in 0.0s
02:55:24.360 [debug] QUERY OK db=1.8ms queue=118.1ms idle=0.0ms
begin []
02:55:24.384 [debug] QUERY OK source="tenants" db=0.4ms
SELECT t0."id", t0."name", t0."external_id", t0."jwt_secret", t0."postgres_cdc_default", t0."max_concurrent_users", t0."max_events_per_second", t0."max_bytes_per_second", t0."max_channels_per_client", t0."max_joins_per_second", t0."suspend", t0."inserted_at", t0."updated_at" FROM "tenants" AS t0 WHERE (t0."external_id" = $1) ["realtime-dev"]
02:55:24.439 [debug] QUERY OK db=1.7ms
INSERT INTO "tenants" ("external_id","jwt_secret","max_bytes_per_second","max_channels_per_client","max_concurrent_users","max_events_per_second","max_joins_per_second","name","suspend","inserted_at","updated_at","id") VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12) ["realtime-dev", "iNjicxc4+llvc9wovDvqymwfnj9teWMlyOIbJ8Fh6j2WNU8CIJ2ZgjR6MUIKqSmeDmvpsKLsZ9jgXJmQPpwL8w==", 100000, 100, 200, 100, 100, "realtime-dev", false, ~N[2023-11-27 02:55:24], ~N[2023-11-27 02:55:24], <<165, 86, 32, 78, 29, 51, 75, 35, 174, 167, 248, 212, 42, 119, 216, 160>>]
02:55:24.447 [debug] QUERY OK db=5.7ms
INSERT INTO "extensions" ("settings","tenant_external_id","type","inserted_at","updated_at","id") VALUES ($1,$2,$3,$4,$5,$6) [%{"db_host" => "f23Hm+RKKCxIK6ehAjN45GlQ0FDUt0uPXpwEKlJrfrg=", "db_name" => "sWBpZNdjggEPTQVlI52Zfw==", "db_password" => "sWBpZNdjggEPTQVlI52Zfw==", "db_port" => "+enMDFi1J/3IrrquHHwUmA==", "db_user" => "uxbEq/zz8DXVD53TOI1zmw==", "ip_version" => 4, "poll_interval_ms" => 100, "poll_max_changes" => 100, "poll_max_record_bytes" => 1048576, "publication" => "supabase_realtime", "region" => "us-east-1", "slot_name" => "supabase_realtime_replication_slot", "ssl_enforced" => false}, "realtime-dev", "postgres_cdc_rls", ~N[2023-11-27 02:55:24], ~N[2023-11-27 02:55:24], <<16, 110, 81, 172, 26, 192, 71, 186, 129, 66, 231, 218, 92, 169, 236, 206>>]
02:55:24.457 [debug] QUERY OK db=10.4ms
commit []
02:55:25.607 [info] Elixir.Realtime.SignalHandler is being initialized...
02:55:25.607 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.Realtime.Tenants.Connect>
02:55:25.607 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.Realtime.Tenants.Connect>
02:55:25.608 [notice] SYN[realtime@127.0.0.1|registry<Elixir.Realtime.Tenants.Connect>] Discovering the cluster
02:55:25.608 [notice] SYN[realtime@127.0.0.1|pg<Elixir.Realtime.Tenants.Connect>] Discovering the cluster
02:55:25.608 [notice] SYN[realtime@127.0.0.1] Adding node to scope <users>
02:55:25.608 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <users>
02:55:25.608 [notice] SYN[realtime@127.0.0.1|registry<users>] Discovering the cluster
02:55:25.608 [notice] SYN[realtime@127.0.0.1|pg<users>] Discovering the cluster
02:55:25.608 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.RegionNodes>
02:55:25.608 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.RegionNodes>
02:55:25.608 [notice] SYN[realtime@127.0.0.1|registry<Elixir.RegionNodes>] Discovering the cluster
02:55:25.608 [notice] SYN[realtime@127.0.0.1|pg<Elixir.RegionNodes>] Discovering the cluster
02:55:25.608 [warning] Replica region not found, defaulting to Realtime.Repo
02:55:28.607 [debug] Tzdata polling for update.
02:55:28.813 [info] tzdata release in place is from a file last modified Fri, 22 Oct 2021 02:20:47 GMT. Release file on server was last modified Tue, 28 Mar 2023 20:25:39 GMT.
02:55:28.813 [debug] Tzdata downloading new data from https://data.iana.org/time-zones/tzdata-latest.tar.gz
02:55:28.907 [debug] Tzdata data downloaded. Release version 2023c.
02:55:29.447 [info] Tzdata has updated the release from 2021e to 2023c
02:55:29.447 [debug] Tzdata deleting ETS table for version 2021e
02:55:29.449 [debug] Tzdata deleting ETS table file for version 2021e
02:55:34.980 [info] Running RealtimeWeb.Endpoint with cowboy 2.10.0 at :::4000 (http)
02:55:34.988 [info] Access RealtimeWeb.Endpoint at http://realtime.fly.dev
02:55:34.989 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.PostgresCdcStream>
02:55:34.989 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.PostgresCdcStream>
02:55:34.989 [notice] SYN[realtime@127.0.0.1|registry<Elixir.PostgresCdcStream>] Discovering the cluster
02:55:34.989 [notice] SYN[realtime@127.0.0.1|pg<Elixir.PostgresCdcStream>] Discovering the cluster
02:55:34.990 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.Extensions.PostgresCdcRls>
02:55:34.990 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.Extensions.PostgresCdcRls>
02:55:34.990 [notice] SYN[realtime@127.0.0.1|registry<Elixir.Extensions.PostgresCdcRls>] Discovering the cluster
02:55:34.990 [notice] SYN[realtime@127.0.0.1|pg<Elixir.Extensions.PostgresCdcRls>] Discovering the cluster
```
supabase_pg_meta_sylvester container logs:

```
> @supabase/postgres-meta@0.0.0-automated start
> node dist/server/server.js
(node:21) ExperimentalWarning: Importing JSON modules is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
{"level":"info","time":"2023-11-27T02:55:24.680Z","pid":21,"hostname":"3a9950ac0e89","msg":"Server listening at http://0.0.0.0:8080"}
{"level":"info","time":"2023-11-27T02:55:24.688Z","pid":21,"hostname":"3a9950ac0e89","msg":"Server listening at http://0.0.0.0:8081"}
supabase_studio_sylvester container logs:

```
▲ Next.js 13.5.3
- Local: http://localhost:3000
- Network: http://0.0.0.0:3000

✓ Ready in 523ms
```
This is with Podman 4 and after installing `containernetworking-plugins`.

Like other users above, after the "service not healthy" message, `podman ps` shows no running Supabase containers and `podman network ls` only shows the default `bridge` network.
Adding my notes to the conversation from trying to get `supabase` working with podman on macOS 14...

First, I had to remove the `host.docker.internal:host-gateway` option from the `supabase` CLI, because podman doesn't support this option. Instead they've opted to automatically add a `host.docker.internal` entry to every container's `/etc/hosts` by default. See https://github.com/containers/podman/issues/10878.

```diff
diff --git a/internal/db/start/start.go b/internal/db/start/start.go
index dd7558f..ced03d5 100644
--- a/internal/db/start/start.go
+++ b/internal/db/start/start.go
@@ -95,7 +95,6 @@ func NewHostConfig() container.HostConfig {
utils.DbId + ":/var/lib/postgresql/data",
utils.ConfigId + ":/etc/postgresql-custom",
},
- ExtraHosts: []string{"host.docker.internal:host-gateway"},
})
return hostConfig
}
diff --git a/internal/functions/serve/serve.go b/internal/functions/serve/serve.go
index dc8fb66..66199ab 100644
--- a/internal/functions/serve/serve.go
+++ b/internal/functions/serve/serve.go
@@ -163,7 +163,6 @@ EOF
},
start.WithSyslogConfig(container.HostConfig{
Binds: binds,
- ExtraHosts: []string{"host.docker.internal:host-gateway"},
}),
network.NetworkingConfig{
EndpointsConfig: map[string]*network.EndpointSettings{
```
The next thing I hit was `Error response from daemon: lsetxattr /Users/wryfi/src/github.com/wryfi/supaflut/supabase/functions: operation not supported`. It appears that one of the containers is trying to set an extended attribute (which macOS doesn't support, but podman doesn't prevent) on a volume-mounted folder. But I can't readily tell which container is responsible, and I hit my time box for investigating further. Hope to see the `supabase` CLI fully functioning with podman soon!
Also: for anyone just looking for a Docker Desktop alternative, colima is working for me with supabase, using the docker runtime that's available in MacPorts/Homebrew.
@wryfi Using your diff, on Linux I was able to start the local containers using `supabase start --ignore-health-check`, even though `podman ps` shows that realtime is unhappy:
```
> podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f4fdb0f28ac public.ecr.aws/supabase/postgres:15.1.0.117 postgres -c confi... About a minute ago Up About a minute (healthy) 0.0.0.0:54322->5432/tcp supabase_db_buoj
fd3d49376f2f public.ecr.aws/supabase/kong:2.8.1 About a minute ago Up About a minute (healthy) 0.0.0.0:54321->8000/tcp supabase_kong_buoj
414830bb3ffc public.ecr.aws/supabase/gotrue:v2.99.0 gotrue About a minute ago Up About a minute (healthy) supabase_auth_buoj
4d5a45a185b8 public.ecr.aws/supabase/inbucket:3.0.3 -logjson About a minute ago Up About a minute (healthy) 0.0.0.0:54324->9000/tcp supabase_inbucket_buoj
2d059c4c9cbc public.ecr.aws/supabase/realtime:v2.25.35 /bin/sh -c /app/b... About a minute ago Up About a minute (unhealthy) realtime-dev.supabase_realtime_buoj
c2389193d94b public.ecr.aws/supabase/postgrest:v11.2.2 /bin/postgrest About a minute ago Up About a minute supabase_rest_buoj
b387e3033dd3 public.ecr.aws/supabase/storage-api:v0.43.11 node dist/server.... About a minute ago Up About a minute (healthy) supabase_storage_buoj
14c75c57f04d public.ecr.aws/supabase/imgproxy:v3.8.0 imgproxy About a minute ago Up About a minute (healthy) storage_imgproxy_buoj
8b3f6dac3aa2 public.ecr.aws/supabase/edge-runtime:v1.23.0 About a minute ago Up About a minute supabase_edge_runtime_buoj
f347e5c9df83 public.ecr.aws/supabase/postgres-meta:v0.75.0 npm run start About a minute ago Up About a minute (unhealthy) supabase_pg_meta_buoj
848ac49c7b0e public.ecr.aws/supabase/studio:20231123-64a766a node apps/studio/... About a minute ago Up About a minute (unhealthy) 0.0.0.0:54323->3000/tcp supabase_studio_buoj
```
I wrote some quick tests for realtime, and realtime is still working. I inspected the logs using `podman logs realtime-dev.supabase_realtime_buoj` and I am not seeing anything suspicious.

I think there may be some problems with the healthchecks. Will report back when I encounter any issues. My `podman` installation has `netavark`, `aardvark-dns`, and `cni-plugins` installed as well; not sure if that would make a difference.
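If it helps narrow things down, podman can report which network backend it is actually using (the field name is from `podman info`; older versions may not expose it):

```sh
# prints "netavark" or "cni"
podman info --format '{{ .Host.NetworkBackend }}'
```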
Also @wryfi, your issue with `lsetxattr` looks like containers/podman#13631. You may want to check the relevant commits at the bottom of that issue to see if your version of `podman` contains the fix.
As of today, I was still unable to run the supabase env on my M2 Mac. What I did:

```sh
# install podman, initialize the machine
sudo podman-mac-helper install
podman machine stop
podman machine set --rootful
podman machine start
export DOCKER_HOST="unix:///var/run/docker.sock"
supabase start
```

but I end up with (and gave up):

```
failed to start docker container: Error response from daemon: failed to create new hosts file: unable to replace "host-gateway" of host entry "host.docker.internal:host-gateway": host containers internal IP address is empty
```
It looks to be working on my Linux machine with rootless podman running as my user, but only when I ignore the health checks. When I start with health checks, it starts, I can browse around in the web UI, and then it disappears. Maybe the healthchecks are incorrectly being reported as not healthy, and then the command decides to shut down the containers, and ignoring the health checks makes it work?
> Maybe the healthchecks are incorrectly being reported as not healthy, and then the command decides to shut down the containers, and ignoring the health checks makes it work?

That is possible. Are there any logs you can share from when start fails due to the health check?
The podman health check issues have been fixed by https://github.com/supabase/cli/pull/2359. You can try it with the cli beta release:

```sh
npx supabase@beta start
```

I will try to address the extra host issue before the stable release next week.
I have addressed the other podman compatibility issues mentioned in https://github.com/supabase/cli/issues/265#issuecomment-1832282812.
Please give the beta release a spin and let me know if anything is still broken.
@sweatybridge Thanks, it's working for me, at least on Mac, but it might need some work. The health check fails for pg_meta and studio, but this can be bypassed with `--ignore-health-check`, so it's not critical. Also, studio seems to be running just fine, so no idea why its health check is failing. And analytics has to be disabled or it won't start.

Regardless, thanks for getting this working.
For the pg_meta health check, could you show the output of `podman inspect --format '{{json .Config.Healthcheck}}' supabase_pg_meta_<id> | jq`?

Analytics will require https://github.com/supabase/cli/pull/2061 to be merged.
@sweatybridge Hello!

Thank you for the patches :) Unfortunately, I am still getting one of the errors above with `npx supabase@1.178.2 start`:

```
Stopping containers...
failed to start docker container: Error response from daemon: failed to create new hosts file: unable to replace "host-gateway" of host entry "host.docker.internal:host-gateway": host containers internal IP address is empty
Try rerunning the command with --debug to troubleshoot the error.
```

Is there anything I should adjust in my configuration?

EDIT: I am in a Linux environment :penguin:
@Hoolean which version of podman are you using? Based on the upstream issue https://github.com/containers/podman/issues/14390#issuecomment-1693194203, it should be fixed in v4.7 and above.
@sweatybridge Thanks for the speedy reply :)
Podman appears to be up to date:

```
$ podman --version
podman version 5.1.1
```

An apology though: I may have had an environment issue yesterday anyway, as now the error message is different:

```
$ npx supabase@1.178.2 start
Stopping containers...
failed to start docker container: Error response from daemon: setting up Pasta: pasta failed with exit code 1:
Couldn't get any nameserver address
Failed to open() /dev/net/tun: No such device
Failed to set up tap device in namespace
Try rerunning the command with --debug to troubleshoot the error.
```
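In case it helps: pasta needs `/dev/net/tun`, so the `No such device` line usually means the tun kernel module isn't loaded (or isn't available in a restricted environment). On a normal Linux host, something like this is worth a try (a guess, not a confirmed fix for this setup):

```sh
# load the tun module and confirm the device node exists
sudo modprobe tun
ls -l /dev/net/tun
```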
The problem still occurs on my M-series Mac with podman v5.
Completely fresh installation of Podman and the Supabase CLI; everything runs except for the health checks. Starting with `--ignore-health-check` allows me to use everything. Two containers remain unhealthy, namely:

```
supabase_pg_meta_supaflags container is not ready: unhealthy
supabase_studio_supaflags container is not ready: unhealthy
```
Studio logs:

```
▲ Next.js 14.2.3
- Local: http://localhost:3000
- Network: http://0.0.0.0:3000

✓ Starting...
✓ Ready in 330ms
```
pg_meta logs:

```
(node:1) ExperimentalWarning: Importing JSON modules is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
{"level":"info","time":"2024-08-21T12:43:05.758Z","pid":1,"hostname":"11787cc5ed12","msg":"Server listening at http://0.0.0.0:8080"}
{"level":"info","time":"2024-08-21T12:43:05.763Z","pid":1,"hostname":"11787cc5ed12","msg":"Server listening at http://0.0.0.0:8081"}
```
I didn't hit the network issue or the hosts issue, but I got an error about mounting the docker sock.

What I have:

```sh
export DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock
supabase start
```

What I got:

```
WARNING: analytics requires mounting default docker socket: /var/run/docker.sock
Stopping containers...
failed to create docker container: Error response from daemon: make cli opts(): making volume mountpoint for volume /var/run/docker.sock: mkdir /var/run/docker.sock: permission denied
Try rerunning the command with --debug to troubleshoot the error.
```

`supabase start` did pull the docker images and run a few containers, so `DOCKER_HOST` and podman are working.
I resolved the issue by removing `/var/run/docker.sock` and creating a new symbolic link to the socket at `/run/user/$UID/podman/podman.sock`. However, I consider this a hack, so I'll wait for better solutions and suggestions. Thank you.

```sh
sudo rm /var/run/docker.sock
sudo ln -s /run/user/$(id -u)/podman/podman.sock /var/run/docker.sock
supabase start --ignore-health-check
```
@sweatybridge please re-check this issue.
Feature request

It would be really nice to be able to use Podman instead of Docker. Podman is more lightweight, and from my understanding it can run containers rootless.