Closed edolix closed 1 week ago
Could it be around these lines? The error message from v1.7 is `context "kamal-my-app-native-remote-amd64" does not exist`, while in v1.6.0 it was `ERROR: no builder "kamal-smart-track-native-remote" found`.
Hmm, interesting. When I run `docker context inspect kamal-my-app-native-remote-amd64 --format '{{.Endpoints.docker.Host}}'` against a context which doesn't exist, I get this error message:
`context "kamal-my-app-native-remote-amd64": context not found: open <snip>/.docker/contexts/meta/ad3cdf99c2f765ec10c20f6e8d60aac5f39e063f514574d5863c922f20ec6216/meta.json: no such file or directory`
So I'd be interested to know why you are getting a different error message. In any case, let's update the matcher to include `does not exist`.
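A minimal sketch of such a matcher, assuming a single regex covers both variants seen here; the constant name and structure are illustrative, not Kamal's actual source:

```ruby
# Hedged sketch: one regex matching both "missing context" messages
# observed across docker CLI versions in this thread.
# MISSING_CONTEXT is a hypothetical name, not Kamal's real constant.
MISSING_CONTEXT = /context "[^"]+"(?: does not exist|: context not found)/

old_msg = 'context "kamal-my-app-native-remote-amd64": context not found: open ...'
new_msg = 'context "kamal-my-app-native-remote-amd64" does not exist'

puts MISSING_CONTEXT.match?(old_msg)  # true
puts MISSING_CONTEXT.match?(new_msg)  # true
```

Matching on the shared `context "<name>"` prefix keeps the check tolerant of further wording changes in the docker CLI.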
Looks like the `does not exist` error message is coming from the `cli.remove` call. It will run `docker context rm kamal-app-native-remote-amd64; docker buildx rm kamal-app-native-remote`, where `docker context rm` returns exactly `context "kamal-app-native-remote-amd64" does not exist`.
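A sketch of how that chained removal might be assembled, assuming the naming scheme visible in the logs; the method and keyword names are hypothetical, not Kamal's actual API:

```ruby
# Hypothetical helper mirroring the command chain seen in the logs:
# the context gets an "-<arch>" suffix, the buildx builder does not.
def remove_commands(service: "kamal-app", arch: "amd64")
  context = "#{service}-native-remote-#{arch}"
  builder = "#{service}-native-remote"
  ["docker context rm #{context}", "docker buildx rm #{builder}"].join("; ")
end

puts remove_commands
# => docker context rm kamal-app-native-remote-amd64; docker buildx rm kamal-app-native-remote
```

Since the two commands are joined with `;`, the `docker context rm` failure message is emitted even though `docker buildx rm` still runs afterwards.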
I don't understand why this WARN line before `cli.remove` didn't show up in the logs, though. Is its output overridden by the docker error message?
@edolix - the stacktrace is from line 38 of build.rb, so it looks like it is from the docker inspect command. I've released v1.7.2 with a fix for this - could you confirm if that's worked?
@djmb, using `v1.7.2` it works!
.....
INFO [2d3d5f41] Running docker context inspect kamal-my-app-native-remote-amd64 --format '{{.Endpoints.docker.Host}}' on localhost
DEBUG [2d3d5f41] Command: docker context inspect kamal-my-app-native-remote-amd64 --format '{{.Endpoints.docker.Host}}'
DEBUG [2d3d5f41]
DEBUG [2d3d5f41] context "kamal-my-app-native-remote-amd64" does not exist
WARN Missing compatible builder, so creating a new one first
DEBUG Using builder: native/remote
INFO [823f5d0a] Running docker context create kamal-my-app-native-remote-amd64 --description 'kamal-my-app-native-remote amd64 native host' --docker 'host=' ; docker buildx create --name kamal-my-app-native-remote kamal-my-app-native-remote-amd64 --platform linux/amd64 on localhost
DEBUG [823f5d0a] Command: docker context create kamal-my-app-native-remote-amd64 --description 'kamal-my-app-native-remote amd64 native host' --docker 'host=' ; docker buildx create --name kamal-my-app-native-remote kamal-my-app-native-remote-amd64 --platform linux/amd64
DEBUG [823f5d0a] kamal-my-app-native-remote-amd64
DEBUG [823f5d0a] Successfully created context "kamal-my-app-native-remote-amd64"
DEBUG [823f5d0a] kamal-my-app-native-remote
...
You're right about the stacktrace line, but somehow I can't reproduce the error message with the docker inspect command locally:
> Running "inspect"
docker context inspect kamal-foo-bar-not-exist --format '{{.Endpoints.docker.Host}}'
context "kamal-foo-bar-not-exist": context not found: open
> Running "rm"
docker context rm kamal-foo-bar-not-exist
context "kamal-foo-bar-not-exist" does not exist
The problem is solved, but I'm curious about the error message, so I'll dig a little bit more. Thanks for the fix and help! 🙏
I had the same issue, but didn't investigate further. I thought it might have something to do with the multiarch build (as this made it work again) and my newly set up server running Ubuntu 22.04, instead of doing builds on my M2.
Hi, thanks for this great package!
Today I tried to use the latest Kamal image and I got the error below. Looks like it's not able to inspect / create the docker context.
Logs from `kamal deploy -d production --verbose` using `ghcr.io/basecamp/kamal:v1.7.0`; it happens with both `v1.7.0` and `v1.7.1`. The same command using `v1.6.0` works. Logs from `kamal deploy -d production --verbose` using `ghcr.io/basecamp/kamal:v1.6.0`.
Context
I'm using the workaround described in https://github.com/basecamp/kamal/issues/809, so there's a dummy `config/deploy.yml` and a real `config/deploy.production.yml` with these settings:
deploy.production.yml
```yml
service: my-app
image: edolix/my-app

servers:
  web:
    hosts:
      - my-app.egallo.dev
    labels:
      traefik.http.services.my-app-web-production.loadbalancer.server.port: "8000"
      traefik.docker.network: private
      traefik.http.routers.smart_track.rule: Host(`my-app.egallo.dev`)
      traefik.http.routers.smart_track.entrypoints: websecure
      traefik.http.routers.smart_track.tls.certresolver: letsencrypt
      traefik.http.routers.smart_track_secure.entrypoints: websecure
      traefik.http.routers.smart_track_secure.rule: Host(`my-app.egallo.dev`)
      traefik.http.routers.smart_track_secure.tls: true
      traefik.http.routers.smart_track_secure.tls.certresolver: letsencrypt
    options:
      "add-host": host.docker.internal:host-gateway
      network: "private"

registry:
  server: ghcr.io
  username:
    - KAMAL_REGISTRY_USERNAME
  password:
    - KAMAL_REGISTRY_PASSWORD

# Inject ENV variables into containers (secrets come from .env).
# Remember to run `kamal env push` after making changes!
env:
  clear:
    HOSTNAME: my-app.egallo.dev
  secret:
    - removed_for_brevity

# Use a different ssh user than root
ssh:
  user: ubuntu

builder:
  remote:
    arch: amd64

healthcheck:
  path: /up
  port: 8000

accessories:
  db:
    image: postgres:16.0
    roles:
      - web
    env:
      secret:
        - POSTGRES_PASSWORD
    directories:
      - data:/var/lib/postgresql/data
    options:
      network: "private"

traefik:
  options:
    publish:
      - "443:443"
    volume:
      - "/letsencrypt/acme.json:/letsencrypt/acme.json"
    network: "private"
  args:
    accesslog: true
    accesslog.format: json
    log: true
    log.level: DEBUG
    entryPoints.web.address: ":80"
    entryPoints.websecure.address: ":443"
    entryPoints.web.http.redirections.entryPoint.to: websecure
    entryPoints.web.http.redirections.entryPoint.scheme: https
    entryPoints.web.http.redirections.entrypoint.permanent: true
    entrypoints.websecure.http.tls: true
    entrypoints.websecure.http.tls.domains[0].main: "my-app.egallo.dev"
    certificatesResolvers.letsencrypt.acme.email: "edo91.gallo@gmail.com"
    certificatesResolvers.letsencrypt.acme.storage: "/letsencrypt/acme.json"
    certificatesResolvers.letsencrypt.acme.httpchallenge: true
    certificatesResolvers.letsencrypt.acme.httpchallenge.entrypoint: web
```
Docker version
Still digging to understand where the issue could be, but this ticket might be helpful for others.
Thanks again!