containrrr / watchtower

A process for automating Docker container base image updates.
https://containrrr.dev/watchtower/
Apache License 2.0

[Feature Request] Support for Podman #1060

Open R8s6 opened 3 years ago

R8s6 commented 3 years ago

Is your feature request related to a problem? Please describe.
Currently watchtower requires /var/run/docker.sock, which is not present on a system with only podman installed (i.e. where docker is not installed).

Describe the solution you'd like
Potentially supporting podman in the future.

Thanks!

simskij commented 3 years ago

Looks like podman has a docker-compatible REST API, so it should work out of the box as far as I can tell. I haven't tried it myself, though: https://podman.io/blogs/2020/07/01/rest-versioning.html

piksel commented 3 years ago

If the podman service is running, this should work:

podman run -v /var/run/podman/podman.sock:/var/run/docker.sock docker.io/containrrr/watchtower 
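
For reference, the API socket used above is typically provided by podman's systemd socket unit. A minimal sketch, assuming a systemd host with rootful podman:

```sh
# enable the podman API socket at /run/podman/podman.sock (rootful)
sudo systemctl enable --now podman.socket
# or run the API service in the foreground with no idle timeout
sudo podman system service --time=0 unix:///run/podman/podman.sock
```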
piksel commented 3 years ago

Update; it sort of worked when I tried it:

2021-08-31T18:00:53Z [D] Doing a HEAD request to fetch a digest
                         url: https://index.docker.io/v2/containrrr/watchtower/manifests/latest
2021-08-31T18:00:53Z [D] Found a remote digest to compare with
                         remote: sha256:3283e0b5be326d77ff4f4e8b7a91d46aaa1d511c74877b5a32f161548812d00c
2021-08-31T18:00:53Z [D] Comparing
                         local: sha256:3283e0b5be326d77ff4f4e8b7a91d46aaa1d511c74877b5a32f161548812d00c
                         remote: sha256:3283e0b5be326d77ff4f4e8b7a91d46aaa1d511c74877b5a32f161548812d00c
2021-08-31T18:00:53Z [D] Found a match
2021-08-31T18:00:53Z [D] No pull needed. Skipping image.
2021-08-31T18:00:53Z [I] Found new docker.io/containrrr/watchtower:latest image (9167b324e914)
2021-08-31T18:00:53Z [D] This is the watchtower container /sweet_greider
2021-08-31T18:00:53Z [D] Renaming container /sweet_greider (4c4ec40ff1a6) to PPctWJctHvcNrpXpLaCSQaSdAmRtxEdN
2021-08-31T18:00:53Z [I] Creating /sweet_greider
2021-08-31T18:00:53Z [E] Error response from daemon: fill out specgen: ulimit option "RLIMIT_NOFILE=1048576:1048576" requires name=SOFT:HARD, failed to be parsed: invalid ulimit type: RLIMIT_NOFILE
2021-08-31T18:00:53Z [D] Session done: 1 scanned, 0 updated, 1 failed

Not really sure what is going on here. It checks the watchtower image and concludes "No pull needed. Skipping image.", but then goes ahead and updates it anyway? Then it fails when trying to rename itself. It might also just be that my podman installation is broken... perhaps someone with a known working setup can try it and share their experience?

piksel commented 3 years ago

The cause of the error above is: https://github.com/containers/podman/issues/9803 (Finding 4). Additionally, the "name" is not accepted by podman, since it has a leading slash:

INFO[0003] Creating /nginx-test
ERRO[0003] Error response from daemon: container create: error running container create option: names must match [a-zA-Z0-9][a-zA-Z0-9_.-]*: invalid argument

By patching these two in the new config I was able to successfully recreate a podman container:

// pkg/container/client.go:214
hostConfig.Ulimits = nil // drop the RLIMIT_* ulimits that podman fails to parse
name = name[1:]          // strip the leading slash that podman rejects in container names
Log output from successful run:

```
nils@xiwangmu:~/src/watchtower $ go build && sudo ./watchtower --trace --run-once --host unix:///var/run/podman/podman.sock
DEBU[0000]
DEBU[0000] Sleeping for a second to ensure the docker api client has been properly initialized.
DEBU[0001] Making sure everything is sane before starting
INFO[0001] Watchtower v0.0.0-unknown
Using no notifications
Checking all containers (except explicitly disabled with label)
Running a one time update.
WARN[0001] trace level enabled: log will include sensitive information as credentials and tokens
DEBU[0001] Checking containers for updated images
DEBU[0001] Retrieving running containers
DEBU[0001] Trying to load authentication credentials. container=/nginx-test image="docker.io/library/nginx:latest"
DEBU[0001] No credentials for docker.io found config_file=/config.json
DEBU[0001] Got image name: docker.io/library/nginx:latest
DEBU[0001] Checking if pull is needed container=/nginx-test image="docker.io/library/nginx:latest"
DEBU[0001] Building challenge URL URL="https://index.docker.io/v2/"
DEBU[0001] Got response to challenge request header="Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\"" status="401 Unauthorized"
DEBU[0001] Checking challenge header content realm="https://auth.docker.io/token" service=registry.docker.io
DEBU[0001] Setting scope for auth token image=docker.io/library/nginx scope="repository:library/nginx:pull"
DEBU[0001] No credentials found.
DEBU[0002] Parsing image ref host=index.docker.io image=docker.io/library/nginx normalized="docker.io/library/nginx:latest" tag=latest
TRAC[0002] Setting request token
DEBU[0002] Doing a HEAD request to fetch a digest url="https://index.docker.io/v2/library/nginx/manifests/latest"
DEBU[0002] Found a remote digest to compare with remote="sha256:4d4d96ac750af48c6a551d757c1cbfc071692309b491b70b2b8976e102dd3fef"
DEBU[0002] Comparing local="sha256:4d4d96ac750af48c6a551d757c1cbfc071692309b491b70b2b8976e102dd3fef" remote="sha256:4d4d96ac750af48c6a551d757c1cbfc071692309b491b70b2b8976e102dd3fef"
DEBU[0002] Found a match
DEBU[0002] No pull needed. Skipping image.
INFO[0002] Found new docker.io/library/nginx:latest image (a25dfb1cd178)
INFO[0002] Stopping /nginx-test (5008385716af) with SIGTERM
DEBU[0011] Removing container 5008385716af
INFO[0011] Creating nginx-test
DEBU[0011] Starting container /nginx-test (4a74f0351cfb)
DEBU[0011] Session done: 1 scanned, 0 updated, 0 failed
Waiting for the notification goroutine to finish
```

This still leaves the issue of it always trying to update the containers though:

DEBU[0002] No pull needed. Skipping image.
INFO[0002] Found new docker.io/library/nginx:latest image (a25dfb1cd178)
piksel commented 3 years ago

Found the cause of the image always being treated as "stale":

nils@xiwangmu:~ $ sudo curl -s --unix-socket /run/podman/podman.sock 'http://d/v3.0.0/containers/nginx-test/json' | jq -C '.Image'
"docker.io/library/nginx:latest"

Podman doesn't return the image ID in the container inspect result, giving the image "name" instead. This could be solved by using the podman-specific API endpoint:

sudo curl -s --unix-socket /run/podman/podman.sock 'http://d/v3.0.0/libpod/containers/nginx-test/json' | jq -C .Image
"a25dfb1cd178de4f942ab5a87d3d999e3f981bb9f36fc6ee38b04669e14c32d2"

So, overall, this could be implemented, but would require a special flag for "podman mode".

ghost commented 3 years ago

+1 I'm considering transitioning future systems to use Podman and would like to see these edge cases worked out.

simskij commented 3 years ago

+1 I'm considering transitioning future systems to use Podman and would like to see these edge cases worked out.

feel free to join in on the efforts 👍🏽

stale[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

dbrgn commented 2 years ago

Note that podman has something like watchtower built-in: https://docs.podman.io/en/latest/markdown/podman-auto-update.1.html
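
For anyone curious, a minimal sketch of that mechanism, with an example container name and image. Auto-update only acts on containers that carry the autoupdate label and run under a systemd unit; the unit-generation step is sketched a bit further down:

```sh
# opt a container in to podman's built-in updater via a label
podman create --name web --label io.containers.autoupdate=registry docker.io/library/nginx:latest
# once the container runs under a systemd unit, preview pending updates
podman auto-update --dry-run
```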

R8s6 commented 2 years ago

Note that podman has something like watchtower built-in: https://docs.podman.io/en/latest/markdown/podman-auto-update.1.html

I see that one needs to create a systemd unit for each container that should be auto-updated. If I wanted all my containers to be auto-updated, does that mean I need to create a systemd unit file for each one (instead of having a single unit auto-update all containers, including ones installed in the future)? Thanks

d-513 commented 1 year ago

@piksel it now returns the image ID, so this could be done. Ulimits are still an issue.

MartinX3 commented 1 year ago

@R8s6 please read the linked docs. Podman generates the systemd files for you.
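
As a hedged example of that generation step, assuming a container named web and user-level systemd (newer podman versions recommend Quadlet instead):

```sh
# write container-web.service into the current directory
podman generate systemd --new --files --name web
mkdir -p ~/.config/systemd/user
mv container-web.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-web.service
```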

d-513 commented 1 year ago

I made a PR, but still facing this issue:

cannot set memory swappiness with cgroupv2: OCI runtime error

Seems that podman does NOT like explicitly specifying everything, even if it was defaulted to that value.
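
One way to see what podman hands back for those defaulted values is the docker-compat inspect endpoint; the socket path and container name below are examples:

```sh
# podman's docker-compat API reports 0 here even when swappiness was never set
sudo curl -s --unix-socket /run/podman/podman.sock \
  'http://d/v3.0.0/containers/nginx-test/json' | jq '.HostConfig.MemorySwappiness'
```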

DocMAX commented 1 year ago

I made a PR, but still facing this issue:

cannot set memory swappiness with cgroupv2: OCI runtime error

Seems that podman does NOT like explicitly specifying everything, even if it was defaulted to that value.

I have this error on multiple containers now. How can I fix what watchtower messed up?

d-513 commented 1 year ago

I made a PR, but still facing this issue:

cannot set memory swappiness with cgroupv2: OCI runtime error

Seems that podman does NOT like explicitly specifying everything, even if it was defaulted to that value.

I have this error on multiple containers now. How can I fix what watchtower messed up?

remove the containers and create them again
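
A minimal sketch of that, with a hypothetical container name and image; reuse whatever run options or compose file you originally used:

```sh
# remove the broken container, then start it again from its image
podman rm -f mycontainer
podman run -d --name mycontainer docker.io/library/nginx:latest
```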

xtexChooser commented 1 year ago

The same error.

xpahos commented 1 year ago

The same error.

Try to build podman with this commit: https://github.com/containers/podman/commit/6ea703b79880c7f5119fe5355074f8e971df6626

CONTAINER ID  IMAGE                                   COMMAND               CREATED       STATUS       PORTS       NAMES
2e9e8b635337  quay.io/outline/shadowbox:stable        /bin/sh -c /cmd.s...  39 hours ago  Up 39 hours              shadowbox
58d75a814a6a  docker.io/containrrr/watchtower:latest  --cleanup --label...  39 hours ago  Up 39 hours              watchtower
xtexChooser commented 1 year ago

thanks!

xtexChooser commented 1 year ago

One container could not be started after re-creation:

time="2023-05-18T18:10:24Z" level=info msg="Found new docker.io/binwiederhier/ntfy:latest image (2434c49f7c33)"

time="2023-05-18T18:10:26Z" level=info msg="Stopping /ntfy (09b65f2ca3c5) with SIGTERM"

time="2023-05-18T18:10:27Z" level=info msg="Creating /ntfy"

time="2023-05-18T18:10:27Z" level=error msg="Error response from daemon: crun: cannot set memory swappiness with cgroupv2: OCI runtime error"

time="2023-05-18T18:10:27Z" level=info msg="Session done" Failed=1 Scanned=9 Updated=0 notify=no

time="2023-05-18T18:10:27Z" level=error msg="Failed to send shoutrrr notification" error="failed to send ntfy notification: got HTTP 502 Bad Gateway" index=0 notify=no service=ntfy
MastaG commented 1 year ago

Same here... it re-creates the container but it fails to start.

n-hass commented 1 year ago

I made a PR, but still facing this issue:


cannot set memory swappiness with cgroupv2: OCI runtime error

Seems that podman does NOT like explicitly specifying everything, even if it was defaulted to that value.

Confirming issue is still present in podman release v4.6.0

loeffelpan commented 9 months ago

Still present in 4.8.x. I'm using Nextcloud AIO with rootless podman. For the mastercontainer update it uses watchtower, which works fine with docker but not with podman. I get the swappiness error on container start after watchtower has created the new one.

Does watchtower do something similar to executing podman container clone ... via the CLI? I found this issue related to the clone command: https://github.com/containers/podman/issues/13916

Unfortunately it is unsolved. Maybe someone can raise a new issue to solve this in podman?

lazyzyf commented 9 months ago

I have the same issue.

chisaato commented 2 months ago

I made a PR, but still facing this issue:


cannot set memory swappiness with cgroupv2: OCI runtime error

Seems that podman does NOT like explicitly specifying everything, even if it was defaulted to that value.

Confirming issue is still present in podman release v4.6.0

Same on podman 5.2.2


I think this may be because watchtower sets memory swappiness to 0 for the containers it creates. I did some research on the low-level runtime podman uses by default, which is crun. See here:

https://github.com/containers/crun/blob/f54d383e0012a698262ca7b93beb971f99b1f238/src/libcrun/cgroup-resources.c#L878

  if (memory->swappiness_present)
    {
      if (cgroup2)
        return crun_make_error (err, 0, "cannot set memory swappiness with cgroupv2");

      len = sprintf (fmt_buf, "%" PRIu64, memory->swappiness);
      ret = write_cgroup_file (dirfd, "memory.swappiness", fmt_buf, len, err);
      if (UNLIKELY (ret < 0))
        return ret;
    }

I used execsnoop from bcc-tools to check what crun does when restarting containers.


Cool, let's dig deeper into this userdata.


Then I found that the container has memory.swappiness set in its config.json; you can find the file using

podman inspect <container>| grep OCIConfigPath

Then compare that file with one from a container created directly by podman.
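
A hedged one-liner for that comparison, assuming rootful podman and jq installed (paths and container name are examples; rootless paths differ):

```sh
# path to the OCI config podman generated for the container
cfg=$(sudo podman inspect nginx-test --format '{{.OCIConfigPath}}')
# the swappiness value written into the OCI spec; crun rejects any value here on cgroup v2
sudo jq '.linux.resources.memory.swappiness' "$cfg"
```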


But where does the 0 value come from? Inspecting a container on a server running docker, the swappiness value is not set.


How about using podman?


Well, podman returns zero even though this value was never set. Watchtower then copies it and sends the container config back to the docker socket (emulated by podman) with swappiness set to zero, and BOOM: crun rejects it on cgroup v2.
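
A quick host-side check of the cgroup part of this, assuming a typical systemd distro:

```sh
# prints cgroup2fs on a cgroup v2 host, where crun refuses any swappiness setting
stat -fc %T /sys/fs/cgroup
```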

tuxillo commented 4 weeks ago

What's missing here to get the same functionality as for docker?

chisaato commented 3 weeks ago

What's missing here to get the same functionality as for docker?

podman needs to return the same values as the docker API, and podman says it may be fixed in v6

lazyzyf commented 3 weeks ago

What's missing here to get the same functionality as for docker?

podman needs to return the same values as the docker API, and podman says it may be fixed in v6

so this feature is gated by podman, rather than watchtower, right?

warlordattack commented 5 days ago

This compose file works with Rocky Linux (Proxmox VM) / Podman:

  1. disable SELinux in Rocky Linux
  2. in the compose file, do not use: user: ${PUID}:${PGID}
  3. add: security_opt:
     - label=disable

docker.sock is automatically translated to podman.sock by podman, so this mapping works in my Podman setup:

---
services:
  srv_watchtower:
    container_name: ${C_WT}
    hostname: ${C_WT_HOST}
    image: ${C_WT_IMG}
    restart: ${C_ALL_RESTART}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:rw # check 1. 2. 3. !!!
    # 1. - user: ${PUID}:${PGID} # UID/GID disabled
    security_opt: # 2. SELinux disabled
      - label=disable # 3. - label=disable added
    environment:
      - TZ=${TZ}
      #- WATCHTOWER_RUN_ONCE=true
      - WATCHTOWER_POLL_INTERVAL=86400 # 1h=3600 # 24h=86400
      - WATCHTOWER_MONITOR_ONLY=true
      - WATCHTOWER_CLEANUP=true
      #- WATCHTOWER_ROLLING_RESTART=true # restart containers one by one
      - WATCHTOWER_REVIVE_STOPPED=false
      - WATCHTOWER_INCLUDE_STOPPED=true
      #- WATCHTOWER_DEBUG=true      
      #- WATCHTOWER_LABEL_ENABLE=${WATCHTOWER_LABEL_ENABLE}
      - WATCHTOWER_NOTIFICATIONS=gotify
      - WATCHTOWER_NOTIFICATION_GOTIFY_URL=http://${C_GOT_HOST}:${C_GOT_80}/
      - WATCHTOWER_NOTIFICATION_GOTIFY_TOKEN=${C_WT_GOTIFY}
    labels:
      - "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
    ports:
      - ${C_WT_80}:80
      - ${C_WT_443}:443
    networks:
      web:
        ipv4_address: ${C_WT_IP_WEB}
networks: 
  web: 
    external: true
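
A quick sanity check of the docker.sock mapping used above; this assumes your setup actually exposes /var/run/docker.sock (e.g. via the podman-docker compatibility package):

```sh
ls -l /var/run/docker.sock
# expect "OK" from a working docker-compatible API
curl -s --unix-socket /var/run/docker.sock http://d/_ping
```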


Best regards :)

I had this output after a run (I don't know whether the update errors are due to podman):

Unable to update container "/r-koodo": Error response from daemon: {"message":"x509: certificate is valid for 6b43d87c96397f68a135c01c1f292377.9058f1fa15a681094bc241d87108e81b.traefik.default, not localhost"}. Proceeding to next.
Could not do a head request for "docker.io/matrixdotorg/synapse:latest", falling back to regular pull.
Reason: Get "https://index.docker.io/v2/": dial tcp: lookup index.docker.io on 192.168.10.1:53: read udp 192.168.10.86:36764->192.168.10.1:53: i/o timeout
Unable to update container "/net-perplexica-b": Error response from daemon: {"message":"x509: certificate is valid for 6b43d87c96397f68a135c01c1f292377.9058f1fa15a681094bc241d87108e81b.traefik.default, not localhost"}. Proceeding to next.
Unable to update container "/net-perplexica-f": Error response from daemon: {"message":"x509: certificate is valid for 6b43d87c96397f68a135c01c1f292377.9058f1fa15a681094bc241d87108e81b.traefik.default, not localhost"}. Proceeding to next.
Found new docker.io/rustdesk/rustdesk-server:latest image (e0892e67d5a7)
Could not do a head request for "docker.io/tensorchord/pgvecto-rs@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0", falling back to regular pull.
Reason: Parsed container image ref has no tag: docker.io/tensorchord/pgvecto-rs@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
Found new ghcr.io/immich-app/immich-server:release image (fd58d779b0a6)
Found new docker.io/tiredofit/traefik-cloudflare-companion:latest image (e620103b4344)
Found new docker.io/rustdesk/rustdesk-server:latest image (e0892e67d5a7)
Found new ghcr.io/immich-app/immich-server:release image (fd58d779b0a6)

And this is my portainer compose file, which also uses docker.sock with podman/Rocky and works:

---
services:
  srv_portainer:
    container_name: ${C_PTN}
    hostname: ${C_PTN_HOST}
    image: ${C_PTN_IMG}
    restart: ${C_ALL_RESTART}
    ports:
      - ${C_PTN_9000}:9000
      - ${C_PTN_8000}:8000
      - ${C_PTN_9443}:9443
    volumes:
      - ${REP_SSL}:/certs
      - ${REP_APPDATA}/${C_PTN}/data:/data:rw
      - ${REP_APPDATA}/${C_PTN}/config:/config:rw
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:rw # check 1. 2. 3. !!!
    # 1. - user: ${PUID}:${PGID} # UID/GID disabled
    security_opt: # 2. SELinux disabled
      - label=disable # 3. - label=disable added
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - UMASK=${UMASK}
      - TZ=${TZ}
      - AGENT_SECRET=${C_PTN_AGENT_SECRET}
    labels:
      - "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
    networks:
      web:
        ipv4_address: ${C_PTN_IP_WEB}
networks: 
  web: 
    external: true
lazyzyf commented 3 days ago

How come you map the volume to /var/run/docker.sock? Mine is /run/user/1000/podman/podman.sock.

warlordattack commented 3 days ago

The mapping (full docker compose in my other post):

volumes:
  - /var/run/docker.sock:/var/run/docker.sock:rw

You need to:

  1. disable SELinux in Rocky Linux (you can find a tutorial on Google for this)
  2. remove "user:" from your docker compose
  3. add security_opt: - label=disable to your docker compose

Podman translates docker directives to podman directives automatically (you do not need to use podman.sock; use docker.sock instead). Try my docker compose as presented in the other post.
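
For the rootless socket lazyzyf mentions, a sketch of the equivalent mapping; uid 1000 is an example, use your own $XDG_RUNTIME_DIR:

```sh
# expose the per-user podman API socket
systemctl --user enable --now podman.socket
# mount it where watchtower expects the docker socket
podman run -d --name watchtower \
  -v /run/user/1000/podman/podman.sock:/var/run/docker.sock \
  docker.io/containrrr/watchtower
```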
