R8s6 opened this issue 3 years ago (Open)
Looks like podman has a docker-compatible REST API, so it should work out of the box as far as I can tell. I haven't tried it myself though: https://podman.io/blogs/2020/07/01/rest-versioning.html
If the podman service is running, this should work:
podman run -v /var/run/podman/podman.sock:/var/run/docker.sock docker.io/containrrr/watchtower
Update: it sort of worked when I tried it:
2021-08-31T18:00:53Z [D] Doing a HEAD request to fetch a digest
url: https://index.docker.io/v2/containrrr/watchtower/manifests/latest
2021-08-31T18:00:53Z [D] Found a remote digest to compare with
remote: sha256:3283e0b5be326d77ff4f4e8b7a91d46aaa1d511c74877b5a32f161548812d00c
2021-08-31T18:00:53Z [D] Comparing
local: sha256:3283e0b5be326d77ff4f4e8b7a91d46aaa1d511c74877b5a32f161548812d00c
remote: sha256:3283e0b5be326d77ff4f4e8b7a91d46aaa1d511c74877b5a32f161548812d00c
2021-08-31T18:00:53Z [D] Found a match
2021-08-31T18:00:53Z [D] No pull needed. Skipping image.
2021-08-31T18:00:53Z [I] Found new docker.io/containrrr/watchtower:latest image (9167b324e914)
2021-08-31T18:00:53Z [D] This is the watchtower container /sweet_greider
2021-08-31T18:00:53Z [D] Renaming container /sweet_greider (4c4ec40ff1a6) to PPctWJctHvcNrpXpLaCSQaSdAmRtxEdN
2021-08-31T18:00:53Z [I] Creating /sweet_greider
2021-08-31T18:00:53Z [E] Error response from daemon: fill out specgen: ulimit option "RLIMIT_NOFILE=1048576:1048576" requires name=SOFT:HARD, failed to be parsed: invalid ulimit type: RLIMIT_NOFILE
2021-08-31T18:00:53Z [D] Session done: 1 scanned, 0 updated, 1 failed
Not really sure what is going on here. It checks the watchtower image and concludes "No pull needed. Skipping image.", but then goes ahead and updates it anyway?
Then it fails when trying to rename itself. It might also just be that my podman installation is broken... perhaps someone with a known working setup can try it and share their experience?
The cause of the error above is: https://github.com/containers/podman/issues/9803 (Finding 4). Additionally, the "name" is not accepted by podman, since it has a leading slash:
INFO[0003] Creating /nginx-test
ERRO[0003] Error response from daemon: container create: error running container create option: names must match [a-zA-Z0-9][a-zA-Z0-9_.-]*: invalid argument
By patching these two in the new config I was able to successfully recreate a podman container:
// pkg/container/client.go:214
hostConfig.Ulimits = nil // podman rejects the docker-reported ulimit names
name = name[1:]          // strip the leading slash from the container name
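To make the name fix concrete, here is a standalone sketch (the helper name `sanitizeContainerName` is mine, not watchtower's): the docker API reports container names with a leading slash, which the podman name validation pattern quoted in the error above rejects.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// podman validates container names against this pattern (taken from the
// error message above); docker-reported names carry a leading "/" that fails it.
var podmanNameRe = regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9_.-]*$`)

// sanitizeContainerName strips the leading slash that the docker API
// prepends to container names. Hypothetical helper, not watchtower code.
func sanitizeContainerName(name string) string {
	return strings.TrimPrefix(name, "/")
}

func main() {
	name := "/nginx-test"
	fmt.Println(podmanNameRe.MatchString(name))                        // false: podman rejects it
	fmt.Println(podmanNameRe.MatchString(sanitizeContainerName(name))) // true
}
```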
This still leaves the issue of it always trying to update the containers though:
DEBU[0002] No pull needed. Skipping image.
INFO[0002] Found new docker.io/library/nginx:latest image (a25dfb1cd178)
Found the cause of the image always being treated as "stale":
nils@xiwangmu:~ $ sudo curl -s --unix-socket /run/podman/podman.sock 'http://d/v3.0.0/containers/nginx-test/json' | jq -C '.Image'
"docker.io/library/nginx:latest"
Podman doesn't return the image ID in the container inspect result, giving the image "name" instead. This could be solved by using the podman-specific API endpoint:
sudo curl -s --unix-socket /run/podman/podman.sock 'http://d/v3.0.0/libpod/containers/nginx-test/json' | jq -C .Image
"a25dfb1cd178de4f942ab5a87d3d999e3f981bb9f36fc6ee38b04669e14c32d2"
So, overall, this could be implemented, but would require a special flag for "podman mode".
+1 I'm considering transitioning future systems to use Podman and would like to see these edge cases worked out.
feel free to join in on the efforts 👍🏽
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Note that podman has something like watchtower built-in: https://docs.podman.io/en/latest/markdown/podman-auto-update.1.html
I see one needs to create a systemd unit for each container that should be auto-updated. If I want all my containers auto-updated, does that mean I need to create a systemd unit file for each one (instead of a single unit that auto-updates all containers, including ones installed in the future)? Thanks
@piksel it now returns the image ID, so this could be done. Ulimits are still an issue.
@R8s6 please read the linked docs. Podman generates the systemd files for you.
I made a PR, but I'm still facing this issue:
cannot set memory swappiness with cgroupv2: OCI runtime error
It seems that podman does NOT like having everything specified explicitly, even when the value matches the default.
I have this error on multiple containers now. How can I fix what watchtower messed up?
remove the containers and create them again
The same error.
Try to build podman with this commit: https://github.com/containers/podman/commit/6ea703b79880c7f5119fe5355074f8e971df6626
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e9e8b635337 quay.io/outline/shadowbox:stable /bin/sh -c /cmd.s... 39 hours ago Up 39 hours shadowbox
58d75a814a6a docker.io/containrrr/watchtower:latest --cleanup --label... 39 hours ago Up 39 hours watchtower
thanks!
one container could not be started after re-creation
time="2023-05-18T18:10:24Z" level=info msg="Found new docker.io/binwiederhier/ntfy:latest image (2434c49f7c33)"
time="2023-05-18T18:10:26Z" level=info msg="Stopping /ntfy (09b65f2ca3c5) with SIGTERM"
time="2023-05-18T18:10:27Z" level=info msg="Creating /ntfy"
time="2023-05-18T18:10:27Z" level=error msg="Error response from daemon: crun: cannot set memory swappiness with cgroupv2: OCI runtime error"
time="2023-05-18T18:10:27Z" level=info msg="Session done" Failed=1 Scanned=9 Updated=0 notify=no
time="2023-05-18T18:10:27Z" level=error msg="Failed to send shoutrrr notification" error="failed to send ntfy notification: got HTTP 502 Bad Gateway" index=0 notify=no service=ntfy
Same here... it re-creates the container, but it fails to start.
Confirming issue is still present in podman release v4.6.0
Still present in 4.8.x. I'm using Nextcloud AIO with rootless podman. For the mastercontainer update it uses watchtower, which works fine with docker but not with podman. I get the swappiness error on container start after watchtower creates the new one.
Does watchtower do something similar to executing podman container clone ... via the CLI?
I found this issue related to the clone command:
https://github.com/containers/podman/issues/13916
Unfortunately it is unsolved. Maybe someone can raise a new issue to get this fixed in podman?
I have the same issue.
Same on podman 5.2.2
I think this may be because watchtower sets memory swappiness to 0 for containers. I did some research on podman's default low-level runtime, which is crun. See here:
if (memory->swappiness_present)
{
if (cgroup2)
return crun_make_error (err, 0, "cannot set memory swappiness with cgroupv2");
len = sprintf (fmt_buf, "%" PRIu64, memory->swappiness);
ret = write_cgroup_file (dirfd, "memory.swappiness", fmt_buf, len, err);
if (UNLIKELY (ret < 0))
return ret;
}
I used execsnoop from bcc-tools to check what crun does when restarting containers.
Cool, let's dig deeper into this userdata.
Then I found that the container has memory.swappiness set in its config.json. You can find the file using podman inspect <container> | grep OCIConfigPath, then compare it with the config of containers created directly by podman.
But who supplies the 0 value? I inspected a container on a server running docker, and then one under podman: podman reports zero even when this value was never set. Watchtower then copies it and sends the container info back to the docker socket (emulated by podman) with swappiness set to zero, and BOOM! crun rejects it on cgroupv2.
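Based on that diagnosis, a conceivable client-side workaround would be to drop a zero swappiness value before re-creating the container. A sketch with a stand-in struct (in the real docker SDK this would be container.HostConfig, where MemorySwappiness is a *int64; `clearDefaultSwappiness` is a hypothetical helper, not watchtower code):

```go
package main

import "fmt"

// hostConfig stands in for the docker API's HostConfig, where
// MemorySwappiness is a *int64 and nil means "not set".
type hostConfig struct {
	MemorySwappiness *int64
}

// clearDefaultSwappiness drops a zero swappiness value before the create
// call, since podman reports 0 even when the value was never set, and crun
// rejects any explicit value on cgroupv2. Hypothetical fix sketch.
func clearDefaultSwappiness(hc *hostConfig) {
	if hc.MemorySwappiness != nil && *hc.MemorySwappiness == 0 {
		hc.MemorySwappiness = nil
	}
}

func main() {
	zero := int64(0)
	hc := &hostConfig{MemorySwappiness: &zero}
	clearDefaultSwappiness(hc)
	fmt.Println(hc.MemorySwappiness == nil) // true: value cleared
}
```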
What's missing here to get the same functionality as for docker?
podman needs to return the same value as the docker API, and podman says it may be fixed in v6.
so this feature is gated by podman, rather than watchtower, right?
This compose file works with Rocky Linux (Proxmox VM) / Podman.
docker.sock is automatically translated to podman.sock by podman, so this mapping works in my Podman setup:
---
services:
  srv_watchtower:
    container_name: ${C_WT}
    hostname: ${C_WT_HOST}
    image: ${C_WT_IMG}
    restart: ${C_ALL_RESTART}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:rw # check 1. 2. 3. !!!
    # 1. - user: ${PUID}:${PGID} # UID/GID disabled
    security_opt: # 2. SELinux disabled
      - label=disable # 3. - label=disable added
    environment:
      - TZ=${TZ}
      #- WATCHTOWER_RUN_ONCE=true
      - WATCHTOWER_POLL_INTERVAL=86400 # 1h=3600 # 24h=86400
      - WATCHTOWER_MONITOR_ONLY=true
      - WATCHTOWER_CLEANUP=true
      #- WATCHTOWER_ROLLING_RESTART=true # restart containers one by one
      - WATCHTOWER_REVIVE_STOPPED=false
      - WATCHTOWER_INCLUDE_STOPPED=true
      #- WATCHTOWER_DEBUG=true
      #- WATCHTOWER_LABEL_ENABLE=${WATCHTOWER_LABEL_ENABLE}
      - WATCHTOWER_NOTIFICATIONS=gotify
      - WATCHTOWER_NOTIFICATION_GOTIFY_URL=http://${C_GOT_HOST}:${C_GOT_80}/
      - WATCHTOWER_NOTIFICATION_GOTIFY_TOKEN=${C_WT_GOTIFY}
    labels:
      - "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
    ports:
      - ${C_WT_80}:80
      - ${C_WT_443}:443
    networks:
      web:
        ipv4_address: ${C_WT_IP_WEB}
networks:
  web:
    external: true
Best regards :)
Had this output after a run (I don't know if the update errors are due to podman):
Unable to update container "/r-koodo": Error response from daemon: {"message":"x509: certificate is valid for 6b43d87c96397f68a135c01c1f292377.9058f1fa15a681094bc241d87108e81b.traefik.default, not localhost"}. Proceeding to next.
Could not do a head request for "docker.io/matrixdotorg/synapse:latest", falling back to regular pull.
Reason: Get "https://index.docker.io/v2/": dial tcp: lookup index.docker.io on 192.168.10.1:53: read udp 192.168.10.86:36764->192.168.10.1:53: i/o timeout
Unable to update container "/net-perplexica-b": Error response from daemon: {"message":"x509: certificate is valid for 6b43d87c96397f68a135c01c1f292377.9058f1fa15a681094bc241d87108e81b.traefik.default, not localhost"}. Proceeding to next.
Unable to update container "/net-perplexica-f": Error response from daemon: {"message":"x509: certificate is valid for 6b43d87c96397f68a135c01c1f292377.9058f1fa15a681094bc241d87108e81b.traefik.default, not localhost"}. Proceeding to next.
Found new docker.io/rustdesk/rustdesk-server:latest image (e0892e67d5a7)
Could not do a head request for "docker.io/tensorchord/pgvecto-rs@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0", falling back to regular pull.
Reason: Parsed container image ref has no tag: docker.io/tensorchord/pgvecto-rs@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
Found new ghcr.io/immich-app/immich-server:release image (fd58d779b0a6)
Found new docker.io/tiredofit/traefik-cloudflare-companion:latest image (e620103b4344)
Found new docker.io/rustdesk/rustdesk-server:latest image (e0892e67d5a7)
Found new ghcr.io/immich-app/immich-server:release image (fd58d779b0a6)
And this is my portainer compose file, which also uses docker.sock with podman/Rocky and works:
---
services:
  srv_portainer:
    container_name: ${C_PTN}
    hostname: ${C_PTN_HOST}
    image: ${C_PTN_IMG}
    restart: ${C_ALL_RESTART}
    ports:
      - ${C_PTN_9000}:9000
      - ${C_PTN_8000}:8000
      - ${C_PTN_9443}:9443
    volumes:
      - ${REP_SSL}:/certs
      - ${REP_APPDATA}/${C_PTN}/data:/data:rw
      - ${REP_APPDATA}/${C_PTN}/config:/config:rw
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:rw # check 1. 2. 3. !!!
    # 1. - user: ${PUID}:${PGID} # UID/GID disabled
    security_opt: # 2. SELinux disabled
      - label=disable # 3. - label=disable added
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - UMASK=${UMASK}
      - TZ=${TZ}
      - AGENT_SECRET=${C_PTN_AGENT_SECRET}
    labels:
      - "com.centurylinklabs.watchtower.enable=${WATCHTOWER_LABEL_ENABLE}"
    networks:
      web:
        ipv4_address: ${C_PTN_IP_WEB}
networks:
  web:
    external: true
how come you map the volume to /var/run/docker.sock? mine is /run/user/1000/podman/podman.sock.
The mapping is under volumes: in the full docker compose in my other post. Podman translates docker directives to podman directives automatically (you do not need to use podman.sock; use docker.sock instead). Try my docker compose as presented in the other post.
Is your feature request related to a problem? Please describe.
Currently watchtower requires /var/run/docker.sock, which is not present on a system with only podman installed (i.e. where docker is not installed).

Describe the solution you'd like
Potentially supporting podman in the future.

Thanks!