containrrr / watchtower

A process for automating Docker container base image updates.
https://containrrr.dev/watchtower/
Apache License 2.0

Log Spam via Email due to rate limit? #861

Closed r2ixuz closed 3 years ago

r2ixuz commented 3 years ago

Describe the bug I am a private user who updates his 9 Docker containers (all from the hub.docker.com registry) through Watchtower every Saturday, so only once a week. For several months I have seen a lot of this message in the report email:

Could not do a head request, falling back to regular pull.

If I understand correctly, this should already be fixed, as described in #715, and I am already using the latest version. Why do I still get this message spam? Is this due to the rate limit which hub.docker.com introduced a while ago?

This "HEAD" warning message appears immediately for the first container that Watchtower tries to update.
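For context, the "head request" in the warning is the registry manifest check Watchtower performs to compare image digests without spending a rate-limited pull. A minimal sketch of that flow against the Docker Hub registry v2 API (the endpoints and headers are Docker's documented API, but this is an illustration, not Watchtower's actual code):

```python
# Sketch of a digest check via HEAD request against Docker Hub's registry API.
# Endpoints and headers follow the registry v2 API docs; this illustrates the
# mechanism only, it is not Watchtower's implementation.
import json
import urllib.request

MANIFEST_ACCEPT = "application/vnd.docker.distribution.manifest.v2+json"

def auth_url(repo: str) -> str:
    # Anonymous pull tokens are issued by auth.docker.io for a repo scope.
    return ("https://auth.docker.io/token"
            f"?service=registry.docker.io&scope=repository:{repo}:pull")

def manifest_url(repo: str, tag: str) -> str:
    return f"https://registry-1.docker.io/v2/{repo}/manifests/{tag}"

def remote_digest(repo: str, tag: str) -> str:
    # Fetch an anonymous token, then HEAD the manifest. The digest comes back
    # in the Docker-Content-Digest header; a HEAD does not count as a pull.
    with urllib.request.urlopen(auth_url(repo)) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(manifest_url(repo, tag), method="HEAD")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", MANIFEST_ACCEPT)
    with urllib.request.urlopen(req) as resp:
        return resp.headers["Docker-Content-Digest"]

# Usage (requires network):
#   digest = remote_digest("library/nginx", "latest")
#   ...then compare against the local image's RepoDigests to decide on a pull.
```

When this HEAD check fails for some reason, Watchtower falls back to a regular (rate-limited) pull, which is what the warning announces.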

To Reproduce Steps to reproduce the behavior: Let Watchtower update all containers weekly.

Expected behavior I should only see in the report email which containers were updated, as it was some months ago.


simskij commented 3 years ago

Hi,

We need to see the command (or compose file) you used to launch watchtower, as well as any logs you have that might help bring clarity to the issue.

Best, Simme

r2ixuz commented 3 years ago

So here is my docker-compose file:

```yaml
version: '2'
services:
  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    environment:
      - WATCHTOWER_NOTIFICATIONS=email
      - WATCHTOWER_NOTIFICATION_EMAIL_FROM=xyz@domain.com
      - WATCHTOWER_NOTIFICATION_EMAIL_TO=xyz@domain.com
      - WATCHTOWER_NOTIFICATION_EMAIL_SERVER=192.168.1.254
      - WATCHTOWER_INCLUDE_STOPPED=true
      - WATCHTOWER_MONITOR_ONLY=false
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_SCHEDULE=0 0 6 * * 6
      - TZ=Europe/Berlin
    hostname: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```
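As an aside, `WATCHTOWER_SCHEDULE` takes a 6-field cron expression with a leading seconds field (unlike standard 5-field cron), so the value above fires at 06:00:00 every Saturday — consistent with the weekly run described in the report. A quick sketch of how the fields map:

```python
# The six fields of a Watchtower cron schedule, seconds first
# (standard cron has five fields; Watchtower's scheduler adds seconds).
fields = ["seconds", "minute", "hour", "day of month", "month", "day of week"]
expr = "0 0 6 * * 6"
parsed = dict(zip(fields, expr.split()))
# hour=6 and day-of-week=6 (Saturday), i.e. every Saturday at 06:00:00
print(parsed)
```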

I just tried the new version 1.2.0, but the container just keeps restarting with:

```
standard_init_linux.go:211: exec user process caused "no such file or directory"
```

simskij commented 3 years ago

Thank you for the swift report! I've just released v1.2.1 which should fix this issue. Please let me know whether it resolves it for you.

DannyBoyKN commented 3 years ago

I still get the message "Could not do a head request, falling back to regular pull.", most recently yesterday evening (2021-04-01), without any container updates. I restarted watchtower again this morning, but the log only reports:

```
time="2021-04-02T07:57:14Z" level=info msg="Watchtower v0.0.0-unknown\nUsing notifications: pushover\nChecking all containers (except explicitly disabled with label)\nScheduling first run: 2021-04-02 19:57:14 +0000 UTC\nNote that the first check will be performed in 11 hours, 59 minutes, 59 seconds"
```

Why v0.0.0-unknown? How do I know that version 1.2.1 is actually being executed?

This is my compose file:

```yaml
version: "3.6"
services:
  watchtower:
    image: containrrr/watchtower:armhf-latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_NOTIFICATIONS=shoutrrr
      - "WATCHTOWER_NOTIFICATION_URL=pushover://shoutrrr:<devicetag>@<usertag>/?devices=<device>"
      - "WATCHTOWER_NOTIFICATION_TEMPLATE={{range .}}{{.Time.Format \"2006-01-02 15:04:05\"}} ({{.Level}}): {{.Message}}{{println}}{{end}}"
    command: --interval 43200 --cleanup
    restart: unless-stopped
```

piksel commented 3 years ago

@DannyBoyKN A feature to display the version in the startup message was merged, but the release scripts were never updated to populate it. The only released version that displays v0.0.0-unknown is v1.2.1, though. Sorry for the confusion.

As for the "fix" mentioned above, it was referring to the broken v1.2.0 that @r2ixuz experienced, not the messages about the fallback. Those could happen for a number of reasons, so we need to look at the logs, preferably with --debug enabled, to investigate why the failures occur. And it's more likely than not that your issue is not the same as the OP's, so please create a separate ticket for it. Thanks!

DannyBoyKN commented 3 years ago

Moved "my" concerns to new issue #880

r2ixuz commented 3 years ago

So here is the log from today's run:

```
time="2021-04-03T06:00:00+02:00" level=warning msg="Could not do a head request, falling back to regular pull." container=/unbound image="mvance/unbound:latest"
time="2021-04-03T06:00:01+02:00" level=warning msg="Could not do a head request, falling back to regular pull." container=/watchtower image="containrrr/watchtower:latest"
time="2021-04-03T06:00:03+02:00" level=warning msg="Could not do a head request, falling back to regular pull." container=/wtdummy-stubby image="debian:stable-slim"
time="2021-04-03T06:00:07+02:00" level=info msg="Found new debian:stable-slim image (sha256:f052c451335940bea31ab0e58feff9fc657959c49150be8ed3d4dd592790bb3b)"
time="2021-04-03T06:00:07+02:00" level=warning msg="Could not do a head request, falling back to regular pull." container=/pyload-data image="linuxserver/pyload:latest"
time="2021-04-03T06:00:15+02:00" level=info msg="Found new linuxserver/pyload:latest image (sha256:f6c9ad449d48a419db88b03b9a51d2e3a3db1b22de9f79cad6a01159b56d62bd)"
time="2021-04-03T06:00:15+02:00" level=warning msg="Could not do a head request, falling back to regular pull." container=/pyload-storage image="linuxserver/pyload:latest"
time="2021-04-03T06:00:17+02:00" level=info msg="Found new linuxserver/pyload:latest image (sha256:f6c9ad449d48a419db88b03b9a51d2e3a3db1b22de9f79cad6a01159b56d62bd)"
time="2021-04-03T06:00:17+02:00" level=warning msg="Could not do a head request, falling back to regular pull." container=/iobroker image="buanet/iobroker:latest"
time="2021-04-03T06:00:38+02:00" level=info msg="Found new buanet/iobroker:latest image (sha256:32a33e2ad88bf600a630e3f949e2323921dc0dc1188e1058b1d7c7588e2345d7)"
time="2021-04-03T06:00:38+02:00" level=warning msg="Could not do a head request, falling back to regular pull." container=/deconz image="marthoc/deconz:latest"
time="2021-04-03T06:00:40+02:00" level=warning msg="Could not do a head request, falling back to regular pull." container=/homebridge image="oznu/homebridge:no-avahi"
time="2021-04-03T06:00:41+02:00" level=warning msg="Could not do a head request, falling back to regular pull." container=/smokeping image="linuxserver/smokeping:latest"
time="2021-04-03T06:00:48+02:00" level=info msg="Found new linuxserver/smokeping:latest image (sha256:ddf3969d6e611bda503e592f208b7973fd4d23ccd94a324d73ca8fd69f801558)"
time="2021-04-03T06:00:48+02:00" level=warning msg="Could not do a head request, falling back to regular pull." container=/wtdummy-zpush image="ubuntu:18.04"
time="2021-04-03T06:00:50+02:00" level=warning msg="Could not do a head request, falling back to regular pull." container=/wtdummy-nginxsslproxy image="nginx:latest"
time="2021-04-03T06:00:55+02:00" level=info msg="Found new nginx:latest image (sha256:7ce4f91ef623b9672ec12302c4a710629cd542617c1ebc616a48d06e2a84656a)"
time="2021-04-03T06:00:55+02:00" level=warning msg="Could not do a head request, falling back to regular pull." container=/pihole image="pihole/pihole:latest"
time="2021-04-03T06:00:57+02:00" level=warning msg="Could not do a head request, falling back to regular pull." container=/portainer image="portainer/portainer-ce:latest"
```

I forgot to mention that I am using the hub.docker.com registry without any credentials, just as an anonymous user...

piksel commented 3 years ago

I don't see anything obviously wrong in the normal log, but the reason for the failed HEAD requests is displayed if you pass --debug.

r2ixuz commented 3 years ago

Watchtower works as expected, so I think you are right: this is normal behaviour. But if it is normal, why is it logged at "warning" level? IMHO it should be "debug" level.

piksel commented 3 years ago

It's a warning because it consumes your limited pull rate from Docker Hub. For other container registries it's just a debug message, but since Docker Hub actually has rate limiting, it could be detrimental to your setup (if you need your rate "slots" for normal usage).
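As a side note, you can check how much of that quota is left: Docker documents a special `ratelimitpreview/test` repository whose manifest HEAD response carries `ratelimit-limit` and `ratelimit-remaining` headers. A hedged sketch (the endpoints are from Docker's docs; this is not something Watchtower itself does):

```python
# Sketch: query the current anonymous Docker Hub pull quota via the
# ratelimitpreview/test repo that Docker documents for this purpose.
# Illustration only; not part of Watchtower.
import json
import urllib.request

TOKEN_URL = ("https://auth.docker.io/token?service=registry.docker.io"
             "&scope=repository:ratelimitpreview/test:pull")
CHECK_URL = "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest"

def parse_limit(header: str) -> int:
    # Headers look like "100;w=21600", i.e. 100 pulls per 21600-second window.
    return int(header.split(";")[0])

def remaining_pulls() -> int:
    with urllib.request.urlopen(TOKEN_URL) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(CHECK_URL, method="HEAD")
    req.add_header("Authorization", f"Bearer {token}")
    with urllib.request.urlopen(req) as resp:
        return parse_limit(resp.headers["ratelimit-remaining"])

# Usage (requires network):
#   print(remaining_pulls())
```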

r2ixuz commented 3 years ago

Hm, I get it. But I still think it should somehow be configurable to get a report mail for updated containers without these warnings.

I would guess private users never hit the rate limit while updating, and business users should already be aware of this "new" Docker Hub behaviour.

piksel commented 3 years ago

Yeah, but you could also see it as "it's a warning because this should work but, for some reason, it doesn't". Adding an option to just disable the warning has been disfavored in favor of actually fixing the underlying issues.

I'm not sure what you mean regarding the last part, but the reason for the message is that Watchtower tries to do an update check without using one of the rate-limited pulls, which normally doesn't yield a warning. It's only because something goes wrong and the check fails that this warning is displayed.

simskij commented 3 years ago

An option to suppress these warnings will be included in v1.3.0, which hopefully will be released in the coming days.

ghost commented 3 years ago

To avoid important communication getting lost in closed issues that no one monitors, I'll go ahead and lock this issue. If you want to continue the discussion, please open a new issue. Thank you! 🙏🏼