ektorasdj opened 3 months ago
Please see #50
It seems to be fixed by upgrading to the latest version of Docker, or possibly just restarting the Docker service with sudo systemctl restart docker if you're on Linux.
I'd love to get to the bottom of this and find the cause if it's fixable on this end. It would be helpful to know:
Hello, thank you for such a quick answer. Unfortunately I cannot update since it's a Synology NAS and custom Docker updates bring a lot of other issues.
Here are my answers:
2. Last restart was 5 days ago.
In any case, it's good to know that the latest update of Docker fixes it, since at some point in the future it will be fixed on my NAS.
It looks like it could be a problem with the Docker Engine API in version 24.
Not all machines on 24 have the issue though, so it's possible that a restart could help. At least for a bit.
If someone comes along with the same problem on version 27 then I'll put it to the top of my list. But right now it's low priority since it seems to be fixable with an upgrade.
Let me know if you can test this again. I'm curious if any of the updates we've made will fix this.
Also please try upgrading the Synology Container Manager if they've released an update.
Thanks!
Hello! I have tested this and while the issue is still here, it seems that the gaps are less frequent in the graphs.
Unfortunately Synology still uses the old docker package version 24 with no plans to update it as of yet.
Thank you!
This is happening to me still, I have lots of gaps in both of the docker graphs.
I updated to the Beta Synology Container manager (see below version), but the problem still occurs.
docker version
Client:
 Version:           24.0.2
 API version:       1.43
 Go version:        go1.20.4
 Git commit:        610b8d0
 Built:             Thu Aug 1 07:07:08 2024
 OS/Arch:           linux/amd64
 Context:           default

Server:
 Engine:
  Version:          24.0.2
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.4
  Git commit:       b5710a2
  Built:            Thu Aug 1 07:07:31 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.7.1
  GitCommit:        067f5021280b8de2059026fb5c43c4adb0f3f244
 runc:
  Version:          v1.1.7
  GitCommit:        adc1b13
 docker-init:
  Version:          0.19.0
  GitCommit:        ed96d00
It's a bug with the old version that Synology uses. I think I figured out a workaround and will try to release it tomorrow.
Let me know if 0.5.1 fixes this.
I can't test with Synology, but I was able to replicate and fix the issue using an LXC container running Docker 24.0.2.
Hmmm unfortunately not, this is what my graphs look like this morning after I updated last night.
For what it's worth, when I first installed Beszel I don't remember having this problem, and I was a pretty early adopter after seeing a post on r/selfhosted. I have added a lot of containers since then, so it could be that my docker engine is just more overloaded now, but I figured I'd mention it. Happy to go back and test an older version if you have any suggestions.
Hmm, from my side it seems fixed (Synology Container Manager, Docker version 24.0.2, running all day after the 0.5.1 update).
@ektorasdj Awesome! You might want to unsubscribe from thread notifications while I troubleshoot with Nathan.
@nathang21 First of all, impressive resource utilization. You're definitely getting your money's worth.
The problem is indeed related to the number of containers. 24.0.2 seems to have a bug with the one-shot query parameter, so it can end up throttling for one second per container, which leads to a timeout if you have too many containers.
How many containers are you running in total, and do you see the same number of containers populate each time? If so, how many?
Can you try running the agent with the LOG_LEVEL env var set to "debug" and make sure it says DEBUG Docker version=24.0.2 concurrency=200?
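Something like this, roughly, if the agent runs as a container (the container name, image tag, and docker.sock mount here are just how a typical setup looks, not taken from this thread; keep whatever flags you already use):
# Recreate the agent with debug logging, keeping your existing key/port/volume
# settings exactly as they are (they're omitted here for brevity).
docker run -d --name beszel-agent \
  -e LOG_LEVEL=debug \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  henrygd/beszel-agent:latest
# Then confirm the debug line is present:
docker logs beszel-agent 2>&1 | grep "DEBUG Docker"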
I'm short on time tonight, but tomorrow I can give you a little bash script to verify that it's the same issue I was seeing.
Sounds like progress! @henrygd Thanks for noticing :) I will say I do run BOINC which uses a limited set of spare resources available to donate compute power towards research efforts, which smooths out the CPU load around 75% or so on average (as configured).
I have 60 containers running, and that number is fairly static (ie no dynamically scaling workloads), but is slowly trending up as I discover more things to self host in my addiction/hobby.
No rush at all. I just added the env variable, and I see the following logs, which look as expected (followed by timeout spam):
2024/10/04 00:00:57 DEBUG Docker version=24.0.2 concurrency=200
2024/10/04 00:01:35 DEBUG Sending stats data="{Stats:{Cpu:70.18 Mem:17.42 MemUsed:6.18 MemPct:35.5 MemBuffCache:9.57 Swap:12.45 SwapUsed:4.83 DiskTotal:884.16 DiskUsed:829.07 DiskPct:93.77 DiskReadPs:25.45 DiskWritePs:111.15 NetworkSent:1.97 NetworkRecv:1.67 Temperatures:map[coretemp_core_0:63 coretemp_core_1:63 coretemp_core_2:63 coretemp_core_3:63 coretemp_physical_id_0:63] ExtraFs:map[]} Info:{Hostname:synology-nas KernelVersion:4.4.302+ Cores:4 Threads:4 CpuModel:Intel(R) Celeron(R) J4125 CPU @ 2.00GHz Uptime:547394 Cpu:70.18 MemPct:35.5 DiskPct:93.77 AgentVersion:0.5.1} Containers:[]}"
2024/10/04 00:01:35 ERROR Error getting container stats err="Get \"http://localhost/containers/323201cdd8ac/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
...
Cool, I'm going to look into setting that up. I remember doing folding@home on my PS3 back in the day.
Here are two bash scripts which should help figure out the problem. They both send 10 requests for container stats to Docker. One does it in sequence and the other in parallel.
First run the curl command to make sure it returns the stats properly. I'm using container ID 323201cdd8ac because it was in your logs, but swap it out with something else from docker ps if the ID changed.
curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" "http://localhost/containers/323201cdd8ac/stats?stream=0&one-shot=1"
sequence.sh
Save this to sequence.sh, make it executable with chmod +x ./sequence.sh, then run it with time: time ./sequence.sh. It should take 9 seconds to complete.
#!/bin/bash
# Request stats for the same container 10 times, one request after another.
for i in {1..10}
do
  curl -s --unix-socket /var/run/docker.sock -H "Content-Type: application/json" "http://localhost/containers/323201cdd8ac/stats?stream=0&one-shot=1"
done
parallel.sh
Save this to parallel.sh, make it executable with chmod +x ./parallel.sh, then run it with time: time ./parallel.sh. It should take 1 second to complete. Also try changing 10 to 60; it should still take only 1 second.
#!/bin/bash
# Fire all 10 stats requests at once in the background, then wait for them to finish.
for i in {1..10}
do
  curl -s --unix-socket /var/run/docker.sock -H "Content-Type: application/json" "http://localhost/containers/323201cdd8ac/stats?stream=0&one-shot=1" &
done
wait
Please take your time, and I don't need the entire output. Just let me know if you see anything different in the timings.
Thanks!
Same, yeah. Feel free to reach out if you run into trouble; I think the linuxserver.io image is the best for getting it running in Docker since it emulates the old-school desktop app.
curl test
curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" "http://localhost/containers/3e47c2d6a203/stats?stream=0&one-shot=1"
{"read":"2024-10-06T03:05:50.729191617Z","preread":"0001-01-01T00:00:00Z","pids_stats":{},"blkio_stats":{"io_service_bytes_recursive":[],"io_serviced_recursive":[],"io_queue_recursive":[],"io_service_time_recursive":[],"io_wait_time_recursive":[],"io_merged_recursive":[],"io_time_recursive":[],"sectors_recursive":[]},"num_procs":0,"storage_stats":{},"cpu_stats":{"cpu_usage":{"total_usage":109585487,"percpu_usage":[13058467,9094015,68392415,19040590],"usage_in_kernelmode":40000000,"usage_in_usermode":20000000},"system_cpu_usage":515213000000000,"online_cpus":4,"throttling_data":{"periods":0,"throttled_periods":0,"throttled_time":0}},"precpu_stats":{"cpu_usage":{"total_usage":0,"usage_in_kernelmode":0,"usage_in_usermode":0},"throttling_data":{"periods":0,"throttled_periods":0,"throttled_time":0}},"memory_stats":{"usage":180224,"max_usage":11902976,"stats":{"active_anon":61440,"active_file":28672,"cache":61440,"dirty":0,"hierarchical_memory_limit":9223372036854771712,"hierarchical_memsw_limit":9223372036854771712,"inactive_anon":61440,"inactive_file":28672,"mapped_file":28672,"pgfault":4516,"pgmajfault":32,"pgpgin":5151,"pgpgout":5107,"rss":118784,"rss_huge":0,"total_active_anon":61440,"total_active_file":28672,"total_cache":61440,"total_dirty":0,"total_inactive_anon":61440,"total_inactive_file":28672,"total_mapped_file":28672,"total_pgfault":4516,"total_pgmajfault":32,"total_pgpgin":5151,"total_pgpgout":5107,"total_rss":118784,"total_rss_huge":0,"total_unevictable":0,"total_writeback":0,"unevictable":0,"writeback":0},"limit":18702540800},"name":"/it-tools","id":"3e47c2d6a2030d78935c0f11785508d49732cd845715b02a03f09b5c75b2a115","networks":{"eth0":{"rx_bytes":5691558,"rx_packets":16472,"rx_errors":0,"rx_dropped":0,"tx_bytes":0,"tx_packets":0,"tx_errors":0,"tx_dropped":0}}}
I ran them all a few times and was noticing some different results. Hope this helps; let me know if you need more tests.
sequence test
./sequence.sh 0.11s user 0.07s system 0% cpu 4:49.35 total
./sequence.sh 0.10s user 0.07s system 0% cpu 1:13.66 total
./sequence.sh 0.10s user 0.06s system 0% cpu 34.202 total
parallel test 10
./parallel-10.sh 0.10s user 0.06s system 23% cpu 0.667 total
./parallel-10.sh 0.09s user 0.07s system 9% cpu 1.630 total
./parallel-10.sh 0.11s user 0.12s system 2% cpu 10.193 total
parallel test 60:
./parallel-60.sh 0.53s user 0.37s system 126% cpu 0.715 total
./parallel-60.sh 0.58s user 0.33s system 6% cpu 13.480 total
./parallel-60.sh 0.56s user 0.32s system 29% cpu 2.918 total
Thanks for doing that. Really strange results. Not sure how making 10 requests could take almost 5 minutes, or why the timings are so spread out.
My first thought is to check the health of the containers. Maybe one is stuck in a boot loop or has some other issue that is causing problems.
Try running docker ps -a and look for any containers that have been up less than a minute or exited very recently.
Or maybe run ctop and look for anything irregular.
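For example, something like this (standard docker CLI filters; adjust as needed):
# List containers that are not currently running, with how long ago they exited,
# to spot anything stuck in a crash/restart loop.
docker ps -a --filter "status=exited" --filter "status=restarting" --format "table {{.Names}}\t{{.Status}}"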
Thanks for sharing ctop, really great tool I hadn't seen before.
Indeed, I was noticing the past few days that everything was a little slow, I think my disk I/O was overloaded. I've taken significant steps to customize a bunch of my containers to prioritize using RAM instead of writing to disk, as well as reducing writes in general.
Everything feels more stable now, however the docker graphs in Beszel are still full of gaps (more gaps than actual data - Edit: Actually the graphs now are improved, more data than gaps, but still frequent gaps), and I just re-ran the same scripts, here is the trimmed output:
➜ scripts time ./sequence.sh && time ./parallel-10.sh && time ./parallel-60.sh
./sequence.sh 0.11s user 0.06s system 0% cpu 18.826 total
./parallel-10.sh 0.10s user 0.07s system 8% cpu 1.874 total
./parallel-60.sh 0.64s user 0.41s system 51% cpu 2.051 total
Does this look any better?
I increased the timeout to 2100ms in 0.5.3, so that's probably helping as well.
In the next release I'll use two different timeouts depending on version, and bump the older versions to eight seconds or so. That may finally fix it.
When I was testing docker 24, it was in an otherwise empty LXC container, so I was seeing the delay, but it was always consistent. In your case the inconsistency may be due to other programs also accessing the API and creating a queue. Which, again, is a bug that does not happen in 25+.
Just an update after the latest release, unfortunately I'm now seeing gaps in all of the graphs, not just the docker ones, which appears to be a regression.
Will share logs and more details tomorrow when I'm at my computer.
Edit: See attached screenshot + snippet of logs. Let me know if this is helpful or if you need more specific details.
2024/10/18 17:03:04 DEBUG Getting stats
2024/10/18 17:03:04 DEBUG Temperatures data="[{\"sensorKey\":\"coretemp_physical_id_0\",\"temperature\":53,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_0\",\"temperature\":54,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_1\",\"temperature\":54,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_2\",\"temperature\":54,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_3\",\"temperature\":54,\"sensorHigh\":105,\"sensorCritical\":105}]"
2024/10/18 17:03:04 DEBUG System stats data="{Stats:{Cpu:44.92 MaxCpu:0 Mem:17.42 MemUsed:8.9 MemPct:51.08 MemBuffCache:7.28 MemZfsArc:0 Swap:12.45 SwapUsed:7.25 DiskTotal:884.16 DiskUsed:701.73 DiskPct:82.4 DiskReadPs:0.76 DiskWritePs:0 MaxDiskReadPs:0 MaxDiskWritePs:0 NetworkSent:0.59 NetworkRecv:0.34 MaxNetworkSent:0 MaxNetworkRecv:0 Temperatures:map[coretemp_core_0:54 coretemp_core_1:54 coretemp_core_2:54 coretemp_core_3:54 coretemp_physical_id_0:53] ExtraFs:map[]} Info:{Hostname:synology-nas KernelVersion:4.4.302+ Cores:4 Threads:4 CpuModel:Intel(R) Celeron(R) J4125 CPU @ 2.00GHz Uptime:549229 Cpu:44.92 MemPct:51.08 DiskPct:82.4 Bandwidth:0.93 AgentVersion:0.6.0} Containers:[]}"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/8f3d453987b4/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/28b462667af3/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/3e47c2d6a203/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/7ab568441aa3/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/7eecf932ffc5/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/17059c967cf1/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/e7cef2544b94/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/d628e0529a66/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/096aee91e24f/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/a884ce559b64/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/3db2de37576c/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/a756b0cdde39/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/5ba7c0080eb4/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/2ff4ce7efaae/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/2ad5c1d91872/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/fc91903c5374/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/f03cfd7a2311/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/6296f55ae4f6/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/91d136402318/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/f7db9af8030a/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/d6bd606ae562/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/27b6c9da841a/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/e7d69c019b6f/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/5bab40356f4f/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/cda21d2d0084/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/b8c0fb87dafb/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/8f4228c66169/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/6eee6c8bc087/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/f869fea16dfe/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/5042fd9e7bda/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/60487d823eea/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/1d58a1b1f267/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/89f2f4f03582/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/e1b92b414a7a/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/c2da1c406160/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/2f5f42a30f02/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/e0602fed54fc/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/e452c1b75aea/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/58bc8897efa7/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/2c8df102bf4f/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/acdd8397659c/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/fcdc5da9496b/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/fb6b18c6aaa8/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/f0ca5dde925e/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/167fc6857b92/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/8dd388726f77/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/dffca3a1453d/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/14bb90f94a50/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/18b88046d8ef/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/cd264fb41697/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/644f9746d07b/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/54d677e7c4bf/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/c2213d86b19b/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/383eb0dcf350/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/ae7efd051a07/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/6aa341e80e99/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/48ace19fd195/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/de5e84fcb78c/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/43a64ea57a97/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:05 ERROR Error getting container stats err="Get \"http://localhost/containers/640272d0836c/stats?stream=0&one-shot=1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2024/10/18 17:03:18 DEBUG Docker stats data="[0xc000527180 0xc000540a80 0xc000540d20 0xc0005ec150 0xc000540ee0 0xc0005408c0 0xc000402070 0xc000540700 0xc0005412d0 0xc000540460 0xc000402150 0xc0005ec690 0xc000017f10 0xc0005ec1c0 0xc0005ec460 0xc0005ec850 0xc000402230 0xc000402380 0xc000540c40 0xc0005ec930 0xc000016d90 0xc000017030 0xc0005ec000 0xc0005ec3f0 0xc000017570 0xc0005409a0 0xc000402700 0xc0005413b0 0xc0005ec2a0 0xc0005ec5b0 0xc000540b60 0xc000541490 0xc000526e00 0xc0005407e0 0xc000526fc0 0xc000540540 0xc000540850 0xc00050e690 0xc0005405b0 0xc000526d20 0xc000540690 0xc00050eb60 0xc0005ec310 0xc000526150 0xc000402620 0xc0005eca10 0xc000526bd0 0xc000526f50 0xc000526070 0xc000402460 0xc000017ab0 0xc000526b60 0xc0005ec4d0 0xc000540e00 0xc0004028c0 0xc0005410a0 0xc0005ec0e0 0xc0004027e0 0xc000540fc0 0xc0005ec770]"
2024/10/18 17:03:18 DEBUG Extra filesystems data=map[]
2024/10/18 17:03:18 DEBUG Docker stats data="[0xc000541490 0xc000526e00 0xc0005407e0 0xc000526fc0 0xc000540540 0xc0005ec5b0 0xc000540b60 0xc00050e690 0xc0005405b0 0xc000526d20 0xc000540690 0xc000540850 0xc00050eb60 0xc0005ec310 0xc000526150 0xc000402620 0xc000526bd0 0xc000526f50 0xc000526070 0xc000402460 0xc0005eca10 0xc000017ab0 0xc000526b60 0xc0005ec4d0 0xc000540e00 0xc0004028c0 0xc0005410a0 0xc0005ec0e0 0xc0004027e0 0xc000540fc0 0xc0005ec770 0xc000527180 0xc000540a80 0xc000540d20 0xc0005ec150 0xc000540ee0 0xc0005408c0 0xc000402070 0xc000540700 0xc0005412d0 0xc000540460 0xc000402150 0xc0005ec690 0xc000017f10 0xc0005ec1c0 0xc0005ec460 0xc0005ec850 0xc000540c40 0xc0005ec930 0xc000016d90 0xc000017030 0xc0005ec000 0xc0005ec3f0 0xc000402230 0xc000402380 0xc000017570 0xc0005409a0 0xc000402700 0xc0005413b0 0xc0005ec2a0]"
2024/10/18 17:03:18 DEBUG Extra filesystems data=map[]
2024/10/18 17:03:18 DEBUG Docker stats data="[0xc0005410a0 0xc000540fc0 0xc0005ec770 0xc0005ec0e0 0xc0004027e0 0xc000540d20 0xc000527180 0xc000540a80 0xc0005ec150 0xc000540ee0 0xc000540700 0xc0005412d0 0xc0005408c0 0xc000402070 0xc0005ec690 0xc000540460 0xc000402150 0xc0005ec460 0xc0005ec850 0xc000017f10 0xc0005ec1c0 0xc0005ec000 0xc0005ec3f0 0xc000402230 0xc000402380 0xc000540c40 0xc0005ec930 0xc000016d90 0xc000017030 0xc000402700 0xc0005413b0 0xc000017570 0xc0005409a0 0xc0005ec2a0 0xc000526fc0 0xc000540540 0xc0005ec5b0 0xc000540b60 0xc000541490 0xc000526e00 0xc0005407e0 0xc000526d20 0xc000540690 0xc000540850 0xc00050e690 0xc0005405b0 0xc000526150 0xc000402620 0xc00050eb60 0xc0005ec310 0xc000526070 0xc000402460 0xc0005eca10 0xc000526bd0 0xc000526f50 0xc000017ab0 0xc000540e00 0xc0004028c0 0xc000526b60 0xc0005ec4d0]"
2024/10/18 17:03:18 DEBUG Extra filesystems data=map[]
2024/10/18 17:03:18 DEBUG Docker stats data="[0xc000017f10 0xc0005ec1c0 0xc0005ec460 0xc0005ec850 0xc000402230 0xc000402380 0xc000540c40 0xc0005ec930 0xc000016d90 0xc000017030 0xc0005ec000 0xc0005ec3f0 0xc000017570 0xc0005409a0 0xc000402700 0xc0005413b0 0xc0005ec2a0 0xc0005ec5b0 0xc000540b60 0xc000541490 0xc000526e00 0xc0005407e0 0xc000526fc0 0xc000540540 0xc000540850 0xc00050e690 0xc0005405b0 0xc000526d20 0xc000540690 0xc00050eb60 0xc0005ec310 0xc000526150 0xc000402620 0xc0005eca10 0xc000526bd0 0xc000526f50 0xc000526070 0xc000402460 0xc000017ab0 0xc000526b60 0xc0005ec4d0 0xc000540e00 0xc0004028c0 0xc0005410a0 0xc0005ec0e0 0xc0004027e0 0xc000540fc0 0xc0005ec770 0xc000527180 0xc000540a80 0xc000540d20 0xc0005ec150 0xc000540ee0 0xc0005408c0 0xc000402070 0xc000540700 0xc0005412d0 0xc000540460 0xc000402150 0xc0005ec690]"
2024/10/18 17:03:18 DEBUG Extra filesystems data=map[]
2024/10/18 17:03:18 DEBUG Docker stats data="[0xc000527180 0xc000540a80 0xc000540d20 0xc0005ec150 0xc000540ee0 0xc0005412d0 0xc0005408c0 0xc000402070 0xc000540700 0xc000540460 0xc000402150 0xc0005ec690 0xc0005ec850 0xc000017f10 0xc0005ec1c0 0xc0005ec460 0xc0005ec3f0 0xc000402230 0xc000402380 0xc000540c40 0xc0005ec930 0xc000016d90 0xc000017030 0xc0005ec000 0xc0005413b0 0xc000017570 0xc0005409a0 0xc000402700 0xc0005ec2a0 0xc000540540 0xc0005ec5b0 0xc000540b60 0xc000541490 0xc000526e00 0xc0005407e0 0xc000526fc0 0xc000540690 0xc000540850 0xc00050e690 0xc0005405b0 0xc000526d20 0xc000402620 0xc00050eb60 0xc0005ec310 0xc000526150 0xc000402460 0xc0005eca10 0xc000526bd0 0xc000526f50 0xc000526070 0xc000017ab0 0xc0004028c0 0xc000526b60 0xc0005ec4d0 0xc000540e00 0xc0005410a0 0xc0005ec770 0xc0005ec0e0 0xc0004027e0 0xc000540fc0]"
2024/10/18 17:03:18 DEBUG Extra filesystems data=map[]
2024/10/18 17:04:19 DEBUG Getting stats
2024/10/18 17:04:19 DEBUG Temperatures data="[{\"sensorKey\":\"coretemp_physical_id_0\",\"temperature\":58,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_0\",\"temperature\":58,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_1\",\"temperature\":58,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_2\",\"temperature\":58,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_3\",\"temperature\":58,\"sensorHigh\":105,\"sensorCritical\":105}]"
2024/10/18 17:04:19 DEBUG System stats data="{Stats:{Cpu:64.34 MaxCpu:0 Mem:17.42 MemUsed:8.77 MemPct:50.33 MemBuffCache:6.62 MemZfsArc:0 Swap:12.45 SwapUsed:7.32 DiskTotal:884.16 DiskUsed:701.78 DiskPct:82.41 DiskReadPs:1.69 DiskWritePs:0 MaxDiskReadPs:0 MaxDiskWritePs:0 NetworkSent:0.77 NetworkRecv:0.42 MaxNetworkSent:0 MaxNetworkRecv:0 Temperatures:map[coretemp_core_0:58 coretemp_core_1:58 coretemp_core_2:58 coretemp_core_3:58 coretemp_physical_id_0:58] ExtraFs:map[]} Info:{Hostname:synology-nas KernelVersion:4.4.302+ Cores:4 Threads:4 CpuModel:Intel(R) Celeron(R) J4125 CPU @ 2.00GHz Uptime:549304 Cpu:64.34 MemPct:50.33 DiskPct:82.41 Bandwidth:1.19 AgentVersion:0.6.0} Containers:[]}"
2024/10/18 17:04:19 DEBUG Docker stats data="[0xc0005408c0 0xc000402070 0xc000540700 0xc0005412d0 0xc000540460 0xc000402150 0xc0005ec690 0xc000017f10 0xc0005ec1c0 0xc0005ec460 0xc0005ec850 0xc000402230 0xc000402380 0xc000540c40 0xc0005ec930 0xc000016d90 0xc000017030 0xc0005ec000 0xc0005ec3f0 0xc000017570 0xc0005409a0 0xc000402700 0xc0005413b0 0xc0005ec2a0 0xc0005ec5b0 0xc000540b60 0xc000541490 0xc000526e00 0xc0005407e0 0xc000526fc0 0xc000540540 0xc000540850 0xc00050e690 0xc0005405b0 0xc000526d20 0xc000540690 0xc00050eb60 0xc0005ec310 0xc000526150 0xc000402620 0xc0005eca10 0xc000526bd0 0xc000526f50 0xc000526070 0xc000402460 0xc000017ab0 0xc000526b60 0xc0005ec4d0 0xc000540e00 0xc0004028c0 0xc0005410a0 0xc0005ec0e0 0xc0004027e0 0xc000540fc0 0xc0005ec770 0xc000527180 0xc000540a80 0xc000540d20 0xc0005ec150 0xc000540ee0]"
2024/10/18 17:04:19 DEBUG Extra filesystems data=map[]
2024/10/18 17:05:19 DEBUG Getting stats
2024/10/18 17:05:19 DEBUG Temperatures data="[{\"sensorKey\":\"coretemp_physical_id_0\",\"temperature\":58,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_0\",\"temperature\":58,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_1\",\"temperature\":58,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_2\",\"temperature\":58,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_3\",\"temperature\":58,\"sensorHigh\":105,\"sensorCritical\":105}]"
2024/10/18 17:05:19 DEBUG System stats data="{Stats:{Cpu:60.12 MaxCpu:0 Mem:17.42 MemUsed:8.8 MemPct:50.54 MemBuffCache:7.29 MemZfsArc:0 Swap:12.45 SwapUsed:7.24 DiskTotal:884.16 DiskUsed:701.79 DiskPct:82.41 DiskReadPs:1.27 DiskWritePs:0 MaxDiskReadPs:0 MaxDiskWritePs:0 NetworkSent:1.44 NetworkRecv:0.61 MaxNetworkSent:0 MaxNetworkRecv:0 Temperatures:map[coretemp_core_0:58 coretemp_core_1:58 coretemp_core_2:58 coretemp_core_3:58 coretemp_physical_id_0:58] ExtraFs:map[]} Info:{Hostname:synology-nas KernelVersion:4.4.302+ Cores:4 Threads:4 CpuModel:Intel(R) Celeron(R) J4125 CPU @ 2.00GHz Uptime:549364 Cpu:60.12 MemPct:50.54 DiskPct:82.41 Bandwidth:2.05 AgentVersion:0.6.0} Containers:[]}"
2024/10/18 17:05:19 DEBUG Docker stats data="[0xc000017570 0xc0005409a0 0xc000402700 0xc0005413b0 0xc0005ec2a0 0xc000526e00 0xc0005407e0 0xc000526fc0 0xc000540540 0xc0005ec5b0 0xc000540b60 0xc000541490 0xc00050e690 0xc0005405b0 0xc000526d20 0xc000540690 0xc000540850 0xc00050eb60 0xc0005ec310 0xc000526150 0xc000402620 0xc000526bd0 0xc000526f50 0xc000526070 0xc000402460 0xc0005eca10 0xc000017ab0 0xc000526b60 0xc0005ec4d0 0xc000540e00 0xc0004028c0 0xc0005410a0 0xc0005ec0e0 0xc0004027e0 0xc000540fc0 0xc0005ec770 0xc000527180 0xc000540a80 0xc000540d20 0xc0005ec150 0xc000540ee0 0xc0005408c0 0xc000402070 0xc000540700 0xc0005412d0 0xc000540460 0xc000402150 0xc0005ec690 0xc000017f10 0xc0005ec1c0 0xc0005ec460 0xc0005ec850 0xc0005ec930 0xc000016d90 0xc000017030 0xc0005ec000 0xc0005ec3f0 0xc000402230 0xc000402380 0xc000540c40]"
2024/10/18 17:05:19 DEBUG Extra filesystems data=map[]
2024/10/18 17:06:19 DEBUG Getting stats
2024/10/18 17:06:19 DEBUG Temperatures data="[{\"sensorKey\":\"coretemp_physical_id_0\",\"temperature\":60,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_0\",\"temperature\":60,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_1\",\"temperature\":60,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_2\",\"temperature\":60,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_3\",\"temperature\":60,\"sensorHigh\":105,\"sensorCritical\":105}]"
2024/10/18 17:06:19 DEBUG System stats data="{Stats:{Cpu:63.57 MaxCpu:0 Mem:17.42 MemUsed:8.81 MemPct:50.56 MemBuffCache:7.29 MemZfsArc:0 Swap:12.45 SwapUsed:7.25 DiskTotal:884.16 DiskUsed:701.81 DiskPct:82.41 DiskReadPs:1.36 DiskWritePs:0 MaxDiskReadPs:0 MaxDiskWritePs:0 NetworkSent:1.67 NetworkRecv:1.05 MaxNetworkSent:0 MaxNetworkRecv:0 Temperatures:map[coretemp_core_0:60 coretemp_core_1:60 coretemp_core_2:60 coretemp_core_3:60 coretemp_physical_id_0:60] ExtraFs:map[]} Info:{Hostname:synology-nas KernelVersion:4.4.302+ Cores:4 Threads:4 CpuModel:Intel(R) Celeron(R) J4125 CPU @ 2.00GHz Uptime:549424 Cpu:63.57 MemPct:50.56 DiskPct:82.41 Bandwidth:2.72 AgentVersion:0.6.0} Containers:[]}"
2024/10/18 17:06:20 DEBUG Docker stats data="[0xc0005ec2a0 0xc0005ec5b0 0xc000540b60 0xc000541490 0xc000526e00 0xc0005407e0 0xc000526fc0 0xc000540540 0xc000540850 0xc00050e690 0xc0005405b0 0xc000526d20 0xc000540690 0xc00050eb60 0xc0005ec310 0xc000526150 0xc000402620 0xc0005eca10 0xc000526bd0 0xc000526f50 0xc000526070 0xc000402460 0xc000017ab0 0xc000526b60 0xc0005ec4d0 0xc000540e00 0xc0004028c0 0xc0005410a0 0xc0005ec0e0 0xc0004027e0 0xc000540fc0 0xc0005ec770 0xc000527180 0xc000540a80 0xc000540d20 0xc0005ec150 0xc000540ee0 0xc0005408c0 0xc000402070 0xc000540700 0xc0005412d0 0xc000540460 0xc000402150 0xc0005ec690 0xc000017f10 0xc0005ec1c0 0xc0005ec460 0xc0005ec850 0xc000402230 0xc000402380 0xc000540c40 0xc0005ec930 0xc000016d90 0xc000017030 0xc0005ec000 0xc0005ec3f0 0xc000017570 0xc0005409a0 0xc000402700 0xc0005413b0]"
2024/10/18 17:06:20 DEBUG Extra filesystems data=map[]
2024/10/18 17:07:19 DEBUG Getting stats
2024/10/18 17:07:19 DEBUG Temperatures data="[{\"sensorKey\":\"coretemp_physical_id_0\",\"temperature\":59,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_0\",\"temperature\":59,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_1\",\"temperature\":59,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_2\",\"temperature\":59,\"sensorHigh\":105,\"sensorCritical\":105} {\"sensorKey\":\"coretemp_core_3\",\"temperature\":58,\"sensorHigh\":105,\"sensorCritical\":105}]"
2024/10/18 17:07:19 DEBUG System stats data="{Stats:{Cpu:66.06 MaxCpu:0 Mem:17.42 MemUsed:9.03 MemPct:51.82 MemBuffCache:6.57 MemZfsArc:0 Swap:12.45 SwapUsed:7.29 DiskTotal:884.16 DiskUsed:701.83 DiskPct:82.41 DiskReadPs:5.92 DiskWritePs:0 MaxDiskReadPs:0 MaxDiskWritePs:0 NetworkSent:10.25 NetworkRecv:0.84 MaxNetworkSent:0 MaxNetworkRecv:0 Temperatures:map[coretemp_core_0:59 coretemp_core_1:59 coretemp_core_2:59 coretemp_core_3:58 coretemp_physical_id_0:59] ExtraFs:map[]} Info:{Hostname:synology-nas KernelVersion:4.4.302+ Cores:4 Threads:4 CpuModel:Intel(R) Celeron(R) J4125 CPU @ 2.00GHz Uptime:549484 Cpu:66.06 MemPct:51.82 DiskPct:82.41 Bandwidth:11.09 AgentVersion:0.6.0} Containers:[]}"
2024/10/18 17:07:20 DEBUG Docker stats data="[0xc000526bd0 0xc000526f50 0xc000526070 0xc000402460 0xc0005eca10 0xc000017ab0 0xc000526b60 0xc0005ec4d0 0xc000540e00 0xc0004028c0 0xc0005410a0 0xc0005ec0e0 0xc0004027e0 0xc000540fc0 0xc0005ec770 0xc000527180 0xc000540a80 0xc000540d20 0xc0005ec150 0xc000540ee0 0xc0005408c0 0xc000402070 0xc000540700 0xc0005412d0 0xc000540460 0xc000402150 0xc0005ec690 0xc000017f10 0xc0005ec1c0 0xc0005ec460 0xc0005ec850 0xc000016d90 0xc000017030 0xc0005ec000 0xc0005ec3f0 0xc000402230 0xc000402380 0xc000540c40 0xc0005ec930 0xc000017570 0xc0005409a0 0xc000402700 0xc0005413b0 0xc0005ec2a0 0xc000526e00 0xc0005407e0 0xc000526fc0 0xc000540540 0xc0005ec5b0 0xc000540b60 0xc000541490 0xc00050e690 0xc0005405b0 0xc000526d20 0xc000540690 0xc000540850 0xc00050eb60 0xc0005ec310 0xc000526150 0xc000402620]"
2024/10/18 17:07:20 DEBUG Extra filesystems data=map[]
I added an env var DOCKER_TIMEOUT in 0.6.1. You can try tweaking this to see if you can find a sweet spot where it returns data more reliably. Use values like 4s or 3200ms.
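Roughly like this, assuming a docker run based setup with a container named beszel-agent (keep your other flags as they are; they're omitted here for brevity):
# Recreate the agent with a longer Docker API timeout.
docker rm -f beszel-agent
docker run -d --name beszel-agent \
  -e DOCKER_TIMEOUT=4s \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  henrygd/beszel-agent:latest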
It looks like you're getting queue pileups that overlap the time windows of Beszel's requests.
I just thought of one other thing to try that I'll put in the next release.
Cool, thanks. I've started playing around with this but so far haven't noticed a big difference. I assume there are diminishing returns from increasing the timeout, but I'm not quite sure how to think about it. I've tried 15s and 30s, but both still had gaps (now in all graphs as mentioned previously, not just the docker graphs).
Let me know if you have any more success with 0.6.2. :crossed_fingers:
I'd also recommend pausing the system for a bit in case the API queue is really jammed up.
If you want to try tweaking DOCKER_TIMEOUT again, your best bet is probably something between 2s and 6s.
0.6.2 does seem better, although I paused for a bit to clear the queue, hadn't thought about that. I also reduced the timeout to 5s, and now the regular graphs don't have gaps, but the docker graphs still have occasional ones.
Not sure which is the primary improvement (maybe a combination). It's definitely much more usable now, but not 100% resolved; this may be the limit of what we can reach on docker v24?
Edit: Spoke too soon, still seeing gaps in both graphs, and many more now in docker.
Try dropping DOCKER_TIMEOUT down to 3s and comment out LOG_LEVEL=debug if you have it active.
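If it's useful, here's one way to double check what the running agent actually has set (the container name beszel-agent is just an example here):
# Print the env vars of the running agent container and confirm DOCKER_TIMEOUT
# is 3s and LOG_LEVEL no longer appears.
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' beszel-agent | grep -E 'DOCKER_TIMEOUT|LOG_LEVEL'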
Let me know if you still have gaps after that.
I have a couple other things to try.
Done, and I did have debug logging enabled which I removed.
That seemingly helped, but still not in the clear totally. Let me know if you have other suggestions.
Thanks for being so diligent with this.
No worries, let me know if you start getting gaps in the non-docker metrics again. Or if anything else changes in a significant way.
I'll continue tweaking things on this end.
Still seeing occasional gaps in non-docker metrics and docker metrics, the latter being much more common.
Overall it's definitely much more stable, and may be related to the I/O bottlenecks on my system discussed previously (which I may explore continuing to optimize).
Thanks for the update. I'll have another possible fix in the next release.
I think the most likely explanation is that another service you host is also requesting info from Docker.
Every minute, the agent asks Docker for stats. Sometimes the timing is bad and it hits a queue created by that other service.
If this is the case, it may be impossible to fully fix, but we can try to make it as stable as possible.
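A quick way to spot likely culprits is to scan your containers for well-known Docker API consumers (the names in the grep are just common examples, not a complete list):
# Portainer, Watchtower, cAdvisor, Dozzle and similar tools all poll the Docker
# API and can contribute to the queue the agent is competing with.
docker ps --format '{{.Names}}\t{{.Image}}' | grep -iE 'portainer|watchtower|cadvisor|dozzle'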
@nathang21 I decided not to include the tweaks for old docker versions in the 0.7.0 release because I got sidetracked with localization and didn't have time to test it as much as I wanted to.
If you want to test manually, you can do this:
In your compose file, swap the published image for the locally loaded test build:
# image: "henrygd/beszel-agent:latest"
image: "beszel-agent:latest"
Then download the test image, load it, and recreate the agent:
wget https://henrygd-assets.b-cdn.net/beszel/bin/beszel-agent-image.zst
docker load -i ./beszel-agent-image.zst
docker compose up -d
This may also be a way to test different things in a more controlled way, rather than including the changes in every new release.
Sounds like a good plan, thanks for all the effort on this 🙏.
FWIW, just ran the above commands; doesn't seem to be a major difference so far:
Hello,
Thank you for this nice tool. Unfortunately, I have some containers (created through Portainer) that are not continuously showing in the graphs. For example, check OpenProject, which is continuously running with no issues, but it disappears from the graphs for some time.