Open cdalexndr opened 4 years ago
This is expected behavior on multi-core host
I'm having a similar issue with GeoServer 2.14.3, but I can't figure it out.
This is not an issue. If you have N CPU cores, the CPU usage can be up to N * 100%.
I don't think there is a system monitor tool (Linux or Windows) that shows CPU usage above 100%. Conceptually, 100 percent means the maximum value.
The main issue is that GeoServer is consuming all of these resources when nothing is being processed at all, not even access through the web. Shouldn't GeoServer in its idle state just be consuming memory and not a lot of CPU?!
> I don't think there is a system monitor tool (Linux or Windows) that shows CPU usage above 100%.

`systemd-cgtop` does.
> Shouldn't GeoServer in its idle state just be consuming memory and not a lot of CPU?!

That sounds like a GeoServer issue.
CPU % is calculated using deltas of `total_usage`, as per https://github.com/docker/cli/blob/6c12a82f330675d4e2cfff4f8b89a353bcb1fecd/cli/command/container/stats_helpers.go#L180

Here the ratio is multiplied by the number of CPUs. However, `cpuDelta` already includes usage across all CPUs (see below). Adding all values in `percpu_usage` gives `total_usage` (which is used to derive `cpuDelta`).
"cpu_stats": {
"cpu_usage": {
"percpu_usage": [
826860687,
830807540,
823365887,
844077056
],
"total_usage": 3325111170,
"usage_in_kernelmode": 1620000000,
"usage_in_usermode": 1600000000
},
"online_cpus": 4,
"system_cpu_usage": 35595977360000000,
"throttling_data": {
"periods": 0,
"throttled_periods": 0,
"throttled_time": 0
}
},
So what is the reason to multiply by the number of CPUs? It seems unnecessary, because `total_usage` already accounts for all of them.
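For reference, the calculation in the linked stats_helpers.go can be sketched like this (a Python re-implementation for illustration, not the exact code; variable names loosely follow the Go source). It shows why a container saturating all cores reports N * 100% rather than 100%:

```python
def cpu_percent(prev_total, total, prev_system, system, online_cpus):
    """Sketch of calculateCPUPercentUnix from docker/cli stats_helpers.go."""
    cpu_delta = total - prev_total        # container delta, summed over all cores
    system_delta = system - prev_system   # host delta, also summed over all cores
    if cpu_delta > 0 and system_delta > 0:
        return (cpu_delta / system_delta) * online_cpus * 100.0
    return 0.0

# A container saturating all 4 cores on an otherwise idle host:
# cpu_delta == system_delta, so the formula reports 400%, not 100%.
print(cpu_percent(0, 4_000_000_000, 0, 4_000_000_000, 4))  # 400.0
```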
I found and resolved the issue. I had the health check in my docker-compose.yml configured to run every 2 seconds. I changed it to 3 minutes and now everything is fine.
Ok, but the logic above doesn't seem right. @AkihiroSuda could you please review? I couldn't find anything in git blame.
Just adding that this may not be related to the original issue - I can open a new issue if required.
It looks strange to me, as I had to google why my containers go above 100% and then do the math with core counts. Maybe it would be better to show two things: an overall CPU usage capped at 100%, and a separate load-like value per core.
We're trying to figure out if our container is using CPUs in a healthy way. Can someone clarify how we can do this on a multicore machine? For instance, if I'm understanding correctly, if you have 4 cores and 100% CPU usage, then that's either the 4 cores running at 25% each OR 1 core running at 100%? The former seems "healthy" while the latter is potentially a problem (at least in our use case).
@frankandrobot you can connect to the API endpoint to get the raw information: https://docs.docker.com/engine/api/v1.41/#operation/ContainerStats
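To distinguish "4 cores at 25%" from "1 core at 100%", you can take two samples of `cpu_stats` from that endpoint and break the usage down per core from the `percpu_usage` deltas (where the daemon reports them, i.e. on cgroup v1). A rough sketch with made-up sample numbers:

```python
def per_core_percent(prev, cur, prev_system, system):
    """Per-core CPU % from two percpu_usage samples.

    system_cpu_usage covers all cores, so each core's "budget" is
    system_delta / n_cores; a single fully busy core shows ~100%.
    """
    n = len(cur)
    per_core_budget = (system - prev_system) / n
    return [100.0 * (c - p) / per_core_budget for p, c in zip(prev, cur)]

# One core fully busy, three idle, over a 1s window on a 4-core host
# (system_cpu_usage advances by 4s of CPU time in total):
prev = [0, 0, 0, 0]
cur = [1_000_000_000, 0, 0, 0]
print(per_core_percent(prev, cur, 0, 4_000_000_000))
# -> [100.0, 0.0, 0.0, 0.0]: one hot core, not four cores at 25% each
```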
As I understand, `docker stats` gives the CPU percentage relative to the allocated CPU resources (i.e. `cpu-shares`). But here, `cpu-shares` itself is a relative value used for scheduling CPU time between different containers.
Therefore, we can't directly get an absolute measure of CPU utilization (i.e., how much of the host's CPU capacity a container is using) from docker stats. Instead, it shows us how much of the container's allocated CPU resources are being used.
Description: `docker stats` CPU shows values above 100%.

Steps to reproduce the issue:

Describe the results you received: CPU column shows values above 100% (110%, 250%...)

Describe the results you expected: CPU column values should be normalized to 100%. Conceptually, the header CPU % means a maximum of 100%.

Additional information you deem important (e.g. issue happens only occasionally):

Output of `docker version`:

Output of `docker info`:

Additional environment details (AWS, VirtualBox, physical, etc.): Using a 4-core CPU.