Closed: sempi closed this issue 4 years ago.
See https://github.com/soyuka/pidusage/issues/58 I think that this is related.
The issue is not that it reports 0-1200% for a 12-core system; the issue is that it reports more than 1200% on a 12-core system. Issue #58 primarily discusses the difference between reporting 100% and 1200% usage for an example 12-core system.
In our case the reading is not consistent. For example, a multi-threaded program that uses the equivalent of 8 cores is typically reported as 800%. However, every once in a while we get a reading of 3600%, which is physically impossible, so there must be an issue with the approach, e.g. a timestamp not accurately matching its reading. The pidusage return values are 800%, 800%, 3600%, 600%, 800% when we poll every 2 seconds. Because we poll at 2-second intervals, it is unlikely to be an observation-interval issue.
Interesting, could you try forcing the ps method instead of the procfiles one? (see https://github.com/soyuka/pidusage/blob/master/lib/stats.js#L11)
It can be a workaround until I can investigate what's wrong with the procfiles interpretation.
Having a similar issue on Windows. Some things I have figured out.
Adding

```js
const {exec} = require('child_process');
exec('wmic process where "ProcessId=' + process.pid + '" CALL setpriority 256');
```

to the beginning of the monitor process also improved results.

Interesting findings. I wish we could have another API than wmic to get this information, though, as wmic consumes lots of resources.
Sometimes the CPU usage is greater than 100% * vCPU count. E.g. a 12-vCPU instance reports 3600% usage.
We use a 2-second poll interval to invoke pidusage (in part to avoid issues with small observation intervals). Every once in a while, reproducibly across a large number of instances, pidusage reports a CPU usage that is larger than possible given the number of vCPUs on the system.
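One way such a reading can arise (speculation, consistent with the timestamp-mismatch theory above): CPU% is computed as 100 * (CPU-time delta) / (wall-clock delta), so if the wall-clock delta recorded for a sample is much smaller than the real interval, the percentage blows up even though the CPU-time delta is correct:

```javascript
// CPU% = 100 * (cpu-time delta) / (wall-clock delta).
function cpuPercent(cpuDeltaMs, elapsedMs) {
  return 100 * cpuDeltaMs / elapsedMs;
}

// 8 cores busy for a 2 s window: 16 000 ms of CPU time.
const correct = cpuPercent(16000, 2000); // 800

// The same CPU-time delta paired with a stale timestamp that makes
// the window look like ~444 ms yields the impossible spike (~3600).
const skewed = cpuPercent(16000, 444);
```

This would explain why the spikes appear even with a generous 2-second polling interval: the bad denominator comes from the sampling code, not from the interval length.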
OS: virtualized Debian
Platform: GCP