Closed: ndipebot closed this 1 year ago
This issue is currently awaiting triage.
If usage-metrics-collector contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
Hi @ndipebot. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the `ok-to-test` label.
I understand the commands that are listed here.
/ok-to-test
/retest-required
/test pull-usage-metrics-collector-verify
/ok-to-test
/test pull-usage-metrics-collector-test
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: Dbz, ndipebot, pwittrock
The full list of commands accepted by this bot can be found here.
The pull request process is described here.
/lgtm
- `nr_periods`: number of periods in which any thread in the cgroup was runnable
- `nr_throttled`: number of runnable periods in which the application used its entire quota and was throttled
- `oom_kill`: OOM kill counter

`nr_periods` and `nr_throttled` were computed as rates, `cpuPeriodsSec` and `cpuThrottledPeriodsSec` respectively. This makes it easier to do aggregation at different levels, and the throttle percentage can be calculated using:

$$ throttle\,\% = {cpuThrottledPeriodsSec \over cpuPeriodsSec} $$
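The rate computation above can be sketched as follows. This is a minimal illustration, not the collector's actual implementation: the counter values, the 60-second window, and the variable names (other than the metric names defined above) are hypothetical.

```go
package main

import "fmt"

func main() {
	// Two hypothetical samples of the cgroup cpu.stat counters,
	// taken windowSec seconds apart.
	const windowSec = 60.0
	prev := map[string]float64{"nr_periods": 1000, "nr_throttled": 100}
	curr := map[string]float64{"nr_periods": 1600, "nr_throttled": 250}

	// Convert the monotonically increasing counters into per-second rates.
	cpuPeriodsSec := (curr["nr_periods"] - prev["nr_periods"]) / windowSec
	cpuThrottledPeriodsSec := (curr["nr_throttled"] - prev["nr_throttled"]) / windowSec

	// Throttle percentage: fraction of runnable periods that were throttled.
	throttlePct := 100 * cpuThrottledPeriodsSec / cpuPeriodsSec

	fmt.Printf("cpuPeriodsSec=%.2f cpuThrottledPeriodsSec=%.2f throttle%%=%.1f\n",
		cpuPeriodsSec, cpuThrottledPeriodsSec, throttlePct)
	// prints: cpuPeriodsSec=10.00 cpuThrottledPeriodsSec=2.50 throttle%=25.0
}
```

Because both metrics are rates over the same window, they can be summed across containers or nodes first and divided afterwards, which is what makes aggregation at different levels straightforward.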