clux opened this issue 6 months ago
This issue is currently awaiting triage.
SIG CLI takes the lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Did some digging and I can indeed find one pod whose requests/limits for cpu/mem respectively in the same columns show:
ns one-pod-xxxx 151m (0%) 4500m (28%) 195454566400m (0%)
and searching through deployments I did find this request for memory 🙃
limits:
  cpu: "4"
  memory: 4Gi
requests:
  cpu: 1m
  memory: 107374182400m
so in short, possibly my fault; not sure how this got there.
on the other hand, it's reasonable to want a human-readable number on a human-readable page?
oh, wait, i see now, in my root yaml it says:

requests:
  cpu: 1m
  memory: 0.1Gi

which means 0.1Gi of memory gets converted to milli-bytes, because 0.1 * 2^30 is not a whole number of bytes:

> 0.1*1024*1024*1024*1000
107374182400

and this sub-byte number is then probably propagating through internal calculations 🙃
so this does not feel like a huge user error (0.1Gi feels like a reasonable way to represent memory), but it leads to hard-to-read kubectl describe output. maybe kubectl describe should use megabytes as the smallest default unit for memory, perhaps?
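For reference, this conversion is easy to reproduce with the k8s.io/apimachinery/pkg/api/resource package that Kubernetes uses for these fields; a minimal sketch (not tied to any particular kubectl code path):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// 0.1Gi = 0.1 * 2^30 = 107374182.4 bytes. That is not a whole number
	// of bytes, so the canonical string falls back to milli-units.
	q := resource.MustParse("0.1Gi")
	fmt.Println(q.String()) // 107374182400m
	fmt.Println(q.Value())  // 107374183 (rounded up to whole bytes)
}
```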
I see the same thing, and also get a warning. Here is what I did:
$ kubectl apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - image: registry.k8s.io/pause
    name: pause
    resources:
      requests:
        cpu: 1m
        memory: 0.1Gi
EOF
Warning: spec.containers[0].resources.requests[memory]: fractional byte value "107374182400m" is invalid, must be an integer
pod/foo created
$ kubectl get pod foo -o json | jq .spec.containers[0].resources
{
  "requests": {
    "cpu": "1m",
    "memory": "107374182400m"
  }
}
Are you able to use G instead of Gi?
$ kubectl apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - image: registry.k8s.io/pause
    name: pause
    resources:
      requests:
        cpu: 1m
        memory: 0.1G
EOF
pod/foo created
$ kubectl get pod foo -o json | jq .spec.containers[0].resources
{
  "requests": {
    "cpu": "1m",
    "memory": "100M"
  }
}
Regardless, it seems like a pretty bad way to show the node resource usage when you do kubectl describe node.
I'm not sure if that is kubectl formatting the quantity or if it comes from the API already formatted 👀
The warning is probably missed by most people who are using an automated CI environment; no one sees it unless it prevents the manifest from being applied.
> or if it comes from the API already formatted
ran kubectl get thatpod -v=15 and can see it is initially formatted in the response body line:
61421 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1",
...
"requests":{"cpu":"1m","ephemeral-storage":"500Mi","memory":"107374182400m"}},"
...
but this is also not meant to be human readable, i guess?
It looks like requests/limits are parsed into a Quantity type.
From there, it has some functions that look like they support different "scales", allowing conversion between different units.
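As a quick illustration of those scale helpers (a sketch, not actual kubectl code), a Quantity can be converted and rounded across decimal scales:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	q := resource.MustParse("107374182400m")
	// ScaledValue returns ceil(q / 10^scale), i.e. it rounds up.
	fmt.Println(q.ScaledValue(resource.Milli)) // 107374182400 (milli-bytes)
	fmt.Println(q.ScaledValue(0))              // 107374183 (whole bytes)
	fmt.Println(q.ScaledValue(resource.Mega))  // 108 (decimal megabytes)
}
```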
For kubectl describe pod, the code to print resource limits and requests is here (there is similar code for kubectl describe node in this same file):
https://github.com/brianpursley/kubernetes/blob/9d945ba5a520438ac8cf7a77200ae6a8d2d8bd4b/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L1881-L1898
So there should be the ability to detect and handle the display format, if it can be decided how it should be done. Is it just detecting when memory is fractional and rounding up to bytes? So in the case of 107374182400m, that becomes 107374183.
I suppose it could also print a warning in the describe output saying that a fractional byte quantity was detected.
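A hypothetical helper along those lines (the name describeMemory is made up here; this is not the actual describe.go code) might look like:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// describeMemory renders a memory quantity for display, rounding
// fractional-byte values up to whole bytes first.
func describeMemory(q resource.Quantity) string {
	// A quantity has fractional bytes iff its milli-value is not a multiple
	// of 1000 (MilliValue can overflow for huge values; ignored here).
	if q.MilliValue()%1000 != 0 {
		return resource.NewQuantity(q.Value(), resource.BinarySI).String() // Value() rounds up
	}
	return q.String()
}

func main() {
	fmt.Println(describeMemory(resource.MustParse("107374182400m"))) // 107374183
	fmt.Println(describeMemory(resource.MustParse("4Gi")))           // 4Gi
}
```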
Ideally, resources should not specify memory or storage quantities this way at all, since there isn't really any meaning to a fractional byte.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
> Ideally, resources should not specify memory or storage quantities this way at all, since there isn't really any meaning to a fractional byte.
We could emit a warning at admission time when this is detected. And then either round up or leave it as-is.
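For the round-up option, Quantity already exposes a RoundUp method; a minimal sketch of rounding a parsed value up to whole bytes (not actual admission code):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	q := resource.MustParse("107374182400m")
	// RoundUp raises the quantity to the given minimum scale (0 = whole
	// bytes), rounding up; it returns false if precision was lost.
	exact := q.RoundUp(0)
	fmt.Println(q.String(), exact) // 107374183 false
}
```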
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/assign
I want to take up this issue
I’ve implemented a feature that rounds memory requests to the nearest unit, providing human-readable output. For example, if 0.1Gi is specified as a memory request, it’s rounded to 102Mi in the output. This change ensures consistency and prevents confusion with milli-units (m) for memory by converting values directly into familiar units like Mi or Gi. This makes the allocation much easier to interpret in kubectl describe output.
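As an illustrative sketch of that kind of rounding (the helper name and unit cutoffs below are assumptions, not the submitted code):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// roundToBinaryUnit rounds a quantity to a whole number of the largest
// binary unit it fills; a sketch of the idea, not the actual patch.
func roundToBinaryUnit(q resource.Quantity) string {
	bytes := q.Value() // fractional bytes round up to whole bytes
	units := []struct {
		suffix string
		size   int64
	}{{"Gi", 1 << 30}, {"Mi", 1 << 20}, {"Ki", 1 << 10}}
	for _, u := range units {
		if bytes >= u.size {
			return fmt.Sprintf("%d%s", (bytes+u.size/2)/u.size, u.suffix) // round to nearest
		}
	}
	return fmt.Sprintf("%d", bytes)
}

func main() {
	fmt.Println(roundToBinaryUnit(resource.MustParse("0.1Gi"))) // 102Mi
}
```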
Seen via kubectl describe node XXX:

[node resource usage output omitted]

This can happen when some pods request sub-byte memory accidentally. I saw a pod requesting 0.1Gi of memory, which shows up in kubernetes as:

memory: 107374182400m

This phenomenon happens here because 10 does not divide a perfect power of 2:

0.1 * 1024 * 1024 * 1024 * 1000 == 107374182400

This sub-byte number propagates throughout and is visible in the describe node output as above, as well as the describe pod output.

Expected Behaviour: rounding to the nearest actual unit (bytes), or a human-readable output with 3 significant digits on the biggest unit.

Environment: EKS 1.29, kubectl client at 1.30.0