Closed eatwithforks closed 3 years ago
We set limit proportional to request: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#keeping-limit-proportional-to-request
Can you check your deployment settings?
Right, I understand the theoretical 2:1 ratio for limits to requests, but let's look at the actual behavior for my pod.
My deployment has:
memory request: 512Mi
memory limit: 4Gi
VPA request recommendations are:
target memory: 4066212754 (~= 4Gi)
My pod after it's been evicted and recreated by the admission-controller has:
memory: "32529702032" (~= 30Gi)
That's not a 2:1 ratio of limits to requests, right?
The full manifest details are in the initial ticket description
This Datadog notebook shows the behavior before and after VPA's update mode was switched from "Off" to "Auto". You can clearly see the memory limit value rocket to 30Gi while the actual usage and request remain consistently low.
Apologies, I totally missed that the info was already there.
We are not sticking to a 2:1 ratio, but rather take the ratio from the deployment.
If the config is
memory request: 512Mi
memory limit: 4Gi
that is an 8:1 ratio, so if the target recommendation is ~4Gi, the limit will be set to ~32Gi, which seems to be aligned with what you observed.
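The arithmetic above can be sketched as follows (this is an illustration of the proportional-scaling rule, not the VPA source code; the function name `scaled_limit` is my own):

```python
# Sketch of how the admission controller keeps the limit proportional
# to the request: the new limit is target * (original limit / original request).

def scaled_limit(request_bytes: int, limit_bytes: int, target_bytes: int) -> int:
    """Return the new limit, preserving the deployment's limit:request ratio."""
    ratio = limit_bytes / request_bytes
    return int(target_bytes * ratio)

MI = 1024 ** 2
GI = 1024 ** 3

# Deployment above: request 512Mi, limit 4Gi -> 8:1 ratio.
# Target recommendation 4066212754 (~4Gi) yields the observed limit.
print(scaled_limit(512 * MI, 4 * GI, 4066212754))  # 32529702032
```

Note that 4066212754 * 8 = 32529702032, which is exactly the limit reported on the recreated pod.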
ahh i see now. that's good to know! i'll tune the deployment limit/request to be 2:1 ratio
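For reference, a 2:1 limit:request configuration for the container might look like this (values are illustrative, not taken from the original manifest):

```yaml
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 1Gi
```

With this ratio, a target recommendation of ~4Gi would produce a limit of ~8Gi instead of ~32Gi.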
Let me know if that works. Also, if there's a way to improve the docs, feel free to file an issue or contribute.
/close
@bskiba: Closing this issue.
I have the VPA object's update mode set to "Auto" and my pod's memory usage is ~3-4Gi, but the limit set on my pod by the admission-controller is 30Gi. Can anyone take a look and see if there's a bug or my configuration is incorrect?
cc @bskiba
kubectl get pod <my_pod>
kubectl get statefulset <foo> -o yaml
VPA
admission-controller yaml: