Closed fkamaliada closed 1 month ago
Hi @fkamaliada, you have RemovePodsViolatingTopologySpreadConstraint enabled under Balance and Deschedule, but it's only a Balance plugin. Try removing it from the Deschedule section of your config to see if that fixes it.
Thank you @damemi for your helpful reply. It seems that was indeed the cause of this specific error. I also removed the RemovePodsViolatingNodeTaints plugin from the balance section, and the error is gone. My config is now:
plugins:
  balance:
    enabled:
      - RemoveDuplicates
      - RemovePodsViolatingTopologySpreadConstraint
      - LowNodeUtilization
  deschedule:
    enabled:
      - RemovePodsHavingTooManyRestarts
      - RemovePodsViolatingNodeTaints
      - RemovePodsViolatingInterPodAntiAffinity
Now the logs no longer show anything like that error:
I0530 13:34:02.118912 1 pod_antiaffinity.go:93] "Processing node" node="ip-192-168-93-80.eu-west-1.compute.internal"
I0530 13:34:02.118994 1 profile.go:321] "Total number of pods evicted" extension point="Deschedule" evictedPods=0
I0530 13:34:02.119013 1 removeduplicates.go:107] "Processing node" node="ip-192-168-18-35.eu-west-1.compute.internal"
I0530 13:34:02.119150 1 removeduplicates.go:107] "Processing node" node="ip-192-168-72-233.eu-west-1.compute.internal"
I0530 13:34:02.119239 1 removeduplicates.go:107] "Processing node" node="ip-192-168-93-80.eu-west-1.compute.internal"
I0530 13:34:02.119317 1 profile.go:349] "Total number of pods evicted" extension point="Balance" evictedPods=0
I0530 13:34:02.119335 1 topologyspreadconstraint.go:122] Processing namespaces for topology spread constraints
I0530 13:34:02.119525 1 topologyspreadconstraint.go:221] "Skipping topology constraint because it is already balanced" constraint={"MaxSkew":1,"TopologyKey":"topology.kubernetes.io/zone","Selector":[{}],"NodeAffinityPolicy":"Honor","NodeTaintsPolicy":"Ignore","PodNodeAffinity":{},"PodTolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]}
I0530 13:34:02.119658 1 topologyspreadconstraint.go:221] "Skipping topology constraint because it is already balanced" constraint={"MaxSkew":1,"TopologyKey":"topology.kubernetes.io/zone","Selector":[{}],"NodeAffinityPolicy":"Honor","NodeTaintsPolicy":"Ignore","PodNodeAffinity":{},"PodTolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]}
I0530 13:34:02.119700 1 topologyspreadconstraint.go:221] "Skipping topology constraint because it is already balanced" constraint={"MaxSkew":1,"TopologyKey":"topology.kubernetes.io/zone","Selector":[{}],"NodeAffinityPolicy":"Honor","NodeTaintsPolicy":"Ignore","PodNodeAffinity":{},"PodTolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]}
I0530 13:34:02.119753 1 profile.go:349] "Total number of pods evicted" extension point="Balance" evictedPods=0
I0530 13:34:02.119857 1 nodeutilization.go:207] "Node is overutilized" node="ip-192-168-18-35.eu-west-1.compute.internal" usage={"cpu":"1560m","memory":"3086Mi","pods":"38"} usagePercentage={"cpu":80.83,"memory":43.61,"pods":34.55}
I0530 13:34:02.119920 1 nodeutilization.go:207] "Node is overutilized" node="ip-192-168-72-233.eu-west-1.compute.internal" usage={"cpu":"1700m","memory":"2982Mi","pods":"30"} usagePercentage={"cpu":88.08,"memory":42.14,"pods":27.27}
I0530 13:34:02.119933 1 nodeutilization.go:207] "Node is overutilized" node="ip-192-168-93-80.eu-west-1.compute.internal" usage={"cpu":"1090m","memory":"1312Mi","pods":"15"} usagePercentage={"cpu":56.48,"memory":18.54,"pods":13.64}
I0530 13:34:02.119947 1 lownodeutilization.go:135] "Criteria for a node under utilization" CPU=10 Mem=20 Pods=10
I0530 13:34:02.119962 1 lownodeutilization.go:136] "Number of underutilized nodes" totalNumber=0
I0530 13:34:02.119972 1 lownodeutilization.go:149] "Criteria for a node above target utilization" CPU=12 Mem=60 Pods=15
I0530 13:34:02.119981 1 lownodeutilization.go:150] "Number of overutilized nodes" totalNumber=3
I0530 13:34:02.119990 1 lownodeutilization.go:153] "No node is underutilized, nothing to do here, you might tune your thresholds further"
I0530 13:34:02.120003 1 profile.go:349] "Total number of pods evicted" extension point="Balance" evictedPods=0
I0530 13:34:02.120016 1 descheduler.go:170] "Number of evicted pods" totalEvicted=0
I0530 13:34:02.120232 1 reflector.go:302] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:34:02.120377 1 reflector.go:302] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:34:02.120498 1 reflector.go:302] Stopping reflector *v1.PriorityClass (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:34:02.120591 1 reflector.go:302] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:34:02.120854 1 secure_serving.go:258] Stopped listening on [::]:10258
I0530 13:34:02.120907 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
But in the end, it seems to do nothing about the balancing; the nodes behave the same as before. If someone has any idea what prevents the balancing, please let me know. I'll keep trying anyway.
@fkamaliada it looks like you are trying to balance based on LowNodeUtilization? If so, see this line in the logs:
I0530 13:34:02.119990 1 lownodeutilization.go:153] "No node is underutilized, nothing to do here, you might tune your thresholds further"
This means there aren't any nodes that fall under the thresholds for all 3 resource types (cpu, memory, pods). LowNodeUtilization will only evict pods from over-utilized nodes if there is a matching under-utilized node for the new pods to be scheduled onto. You can try adjusting your threshold settings to get the balance you want. Please see the LowNodeUtilization docs for more details about how this works.
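For reference, here is a minimal sketch of where those thresholds live in a v1alpha2 policy; the numbers below are illustrative placeholders, not a recommendation for your cluster:

apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
  - name: test
    pluginConfig:
      - name: "LowNodeUtilization"
        args:
          # a node counts as underutilized only if it is below ALL of these
          thresholds:
            cpu: 20
            memory: 20
            pods: 20
          # a node counts as overutilized if it is above ANY of these
          targetThresholds:
            cpu: 50
            memory: 50
            pods: 50
    plugins:
      balance:
        enabled:
          - "LowNodeUtilization"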
Thanks again @damemi
You're right. My mistake was that I thought the numbers for pods were absolute counts, but they are actually a percentage (current pods / node's maximum pod capacity).
Also, the CPU and memory percentages are based on requested (reserved) resources, not actual usage. So I was seeing quite low real CPU usage (5.5%), but the descheduler was reporting about 60% CPU for a node, and that was the requested CPU.
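In other words (if I understand it correctly), the descheduler sums each pod's resources.requests against the node's allocatable capacity, so a hypothetical pod like this counts as 500m of CPU "usage" even while it idles:

resources:
  requests:
    cpu: 500m      # what the descheduler counts as usage
    memory: 256Mi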
Now, I'll have to find the optimal values.
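For example, going by the percentages in the logs above, something like the following might mark the lightest node (56.48% cpu, 18.54% memory, 13.64% pods) as underutilized and the two nodes above 80% cpu as overutilized; just a sketch, I haven't verified these values:

thresholds:
  cpu: 60        # ip-...-93-80 at 56.48% falls below this
  memory: 20
  pods: 15
targetThresholds:
  cpu: 80        # the two nodes above 80% cpu count as overutilized
  memory: 50
  pods: 30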
Thank you very much!
Hi, I'm new to Descheduler, so please forgive my noobish question:
I installed descheduler on an AWS EKS cluster in order to balance node usage, since, e.g., out of the 3 nodes, one uses 20% CPU and 95% RAM, while the others use 5-6% CPU and 50% RAM. I did the installation through helm and got rid of some errors by changing the default policy.
The thing is that I'm getting the following in the logs:
descheduler.go:193] "unable to create a profile" err="profile \"test\" configures deschedule extension point of non-existing plugins: map[RemovePodsViolatingTopologySpreadConstraint:{}]" profile="test"
Full log of a pod:
But RemovePodsViolatingTopologySpreadConstraint seems to be configured. What am I missing here?
Thanks.