kubernetes-sigs / descheduler

Descheduler for Kubernetes
https://sigs.k8s.io/descheduler
Apache License 2.0

Unable to create a profile err="profile \"test\" configures deschedule extension point of non-existing plugins: map[RemovePodsViolatingTopologySpreadConstraint:{}]" #1422

Closed: fkamaliada closed this issue 1 month ago

fkamaliada commented 1 month ago

Hi, I'm new to Descheduler, so please forgive my noobish question:

I installed the descheduler on an AWS EKS cluster in order to balance node usage: for example, out of the 3 nodes, one uses 20% CPU and 95% RAM while the others sit at 5-6% CPU and 50% RAM. I did the installation through Helm, and to get rid of some errors I changed the default policy to:

apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
- name: test
  pluginConfig:
  - args:
      evictLocalStoragePods: true
      ignorePvcPods: true
    name: DefaultEvictor
  - name: RemoveDuplicates
  - args:
      includingInitContainers: true
      podRestartThreshold: 100
    name: RemovePodsHavingTooManyRestarts
  - name: RemovePodsViolatingNodeTaints
  - name: RemovePodsViolatingInterPodAntiAffinity
  - name: RemovePodsViolatingTopologySpreadConstraint
  - args:
      targetThresholds:
        cpu: 12
        memory: 60
        pods: 15
      thresholds:
        cpu: 10
        memory: 20
        pods: 10
    name: LowNodeUtilization
  plugins:
    balance:
      enabled:
      - RemoveDuplicates
      - RemovePodsViolatingNodeTaints
      - RemovePodsViolatingTopologySpreadConstraint
      - LowNodeUtilization
    deschedule:
      enabled:
      - RemovePodsHavingTooManyRestarts
      - RemovePodsViolatingNodeTaints
      - RemovePodsViolatingInterPodAntiAffinity
      - RemovePodsViolatingTopologySpreadConstraint
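
For reference, I feed this policy to the chart through its values file, roughly along these lines; treat it as a sketch, since the key names (deschedulerPolicy, deschedulerPolicyAPIVersion) may vary between chart versions:

# values.yaml (sketch; key names assumed from the upstream descheduler chart)
deschedulerPolicyAPIVersion: "descheduler/v1alpha2"
deschedulerPolicy:
  profiles:
  - name: test
    pluginConfig:
    - name: DefaultEvictor
      args:
        evictLocalStoragePods: true
        ignorePvcPods: true
    # ...remaining pluginConfig and plugins exactly as in the policy above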

The thing is that I'm getting the following in the logs:

descheduler.go:193] "unable to create a profile" err="profile \"test\" configures deschedule extension point of non-existing plugins: map[RemovePodsViolatingTopologySpreadConstraint:{}]" profile="test"

Full log of the pod:

I0530 13:04:01.063111       1 secure_serving.go:57] Forcing use of http/1.1 only
I0530 13:04:01.063368       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1717074241\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1717074240\" (2024-05-30 12:04:00 +0000 UTC to 2025-05-30 12:04:00 +0000 UTC (now=2024-05-30 13:04:01.063328327 +0000 UTC))"
I0530 13:04:01.063415       1 secure_serving.go:213] Serving securely on [::]:10258
I0530 13:04:01.063428       1 tracing.go:87] Did not find a trace collector endpoint defined. Switching to NoopTraceProvider
I0530 13:04:01.063481       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0530 13:04:01.072292       1 descheduler.go:247] failed to convert Descheduler minor version to float
I0530 13:04:01.088586       1 reflector.go:296] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.088601       1 reflector.go:332] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.088706       1 reflector.go:296] Starting reflector *v1.PriorityClass (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.088768       1 reflector.go:332] Listing and watching *v1.PriorityClass from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.088934       1 reflector.go:296] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.088947       1 reflector.go:332] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.089081       1 reflector.go:296] Starting reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.089195       1 reflector.go:332] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.099044       1 reflector.go:359] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.099890       1 reflector.go:359] Caches populated for *v1.PriorityClass from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.100482       1 reflector.go:359] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.151428       1 reflector.go:359] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.188750       1 descheduler.go:156] Building a pod evictor
E0530 13:04:01.188898       1 descheduler.go:193] "unable to create a profile" err="profile \"test\" configures deschedule extension point of non-existing plugins: map[RemovePodsViolatingTopologySpreadConstraint:{}]" profile="test"
I0530 13:04:01.188923       1 descheduler.go:170] "Number of evicted pods" totalEvicted=0
I0530 13:04:01.189096       1 reflector.go:302] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.189205       1 reflector.go:302] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.189310       1 reflector.go:302] Stopping reflector *v1.PriorityClass (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.189431       1 reflector.go:302] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:04:01.189684       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0530 13:04:01.189699       1 secure_serving.go:258] Stopped listening on [::]:10258

But RemovePodsViolatingTopologySpreadConstraint seems to be configured. What am I missing here?

Thanks.

damemi commented 1 month ago

Hi @fkamaliada, you have RemovePodsViolatingTopologySpreadConstraint enabled under both Balance and Deschedule, but it's only a Balance plugin. Try removing it from the Deschedule section of your config to see if that fixes it.
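
For example, the deschedule section would then look something like this (everything else unchanged):

    deschedule:
      enabled:
      - RemovePodsHavingTooManyRestarts
      - RemovePodsViolatingNodeTaints
      - RemovePodsViolatingInterPodAntiAffinity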

fkamaliada commented 1 month ago

Thank you @damemi for your helpful reply. That was indeed the cause of this specific error. I also removed the RemovePodsViolatingNodeTaints plugin from balance, and the error is gone...

  plugins:
    balance:
      enabled:
      - RemoveDuplicates
      - RemovePodsViolatingTopologySpreadConstraint
      - LowNodeUtilization
    deschedule:
      enabled:
      - RemovePodsHavingTooManyRestarts
      - RemovePodsViolatingNodeTaints
      - RemovePodsViolatingInterPodAntiAffinity

Now the logs no longer show anything similar.

I0530 13:34:02.118912       1 pod_antiaffinity.go:93] "Processing node" node="ip-192-168-93-80.eu-west-1.compute.internal"
I0530 13:34:02.118994       1 profile.go:321] "Total number of pods evicted" extension point="Deschedule" evictedPods=0
I0530 13:34:02.119013       1 removeduplicates.go:107] "Processing node" node="ip-192-168-18-35.eu-west-1.compute.internal"
I0530 13:34:02.119150       1 removeduplicates.go:107] "Processing node" node="ip-192-168-72-233.eu-west-1.compute.internal"
I0530 13:34:02.119239       1 removeduplicates.go:107] "Processing node" node="ip-192-168-93-80.eu-west-1.compute.internal"
I0530 13:34:02.119317       1 profile.go:349] "Total number of pods evicted" extension point="Balance" evictedPods=0
I0530 13:34:02.119335       1 topologyspreadconstraint.go:122] Processing namespaces for topology spread constraints
I0530 13:34:02.119525       1 topologyspreadconstraint.go:221] "Skipping topology constraint because it is already balanced" constraint={"MaxSkew":1,"TopologyKey":"topology.kubernetes.io/zone","Selector":[{}],"NodeAffinityPolicy":"Honor","NodeTaintsPolicy":"Ignore","PodNodeAffinity":{},"PodTolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]}
I0530 13:34:02.119658       1 topologyspreadconstraint.go:221] "Skipping topology constraint because it is already balanced" constraint={"MaxSkew":1,"TopologyKey":"topology.kubernetes.io/zone","Selector":[{}],"NodeAffinityPolicy":"Honor","NodeTaintsPolicy":"Ignore","PodNodeAffinity":{},"PodTolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]}
I0530 13:34:02.119700       1 topologyspreadconstraint.go:221] "Skipping topology constraint because it is already balanced" constraint={"MaxSkew":1,"TopologyKey":"topology.kubernetes.io/zone","Selector":[{}],"NodeAffinityPolicy":"Honor","NodeTaintsPolicy":"Ignore","PodNodeAffinity":{},"PodTolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]}
I0530 13:34:02.119753       1 profile.go:349] "Total number of pods evicted" extension point="Balance" evictedPods=0
I0530 13:34:02.119857       1 nodeutilization.go:207] "Node is overutilized" node="ip-192-168-18-35.eu-west-1.compute.internal" usage={"cpu":"1560m","memory":"3086Mi","pods":"38"} usagePercentage={"cpu":80.83,"memory":43.61,"pods":34.55}
I0530 13:34:02.119920       1 nodeutilization.go:207] "Node is overutilized" node="ip-192-168-72-233.eu-west-1.compute.internal" usage={"cpu":"1700m","memory":"2982Mi","pods":"30"} usagePercentage={"cpu":88.08,"memory":42.14,"pods":27.27}
I0530 13:34:02.119933       1 nodeutilization.go:207] "Node is overutilized" node="ip-192-168-93-80.eu-west-1.compute.internal" usage={"cpu":"1090m","memory":"1312Mi","pods":"15"} usagePercentage={"cpu":56.48,"memory":18.54,"pods":13.64}
I0530 13:34:02.119947       1 lownodeutilization.go:135] "Criteria for a node under utilization" CPU=10 Mem=20 Pods=10
I0530 13:34:02.119962       1 lownodeutilization.go:136] "Number of underutilized nodes" totalNumber=0
I0530 13:34:02.119972       1 lownodeutilization.go:149] "Criteria for a node above target utilization" CPU=12 Mem=60 Pods=15
I0530 13:34:02.119981       1 lownodeutilization.go:150] "Number of overutilized nodes" totalNumber=3
I0530 13:34:02.119990       1 lownodeutilization.go:153] "No node is underutilized, nothing to do here, you might tune your thresholds further"
I0530 13:34:02.120003       1 profile.go:349] "Total number of pods evicted" extension point="Balance" evictedPods=0
I0530 13:34:02.120016       1 descheduler.go:170] "Number of evicted pods" totalEvicted=0
I0530 13:34:02.120232       1 reflector.go:302] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:34:02.120377       1 reflector.go:302] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:34:02.120498       1 reflector.go:302] Stopping reflector *v1.PriorityClass (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:34:02.120591       1 reflector.go:302] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:34:02.120854       1 secure_serving.go:258] Stopped listening on [::]:10258
I0530 13:34:02.120907       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"

But in the end it seems to do nothing about the balancing; the nodes behave the same as before. If someone has any idea what is preventing the balancing, please let me know. I'll keep trying anyway.

(screenshot attached: Screenshot 2024-05-30 164154)

damemi commented 1 month ago

@fkamaliada it looks like you are trying to balance based on LowNodeUtilization? If so, see this line in the logs:

I0530 13:34:02.119990       1 lownodeutilization.go:153] "No node is underutilized, nothing to do here, you might tune your thresholds further"

This means there aren't any nodes that fall under the thresholds for all 3 resource types (cpu, memory, pods). LowNodeUtilization will only evict pods from over-utilized nodes if there is a matching under-utilized node for the new pods to be scheduled onto. You can try adjusting your threshold settings to get the balance you want. Please see the LowNodeUtilization docs for more details about how this works.
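
For example, based on the usage percentages in your logs (the least-loaded node sits at roughly 56% CPU / 19% memory / 14% pods), thresholds along these lines would classify it as underutilized while keeping the two busier nodes above the target; the exact numbers are only illustrative and will need tuning for your cluster:

  - args:
      thresholds:        # a node must be below ALL of these to count as underutilized
        cpu: 60
        memory: 30
        pods: 20
      targetThresholds:  # a node above ANY of these counts as overutilized
        cpu: 80
        memory: 60
        pods: 30
    name: LowNodeUtilization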

fkamaliada commented 1 month ago

Thanks again @damemi

You're right. My mistake was that I thought the numbers for pods were absolute counts, but they are a percentage (current pods / node's pod capacity).

Also, CPU and memory usage are percentages of reserved (requested) resources, not actual usage. So I was seeing quite low actual CPU usage (5.5%), but the descheduler reported about 60% CPU for a node, which was the reserved CPU.

Now, I'll have to find the optimal values.

Thank you very much!