Is your feature request related to a problem? Please describe.
I typically spread multiple replicas of a pod across worker nodes to reduce the impact of losing a node. I use preferred (soft) anti-affinity so that if I do lose a node, I still maintain my desired replica count: if a node is lost and no replacement is available, Kubernetes schedules the replacement pod onto a node that already runs a replica. However, when a new node is later added back to the cluster, the descheduler does not evict one of the 2 pods co-located on a single node so that it can be rescheduled onto the new 3rd node.
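For reference, the kind of preferred (soft) pod anti-affinity described is sketched below. The label key/value and weight are illustrative assumptions, not the actual configuration from my deployment:

```yaml
# Illustrative sketch only; "app: my-app" and the weight are assumed values.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-app   # assumed label selector
          topologyKey: kubernetes.io/hostname
```

With `preferred...` rather than `required...`, the scheduler will still place a second replica on a node that already has one when no other node is available, which is the behavior this request depends on.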
Describe the solution you'd like
My deployment spreads 3 pods across 3 nodes. I have the above preferred anti-affinity configuration. When a node is lost, a new pod is created on one of the 2 remaining nodes (preferred). When a 3rd node is reintroduced into the cluster, I want the descheduler to evict one of the 2 pods running on a single node so it can be rescheduled onto the new 3rd node.
Describe alternatives you've considered
Manually killing the pod.
What version of descheduler are you using?
descheduler version: 0.29
Additional context
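As I understand it, the closest existing plugin is RemovePodsViolatingInterPodAntiAffinity, but it only evaluates `requiredDuringSchedulingIgnoredDuringExecution` terms, so it does not cover the preferred case described above. A v1alpha2 policy sketch enabling it (names taken from the descheduler docs; this is context, not a working fix for this request):

```yaml
apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
  - name: default
    plugins:
      deschedule:
        enabled:
          - "RemovePodsViolatingInterPodAntiAffinity"
```

The request is essentially for this (or a new) plugin to also consider preferred anti-affinity terms once a node exists where the pod could be rescheduled without violating them.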