Open · markdingram opened this issue 1 year ago
I have a similar case. Tbh I wanted to limit the descheduler's scope to just the nodes with a certain label, but with this behaviour that is not possible. I am planning to use the descheduler with a taint-based eviction policy (evict pods not matching node taints), but additionally I have a companion operator which sets a certain label on the node.
I have worked around that by adding a node selector in the policy directly (roughly as in the sketch below); however, this is suboptimal - I have over 1000 nodes in my cluster and I actually want the descheduler to watch just a few of them.
I also found https://github.com/kubernetes-sigs/descheduler/issues/469, however I'm not sure what was fixed in that issue.
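Roughly, the workaround policy looks like this (a sketch only, assuming the v1alpha1 policy format; `descheduler=enabled` is just a placeholder for the label my operator sets):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
# Only nodes matching this label selector are considered by the descheduler.
nodeSelector: "descheduler=enabled"
strategies:
  "RemovePodsViolatingNodeTaints":
    enabled: true
```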
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
I would say that this is more of a bug, as it affects all single-node Kubernetes clusters (minikube, Docker Desktop).
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
Is your feature request related to a problem? Please describe.

When there are 0 or 1 nodes, the descheduler loop returns the error `the cluster size is 0 or 1` and the process exits. This doesn't play nicely when the descheduler is running as a Deployment - the pod goes into `CrashLoopBackOff` due to the repeated early exits.

Describe the solution you'd like

When the descheduler is running as a Deployment, the `the cluster size is 0 or 1` check shouldn't exit the process. The process should remain running until the next iteration. Something like (in `runDeschedulerLoop`):
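A minimal sketch of the idea (assuming the node-count check still sits in `runDeschedulerLoop`, as in v0.28; the exact log message is illustrative):

```go
// Sketch only: skip this descheduling cycle instead of returning an error,
// so the wait loop that drives runDeschedulerLoop keeps ticking and the pod
// doesn't exit and land in CrashLoopBackOff.
if len(nodes) <= 1 {
	klog.V(1).InfoS("The cluster size is 0 or 1, skipping this descheduling cycle")
	return nil // previously: return fmt.Errorf("the cluster size is 0 or 1")
}
```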
Describe alternatives you've considered
What version of descheduler are you using?
descheduler version: 0.28.0
Additional context
Example logs