Closed. bprieur closed this issue 1 week ago.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What version of descheduler are you using?
descheduler version: v0.29.0
Does this issue reproduce with the latest release?
Yes.
Which descheduler CLI options are you using?
Defaults from the Helm chart release.
Please provide a copy of your descheduler policy config file
From the Helm chart release, only RemovePodsViolatingNodeAffinity is enabled. The following values are applied with the Helm installation:
```yaml
deschedulerPolicy:
  strategies:
    RemoveDuplicates:
      enabled: false
    RemovePodsHavingTooManyRestarts:
      enabled: false
    RemovePodsViolatingNodeTaints:
      enabled: false
    RemovePodsViolatingNodeAffinity:
      enabled: true
      params:
        nodeAffinityType:
          - requiredDuringSchedulingIgnoredDuringExecution
          - preferredDuringSchedulingIgnoredDuringExecution
    RemovePodsViolatingInterPodAntiAffinity:
      enabled: false
    RemovePodsViolatingTopologySpreadConstraint:
      enabled: false
    LowNodeUtilization:
      enabled: false
```

What k8s version are you using (`kubectl version`)?

What did you do?
Created a Deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: disktype
                    operator: In
                    values:
                      - ssd
```

Labeled one node in the cluster with `kubectl label nodes …`, then unlabeled the node with `kubectl label nodes …` (both commands truncated; presumably as sketched below).
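A guess at the full commands, assuming the `disktype=ssd` key/value from the Deployment's affinity and the node name `pi-101` that appears in the log further down:

```console
# Assumed: add the label so the node satisfies the nodeAffinity
kubectl label nodes pi-101 disktype=ssd

# Assumed: remove the label again (a trailing "-" deletes a label)
kubectl label nodes pi-101 disktype-
```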
What did you expect to see?
Pods in Pending status, because no node in the cluster fits the affinity any more.
What did you see instead?
Pods are in Running status; a quick check is sketched below.
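Illustrative verification commands (not from the original report):

```console
# Pods keep running even though no node matches the required affinity
kubectl get pods -l app=nginx -o wide

# Show each node's disktype label; expected empty after unlabeling
kubectl get nodes -L disktype
```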
With verbosity 4, the descheduler logs, for each node:
`node.go:166] "Pod does not fit on node" pod:="default/nginx-deployment-56f548b646-jmcqv" node:="pi-101" error:="pod node selector does not match the node label"`
For comparison, the RemovePodsHavingTooManyRestarts strategy deletes pods once its podRestartThreshold parameter is reached (sketched below).
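For reference, a minimal sketch of that strategy's parameters in the same chart-values format (the threshold value here is illustrative, not from this report):

```yaml
deschedulerPolicy:
  strategies:
    RemovePodsHavingTooManyRestarts:
      enabled: true
      params:
        podsHavingTooManyRestarts:
          podRestartThreshold: 100   # illustrative value
          includingInitContainers: true
```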
Maybe this kind of scenario isn't covered by the RemovePodsViolatingNodeAffinity strategy, or maybe it's deliberate?
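One plausible reading of the log above (an assumption, not a confirmed diagnosis): before evicting, the strategy checks whether the pod would fit on some other node, and since no node carries disktype=ssd any more, that check fails everywhere and the pod is left running. Descheduler documents a nodeFit strategy parameter that controls a similar pre-eviction fit filter; a sketch in the same values format follows, though it is not certain this bypasses the check seen in the log, which may be built into the strategy itself:

```yaml
deschedulerPolicy:
  strategies:
    RemovePodsViolatingNodeAffinity:
      enabled: true
      params:
        nodeAffinityType:
          - requiredDuringSchedulingIgnoredDuringExecution
          - preferredDuringSchedulingIgnoredDuringExecution
        nodeFit: false   # assumption: relax the "pod must fit elsewhere" filter
```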