Closed: nalshamaajc closed this issue 8 months ago
It should be set at the DefaultEvictor level. Would you try the following?
```yaml
profiles:
  - name: pod-ttl
    pluginConfig:
      - name: DefaultEvictor
        args:
          evictLocalStoragePods: true
      - name: PodLifeTime
        args:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - kj-test-deploy
          maxPodLifeTimeSeconds: 120
          namespaces:
            include:
              - default
```
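For completeness, the same policy can be embedded in the chart's values file. This is only a sketch: it assumes a chart version whose values.yaml exposes the `deschedulerPolicy` and `deschedulerPolicyAPIVersion` keys (check the values.yaml linked in this thread for your chart version):

```yaml
# Sketch only: key names assume the descheduler Helm chart's values.yaml
# layout; verify against the chart version you are deploying.
deschedulerPolicyAPIVersion: "descheduler/v1alpha2"
deschedulerPolicy:
  profiles:
    - name: pod-ttl
      pluginConfig:
        - name: DefaultEvictor
          args:
            evictLocalStoragePods: true
        - name: PodLifeTime
          args:
            maxPodLifeTimeSeconds: 120
            namespaces:
              include:
                - default
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - kj-test-deploy
```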
This worked after some modifications, thanks. Unfortunately, the docs are not clear enough on this part.
@nalshamaajc would you be open to creating a PR to make the docs better? ref: https://github.com/kubernetes-sigs/descheduler#example-policy
https://github.com/kubernetes-sigs/descheduler/blob/73eb42467a3dc8f8c6aebf06acf93438a4dd57c6/charts/descheduler/values.yaml#L85
Hello, I'm using descheduler to manage pod time to live (TTL). I'm deploying it as a Deployment using the Helm chart. I checked the ClusterRole and its binding to the service account, and I also verified that the service account can do what it needs to do using the kubectl auth can-i ... command; it all worked fine. The below snippet of the ConfigMap works.
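The permission check mentioned above can be run along these lines. This is a sketch: the namespace and service-account name are assumptions, so substitute the ones from your Helm release.

```shell
# Hypothetical names: adjust "kube-system" and "descheduler" to match the
# release. These cover two of the verbs the descheduler needs: listing pods
# and creating evictions.
kubectl auth can-i list pods \
  --as=system:serviceaccount:kube-system:descheduler
kubectl auth can-i create pods/eviction \
  --as=system:serviceaccount:kube-system:descheduler
```

These commands require a live cluster and only print `yes` or `no` per check.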
When I change the namespace and the label key and value, nothing happens, as if the rule does not match any pods. Below is a snippet of the changes.
You can also see that the resource should match the conditions: the pod age is greater than 120 seconds, and the namespace and labels also match.
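One quick way to confirm those match conditions is to list the pods the selector should hit and check when they were created. This sketch assumes the `default` namespace and the `app=kj-test-deploy` label used earlier in the thread; substitute your own values.

```shell
# Assumed values from this thread: namespace "default", label
# app=kj-test-deploy. Prints each matching pod with its creation timestamp,
# so you can verify the pods are older than maxPodLifeTimeSeconds.
kubectl get pods -n default -l app=kj-test-deploy \
  -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp
```

If this returns no rows, the descheduler's labelSelector cannot match anything either.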
Below is the snippet of the Helm chart values file that I'm using, which is also not successful (the values are not exact, but the structure is).
I increased the logging debug verbosity and got the below error (values were changed).
So I added the parameter, and the ConfigMap now looks like the below.