gianlucam76 / k8s-cleaner

Cleaner is a Kubernetes controller that identifies unused or unhealthy resources, helping you maintain a streamlined and efficient Kubernetes cluster. It provides flexible scheduling, label filtering, Lua-based selection criteria, resource removal or updates, and notifications via Slack, Webex and Discord. It can also automate cluster operations.
https://projectsveltos.github.io/sveltos/
Apache License 2.0

Add topologySpreadConstraints support to Helm #139

Closed · aminmr closed this 1 month ago

aminmr commented 1 month ago

Add topologySpreadConstraints to Helm Charts

This PR introduces topologySpreadConstraints to the Helm charts to improve availability and fault tolerance in multi-zone Kubernetes clusters. The changes ensure that pod replicas are evenly distributed across zones using the topology.kubernetes.io/zone label, minimizing the risk of zone-specific failures.

Key Changes:

Topology Spread Constraints: Evenly distributes replicas across zones with a configurable maxSkew. These changes are optional and configurable via values.yaml, providing better fault isolation without breaking existing deployments.
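For illustration, a configuration of the kind this change enables might look like the sketch below. The key names and the pod label are assumptions for this example, not the chart's confirmed schema; leaving the value empty would keep existing deployments unchanged.

```yaml
# Hypothetical values.yaml excerpt -- key names and labels are illustrative only.
topologySpreadConstraints:
  - maxSkew: 1                                  # allow at most 1 replica of imbalance between zones
    topologyKey: topology.kubernetes.io/zone    # spread replicas across availability zones
    whenUnsatisfiable: ScheduleAnyway           # prefer even spread, but do not block scheduling
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: k8s-cleaner     # assumed pod label for the cleaner Deployment
```

In the Helm template such a value would typically be rendered only when set, which is what keeps the feature optional for existing installations.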

gianlucam76 commented 1 month ago

Thank you. We can go ahead and merge this. But for it to take effect, sharding needs to be implemented. Otherwise we cannot have multiple replicas of k8s-cleaner (all instances will process all Cleaner instances).

aminmr commented 1 month ago

You're welcome @gianlucam76

Yes, the problem is precisely what you mention. In my scenario, I have a multi-zone cluster. If I lose one zone, I want the cleaner to keep working without any disruption. So I increased the replicas and used affinity and topologySpreadConstraints. It's working fine now (I know every instance processes every Cleaner instance, which could cause problems in some cases). So I opened this PR and hope it helps someone else until sharding or leader election is implemented.
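A rough sketch of the setup described above (multiple replicas spread across zones with both anti-affinity and spread constraints) could look like the following override file. The replicaCount, affinity key, and pod label are assumptions about the chart, not confirmed values:

```yaml
# Hypothetical override file (e.g. multi-zone-values.yaml); key names assumed.
replicaCount: 3                                 # one replica per zone in a 3-zone cluster
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: topology.kubernetes.io/zone
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: k8s-cleaner   # assumed pod label
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: k8s-cleaner         # assumed pod label
```

As noted above, until sharding or leader election exists, every replica still processes every Cleaner instance, so the extra replicas buy zone-failure tolerance rather than reduced duplicate work.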

Thanks!