Closed: aminmr closed this 1 month ago
Thank you. We can go ahead and merge this. But for this to take effect, sharding needs to be implemented; otherwise we cannot have multiple replicas of k8s-cleaner (all instances would process all cleaner instances).
You're welcome @gianlucam76
Yes, the problem is precisely what you mention. In my scenario, I have a multi-zone cluster, and if I lose one zone I want the cleaner to keep working without any disruption. So I increased the replicas and used affinity and topologySpreadConstraints, and it's working fine now (I know it depends on the instance and could cause problems for some cleaner instances). So I opened this PR and hope it helps someone else until sharding or leader election is implemented.
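For reference, a minimal sketch of the kind of pod-spec settings described above, using standard Kubernetes topologySpreadConstraints and pod anti-affinity; the replica count and label selector values are illustrative, not necessarily what the k8s-cleaner chart renders:

```yaml
# Sketch: spread replicas across zones and prefer not to co-locate them on one node.
# The app.kubernetes.io/name value is an assumed label, not confirmed from the chart.
spec:
  replicas: 3
  template:
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: k8s-cleaner
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: k8s-cleaner
```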
Thanks!
Add topologySpreadConstraints to Helm Charts
This PR introduces topologySpreadConstraints to the Helm charts to improve availability and fault tolerance in multi-zone Kubernetes clusters. The changes ensure that pod replicas are evenly distributed across zones using the topology.kubernetes.io/zone label, minimizing the risk of zone-specific failures.

Key Changes:
- Topology Spread Constraints: evenly distributes replicas across zones with a configurable maxSkew. These changes are optional and configurable via values.yaml, providing better fault isolation without breaking existing deployments.
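A minimal sketch of how such a setting could be exposed through values.yaml; the exact key name and defaults in the k8s-cleaner chart may differ, this only illustrates the shape of the configuration:

```yaml
# values.yaml (sketch): optional topology spread constraints, so existing
# deployments that leave this unset are unaffected. Key name is assumed.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: k8s-cleaner
```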