@kyma-project/framefrog @tobiscr could you please prioritize this issue?
We are currently completely blocked on replacing our Provisioner with KIM. But it's in our backlog and we will pick it up in the coming months (currently it is at position no. 11 of our backlog).
Example for Anti-Affinity: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#an-example-of-a-pod-that-uses-pod-affinity
Example for TopologySpreadConstraint: https://github.com/kyma-project/infrastructure-manager/issues/364#issuecomment-2331201265
Description
During a chaos testing run, which simulated pod failures (kubelet) on all nodes belonging to a specific AZ, it was observed that the critical application-connector workloads were all transiently down, stuck either in Pending or in Init state. The reason is that all replicas were scheduled on nodes belonging to the same AZ, and kept being scheduled there during pod termination/eviction.
For both central-application-connectivity-validator and central-application-gateway running on enterprise plan runtimes (having at least 3 nodes in 2 AZs), ensure that the pods are configured with either topology spread constraints or pod anti-affinity rules to preferably spread the replicas across availability zones (see the sketch below).
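A minimal sketch of what such soft (preferred) zone spreading could look like on the Deployment's pod template, showing both options; the names, namespace, labels, and image are assumptions and would need to match the actual charts:

```yaml
# Hypothetical snippet for central-application-gateway; labels, namespace,
# and image are placeholders, not the actual chart values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: central-application-gateway
  namespace: kyma-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: central-application-gateway
  template:
    metadata:
      labels:
        app: central-application-gateway
    spec:
      # Option 1: topology spread constraint across availability zones.
      # ScheduleAnyway keeps it a soft constraint, so scheduling still
      # succeeds on single-zone clusters.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: central-application-gateway
      # Option 2 (alternative): preferred pod anti-affinity across zones.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: topology.kubernetes.io/zone
                labelSelector:
                  matchLabels:
                    app: central-application-gateway
      containers:
        - name: central-application-gateway
          image: central-application-gateway:placeholder  # placeholder image
```

Either mechanism alone should be sufficient; using the preferred/soft variants avoids blocking scheduling on runtimes that only have a single zone.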
Furthermore, ensure that a Pod Disruption Budget is configured for both workloads, requiring minAvailable: 1 or maxUnavailable: 1, so that during controlled eviction/maintenance at least one replica is always ready to serve traffic (see the sketch below).
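A minimal sketch of such a PDB, assuming the selector labels used above; the actual labels and namespace would need to match the charts:

```yaml
# Hypothetical PodDisruptionBudget for central-application-gateway; an
# analogous one would be needed for central-application-connectivity-validator.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: central-application-gateway
  namespace: kyma-system
spec:
  minAvailable: 1   # alternatively: maxUnavailable: 1
  selector:
    matchLabels:
      app: central-application-gateway
```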
Reasons
Higher resiliency and availability during node-level and zone-level failure scenarios.
Attachments