kubernetes-sigs / aws-load-balancer-controller

A Kubernetes controller for Elastic Load Balancers
https://kubernetes-sigs.github.io/aws-load-balancer-controller/
Apache License 2.0

AWS Load Balancer Controller to register new pods first and then deregister old pods #3621

Open dilipkraghupatruni opened 3 months ago

dilipkraghupatruni commented 3 months ago

Is your feature request related to a problem? We are using Argo Rollouts for application deployments to EKS.

During the blue-green switch, we are noticing issues related to the AWS Load Balancer Controller when used with NLB. NLB takes a lot more time to register new targets compared to ALB. Because of the current logic in the AWS Load Balancer Controller, the old pods are deregistered first and then the controller tries to register the new pods. The lag between these two steps is significant enough with NLB to cause application impact. The order of operations is the same with ALB, but ALB completes them much faster than NLB, so we don't notice any impact there.

Describe the solution you'd like We would like a feature toggle/config in the load balancer controller to swap the operations, i.e. register the new pods first and then deregister the old pods. https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/pkg/targetgroupbinding/resource_manager.go#L143-L152 We would also like to configure the amount of time to wait between the two operations and the maximum amount of time to wait on deregistration. A rough sketch of the requested ordering is shown below.
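A minimal sketch of the requested ordering, assuming a hypothetical config with a register-to-deregister delay and a maximum deregistration wait; the function and field names here are illustrative stand-ins, not the controller's actual API in resource_manager.go:

```go
package targetgroupbinding

import (
	"context"
	"time"
)

// Hypothetical stand-ins for the controller's real types and helpers.
type PodEndpoint struct {
	IP   string
	Port int32
}

type TargetInfo struct {
	ID string
}

type reconcileConfig struct {
	RegisterToDeregisterDelay time.Duration // wait after registering new targets (assumed config)
	MaxDeregistrationWait     time.Duration // upper bound on deregistration (assumed config)
}

type resourceManager struct {
	cfg               reconcileConfig
	registerEndpoints func(ctx context.Context, tgARN string, eps []PodEndpoint) error
	deregisterTargets func(ctx context.Context, tgARN string, targets []TargetInfo) error
}

// reconcileTargets illustrates the requested ordering: register the new pods
// first, wait a configurable delay, then deregister the old pods.
func (m *resourceManager) reconcileTargets(ctx context.Context, tgARN string,
	toRegister []PodEndpoint, toDeregister []TargetInfo) error {
	if len(toRegister) > 0 {
		if err := m.registerEndpoints(ctx, tgARN, toRegister); err != nil {
			return err
		}
	}
	// Give the new targets time to pass health checks before removing the old ones.
	select {
	case <-time.After(m.cfg.RegisterToDeregisterDelay):
	case <-ctx.Done():
		return ctx.Err()
	}
	if len(toDeregister) > 0 {
		// Bound the deregistration by the configured maximum wait.
		deregCtx, cancel := context.WithTimeout(ctx, m.cfg.MaxDeregistrationWait)
		defer cancel()
		return m.deregisterTargets(deregCtx, tgARN, toDeregister)
	}
	return nil
}
```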

Describe alternatives you've considered We have implemented a canary deployment strategy instead of blue-green, and that helped us reduce the impact. But as part of the debugging, we figured out that the problem is not specific to NLB. We deal with thousands of applications, the majority of which use the blue-green deployment strategy, so we cannot implement canary for everyone. We would like to fix the problem at its core, which is inside the AWS Load Balancer Controller.

jwenz723 commented 3 months ago

NLB takes a lot more time to register new targets compared to ALB. Because of the current logic in the AWS Load Balancer Controller, the old pods are deregistered first and then the controller tries to register the new pods. The lag between these two steps is significant enough with NLB to cause application impact. The order of operations is the same with ALB, but ALB completes them much faster than NLB, so we don't notice any impact there.

Just wanted to clarify that the amount of time between when deregistration executes and when registration executes is negligible. The deregistration and registration events are essentially occurring at the same time (probably just a few milliseconds between these events).

The problem is that the old targets/pods enter a deregistering state in the NLB Target Group at the same time that the new targets/pods enter an initial state, and NLB Target Group registration takes 3 to 5 minutes (see this comment). This means that for 3 to 5 minutes there are 0 healthy targets in the NLB Target Group.

Regarding ALBs, I've observed during my testing that ALB Target Group registration typically takes 10-15 seconds. So during a blue-green deploy with an ALB there is a 10-15 second window when there are 0 healthy targets in the ALB Target Group.

Having a configurable amount of time to wait after target registration and before target deregistration, as mentioned by @dilipkraghupatruni, should help maintain healthy targets in the NLB/ALB Target Group during a blue-green flip.
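Going a step further than a fixed delay, the controller could poll the target group and only start deregistering once the newly registered targets report healthy. A rough sketch using the AWS SDK for Go v2 (the polling interval, the minHealthy threshold, and the overall structure are illustrative assumptions, not how the controller is implemented today):

```go
package sketch

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	elbv2 "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2"
	elbv2types "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2/types"
)

// waitForHealthyTargets polls DescribeTargetHealth until at least minHealthy
// targets in the target group report "healthy", or ctx is cancelled.
// Sketch only: the controller's actual reconcile loop is structured differently.
func waitForHealthyTargets(ctx context.Context, client *elbv2.Client, tgARN string, minHealthy int) error {
	ticker := time.NewTicker(10 * time.Second) // polling interval is an assumption
	defer ticker.Stop()
	for {
		out, err := client.DescribeTargetHealth(ctx, &elbv2.DescribeTargetHealthInput{
			TargetGroupArn: aws.String(tgARN),
		})
		if err != nil {
			return err
		}
		healthy := 0
		for _, d := range out.TargetHealthDescriptions {
			if d.TargetHealth != nil && d.TargetHealth.State == elbv2types.TargetHealthStateEnumHealthy {
				healthy++
			}
		}
		if healthy >= minHealthy {
			return nil // safe to start deregistering the old targets
		}
		select {
		case <-ticker.C:
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```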

k8s-triage-robot commented 1 week ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

sara-reddy-cb commented 1 week ago

/remove-lifecycle stale