ttony opened this issue 3 years ago
@ttony, the controller currently assumes exclusive ownership of the target groups. Target groups should not be shared across multiple controllers, to avoid race conditions.
/kind feature
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
We would also find this feature very useful in several scenarios.
Google Cloud offers a similar capability with multi-cluster Gateways: https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-multi-cluster-gateways
Waiting for further progress on this issue.
/remove-lifecycle rotten
Came looking for this. Migration from one cluster to many is the use case.
Here is the outline:
* maintain a ConfigMap for each TGB
* add all targets to be registered to the ConfigMap
* when deleting targets, limit deletion to the entries in the ConfigMap

This will enable multiple controllers to share the target groups.
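To make the bookkeeping concrete, here is a rough sketch of what such a per-TGB ConfigMap could look like. The name, keys, and values are purely illustrative and do not exist in the controller today; the idea is simply that each controller only deregisters targets recorded in its own ConfigMap.

```yaml
# Hypothetical bookkeeping object, one per TargetGroupBinding.
# A controller would record every target it registers here and, during
# cleanup, only deregister targets that appear in this list.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tgb-my-service-registered-targets   # illustrative name
  namespace: kube-system
data:
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/shared-tg/0123456789abcdef
  targets: |
    10.0.1.15:8080
    10.0.2.27:8080
```

Targets registered by another controller (or by hand) would never appear in this ConfigMap, so they would be left untouched.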
Maybe I’m saying something stupid, but couldn’t this information be recorded in the TargetGroupBinding status?
@kishorj @oliviassss Hey, we are really looking for a feature like this. Can you please provide an update? Any progress whatsoever would be really helpful. Thanks!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
I think this would be a very useful feature. Even a coarse-grained solution (e.g. attaching a specific AZ or subnet/CIDR to a cluster) would cover a lot of user scenarios and would be fairly easy to implement efficiently, IMHO.
According to the documentation, the TargetGroupBinding CRD can be used for the use case of an externally managed load balancer.
In our current scenario, the load balancer, listener, and target group were created manually. The target group is of type instance and contains one healthy EC2 instance.
When we create the TargetGroupBinding inside our EKS cluster, we see the exact same behaviour: the AWS Load Balancer Controller registers the worker nodes from the cluster (as expected), but it also drains/deregisters the aforementioned EC2 instance.
As far as I've understood the documentation, this should be exactly the use case mentioned, but the discussion within this issue gives me a different feeling. 🤔 I would be really thankful for help or feedback.
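For reference, the binding we apply looks roughly like this (the ARN, names, and port are placeholders):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-tgb                 # placeholder name
  namespace: default
spec:
  # Pre-created target group of type "instance" that already contains
  # one manually registered, healthy EC2 instance.
  targetGroupARN: arn:aws:elasticloadbalancing:eu-central-1:111122223333:targetgroup/manually-created-tg/0123456789abcdef
  targetType: instance
  serviceRef:
    name: my-service           # placeholder NodePort Service
    port: 80
```

Applying this registers the worker nodes as expected, but the manually registered instance is drained shortly afterwards.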
+1 to this feature request.
This would significantly simplify managing services across multiple k8s clusters: they could register targets in the same target group, giving a simple way to smoothly route traffic between several clusters.
The challenge for the controller is to know when to de-register the target when it's no longer present in the k8s cluster.
Maybe I’m saying something stupid, but couldn’t this information be recorded in the TargetGroupBinding status? This seems like a straight-forward path.
Another option is to allow storing that state in something like DynamoDB (given that this is the AWS load balancer controller) and to require that a DynamoDB table be provided if you want multi-cluster support.
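To make the status idea concrete, a hypothetical shape could be something like the following; the registeredTargets field does not exist in the current CRD and is shown only to illustrate the suggestion:

```yaml
# Hypothetical registeredTargets field on the TargetGroupBinding status;
# not part of the current API.
status:
  observedGeneration: 3
  registeredTargets:            # only targets this controller added, and
    - id: 10.0.1.15             # therefore the only ones it may deregister
      port: 8080
    - id: 10.0.2.27
      port: 8080
```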
There's an exciting note on https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2757
There is a feature request https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2173 to support sharing tgb across multiple cluster/controllers. It is in our roadmap for the next minor release.
@kishorj is this still planned? Do you know roughly when?
You can follow this in the meantime: https://aws.amazon.com/blogs/aws/new-application-load-balancer-simplifies-deployment-with-weighted-target-groups/
@SunitAccelup unless I’m mistaken, weighted target groups aren’t available on network load balancers.
Just wanted to add my voice to this need. Currently, we can only do canary deploys with an external ALB because that is the only thing that supports weighting of groups.
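In case it helps anyone else in the meantime: when the ALB is managed by the controller through an Ingress, that weighting can be expressed with the controller's actions annotation, roughly like this (service names, ports, and weights are placeholders, and this assumes an IngressClass named alb exists):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-ingress                      # placeholder
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Weighted forward action: 90% of traffic to v1, 10% to v2 (placeholders).
    alb.ingress.kubernetes.io/actions.weighted-routing: >
      {"type":"forward","forwardConfig":{"targetGroups":[
      {"serviceName":"my-service-v1","servicePort":"80","weight":90},
      {"serviceName":"my-service-v2","servicePort":"80","weight":10}]}}
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: weighted-routing      # must match the actions.<name> suffix
                port:
                  name: use-annotation
```

This only covers HTTP behind an ALB, though; as noted above, it does not help for NLB-fronted or non-HTTP services.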
I was hoping that we could set something like "shared" on the TargetGroupBinding spec, and then the AWS Load Balancer Controller would only track the instance IDs or IPs that it originally registered.
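Something along these lines, where `shared` is a purely hypothetical field used only to illustrate the idea (it is not part of the current spec):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-tgb                  # placeholder
spec:
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/shared-tg/0123456789abcdef
  targetType: ip
  serviceRef:
    name: my-service            # placeholder
    port: 80
  shared: true                  # hypothetical: only manage targets this controller registered
```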
We definitely need this, would love to see this feature get implemented.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
+1: Highly useful
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Encountered this same problem myself today when trying to split traffic between two clusters. Would be extremely helpful if the controller could support it.
+1 This feature will be useful to us.
I followed the discussions here and in the PRs that @Alex-Waring created. Adding details about my use case, with the hope that it will help further refine the solution.
We are looking at implementing blue-green deployments of a certain service that sits behind an NLB. The target type used is ip. The service is a StatefulSet, and we essentially need the capability to switch its traffic to another StatefulSet. Both will be hosted in the same EKS cluster.
The service in question is non-HTTP so ALB's weighted routing will not work.
Instead of multi-cluster support, I am looking for a solution that supports a single Kubernetes cluster and a single AWS Load Balancer Controller, but with two different TargetGroupBinding objects, one for each StatefulSet.
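Concretely, what I would like to be a supported configuration is two bindings in the same cluster pointing at the same target group, one per StatefulSet (ARN, names, and ports are placeholders); with the current ownership model the two bindings deregister each other's pods:

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: blue-tgb
spec:
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/shared-nlb-tg/0123456789abcdef
  targetType: ip
  serviceRef:
    name: my-statefulset-blue    # Service in front of the "blue" StatefulSet
    port: 9000
---
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: green-tgb
spec:
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/shared-nlb-tg/0123456789abcdef
  targetType: ip
  serviceRef:
    name: my-statefulset-green   # Service in front of the "green" StatefulSet
    port: 9000
```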
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Describe the bug
TargetGroupBinding allows specifying a target group ARN. I am using a TargetGroupBinding in both clusters to see if targets (pods) from both clusters show up in the target group.
Unfortunately, they do not. There is a race condition between the two clusters, and the result is that only one cluster is able to bind to the target group at any given time.

Expected outcome
The target group should show all the targets from both clusters.
Environment
Additional Context: