ArtunSubasi opened this issue 3 years ago
/kind feature
We are targeting this support in the v2.3.0 release.
Could you elaborate a bit more on the following requirement?
It should be possible to pass the ARN(s) or name(s) of an existing load balancer(s) to the aws-load-balancer-controller so that the aws-load-balancer-controller can start managing the existing load balancer instead of creating new ones.
Would it not suffice to specify the LB ARN via an annotation on the specific resources?
As for the desired state, simply uninstalling the aws-load-balancer-controller should not delete the target groups. The deletion of target groups has to be tied to the underlying ingress/service resources.
Sure. We are going for full automation with the GitOps concept combining Terraform and Flux. Ideally, Terraform would provision our K8s cluster and install Flux into the cluster. Then Flux would kick in, fetch all Kubernetes resource manifests from a git config repository and fill the cluster with life. Our Kubernetes resource manifests in the git repository cannot contain references to AWS resources via ARNs because the ARNs get generated during Terraform provisioning. Therefore, we decided to let Terraform install the ALB controller using a Helm Provider because Terraform can then pass the cluster name, VPC ID and ideally also the ARN of the load balancer to use.
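To make the workflow concrete, the cluster-specific values Terraform passes into the Helm release could be sketched roughly like this (a hypothetical values file; clusterName, region and vpcId are chart values of the eks/aws-load-balancer-controller chart, and all concrete values are placeholders):

```yaml
# Sketch: values Terraform could template into the
# eks/aws-load-balancer-controller Helm release.
clusterName: my-cluster           # e.g. from the aws_eks_cluster resource
region: eu-central-1
vpcId: vpc-0123456789abcdef0      # e.g. from the aws_vpc resource
```

The point being: everything generated (VPC ID, ARNs) stays on the Terraform side, while the manifests in git remain static.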
If it is easier to implement this approach using annotations, it would suffice if we were able to reference the name of the load balancer instead of the ARN. Putting ARNs in Kubernetes manifests limits automation possibilities since ARNs get generated. The name can be used to sync the ALB with static Kubernetes manifests since a name does not have generated parts.
As for uninstalling the aws-load-balancer-controller, it is not a hard requirement for me. I just assumed that, since the aws-load-balancer-controller creates the target groups in the beginning, it should also delete them. Otherwise, this may leave orphaned AWS resources which are not managed. Take the scenario:
Hi, sorry to interject, but I'm very interested in this feature because it would suit my use case perfectly. My goal is to create a single load balancer that will route traffic to two clusters based on the hostname. That means I don't want it created within Kubernetes of any cluster, but I'd like it managed. I'm fairly new to ALBs and this controller, and when I bumped into the same link listed above, the TargetGroupBinding custom resource got my hopes up. However, I'm a little confused about how to set this up. Reading through your posts seems to confirm the impression that I have to define the target groups and all the routing details in Terraform and just use the TargetGroupBinding to "attach" to the ALB target group, which is not ideal... Also, I think there is too little documentation for this use case. Did I understand everything correctly? When can we expect this feature of having the controller create and manage target groups on an existing ALB?
Thanks in advance!
Hey, @Erokos! I'm currently trying to solve the exact same issue for myself and I'm wondering if you managed to find an elegant solution already.
Hi, I have done it in a way where I use Terraform to create the ALB, its listeners and listener rules. After I deploy the aws-load-balancer-controller, I just define a TargetGroupBinding in which I reference the target group ARN defined in the Terraform code. This works, even though it's not ideal, because I'm not using an ingress in which the routing is defined; rather, it's defined in Terraform, and every time you add another app, i.e. endpoint, you need to write the listener rule, Route53 record and so on in Terraform.
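For anyone trying to reproduce this, a minimal sketch of such a binding (TargetGroupBinding is the controller's real CRD; the names and the ARN below are placeholders you would fill from your Terraform outputs):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-app-tgb
  namespace: default
spec:
  serviceRef:
    name: my-app   # existing Service whose endpoints become targets
    port: 80
  # ARN of the target group created in Terraform, e.g. exposed as a
  # Terraform output and templated into this manifest
  targetGroupARN: arn:aws:elasticloadbalancing:eu-central-1:111122223333:targetgroup/my-app/0123456789abcdef
```

With this in place the controller registers/deregisters the Service's pods in the externally managed target group, while the ALB, listeners and rules stay in Terraform.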
Any ETA for v2.3.0?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This seems to be addressed by group.name. If so, this ticket can be closed.
@FernandoMiguel No. group.name can only use an ALB that was created by the cluster, and that ALB will be deleted if all ingresses in the same group are deleted.
What this issue is asking for is the ability to use an ALB that was created outside the cluster.
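(For context, a sketch of how group.name is normally used: ingresses carrying the same group annotation share one ALB, but that ALB is still created and owned by the controller. Names and hosts below are placeholders.)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-a
  annotations:
    # All ingresses with the same group.name are merged onto one ALB,
    # but the ALB itself is created and deleted by the controller.
    alb.ingress.kubernetes.io/group.name: shared-alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - host: a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a
                port:
                  number: 80
```

So grouping helps consolidate in-cluster ingresses, but it does not cover the pre-provisioned, externally created ALB this issue asks for.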
I am still unsure whether this is possible with the latest released version (2.4.1) of the controller. The documentation for TargetGroupBinding states: "This will allow you to provision the load balancer infrastructure completely outside of Kubernetes but still manage the targets with Kubernetes Service." I am still unable to understand how to make this work. I've provisioned an ALB and target group via Terraform, but I cannot get the LB controller to automatically create the listener rules based on my ingress definition. Should this be possible with the ingress annotations? If so, are there any examples?
I also have an interest in this feature, to automate ALB-related things using Terraform (WAF, Route53, and so on).
See this comment
This could potentially be addressed with #3146
> After I deploy the aws loadbalancer controller, just define a targetgroupbinding in which I reference the target group arn defined in the terraform code.
@Erokos can you explain how you are passing the TG ARN to the TargetGroupBinding CRD definition? I am running into that problem now.
Problem description

If a workflow requires the provisioning of a load balancer outside of Kubernetes, the only way of doing that seems to be the usage of TargetGroupBinding custom resources (source). That also implies that the target groups must be managed outside of Kubernetes. Creating the load balancer beforehand and letting the aws-load-balancer-controller manage the target groups doesn't seem to be possible.

Desired state

- It should be possible to pass the ARN(s) or name(s) of existing load balancers to the aws-load-balancer-controller so that the aws-load-balancer-controller can start managing the existing load balancer instead of creating new ones. This could work similarly to the existing annotation alb.ingress.kubernetes.io/group.name.
- The aws-load-balancer-controller should be able to dynamically create and manage the target groups and bind them to the existing load balancer.
- If the underlying ingress/service resources are deleted, the aws-load-balancer-controller should delete the target groups, but not the existing load balancer.

Further Motivation

In a workflow where we automate everything without any manual intervention, the load balancers created by the aws-load-balancer-controller are hard to register with DNS entries in an automated way. The only plausible way of binding Route53 DNS entries to an automatically created load balancer seems to be the usage of external-dns. However, external-dns is too much for simple cases because it increases monthly costs and maintenance costs, as well as the required permissions in the cluster. If a load balancer can be provisioned beforehand, the Route53 entries can be automated easily without the need for external-dns. The aws-load-balancer-controller can still help with the heavy lifting of managing target groups, registering and deregistering new pods according to ingress resources.