kubernetes-sigs / aws-load-balancer-controller

A Kubernetes controller for Elastic Load Balancers
https://kubernetes-sigs.github.io/aws-load-balancer-controller/
Apache License 2.0
3.9k stars · 1.45k forks

Use an existing ALB #228

Open countergram opened 7 years ago

countergram commented 7 years ago

As a user of Terraform (or, substitute CloudFormation), I would like to use an existing ALB with the ingress controller so that I can keep my infrastructure automation centralized rather than in several different places. This also externalizes the various current and future associations between ALB and other parts of the infrastructure that may already be defined in TF/CFN (certs, Route53, WAF, CloudFront, other config).

sichiba commented 1 year ago

Hi there,

Is there any news regarding this use case? Is it available yet?

mdiez-modus commented 1 year ago

Hi @mfinkelstine and @sichiba

Yes, it's already available.

Here is a presentation on how I solved the problem, and sample code for you to use in that regard: https://github.com/marcosdiez/presentations/tree/master/2022-10-21-k8s-aws-alb-terraform-no-helm

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

marcosdiez commented 1 year ago

/remove-lifecycle stale

blakepettersson commented 1 year ago

This could potentially be addressed with #3146

kahirokunn commented 1 year ago

I need this feature.

shiyuhang0 commented 1 year ago

Is the NLB included in this proposal?

sichiba commented 1 year ago

For those of you looking to use the same ALB for multiple ingresses: you can achieve it by adding the annotation `alb.ingress.kubernetes.io/group.name: xxxxx` to every ingress you want to attach to the same ALB.

here's an example of the ingress manifest

```yaml
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: {{ .Values.networking.ingress.certificate }}
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: xxxxxx
  finalizers:
    - ingress.k8s.aws/resources
  name: {{ .Values.appName }}
  namespace: {{ .Values.namespace }}
spec:
  ingressClassName: alb
```

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

ffMathy commented 8 months ago

/remove-lifecycle stale

MIJOTHY-V2 commented 6 months ago

Also interested in this. Our use-case is that we'd like to make use of APIGateway through terraform, but HTTP APIs require a listener ARN to be supplied. Datasource lookups can lead to a lot of pain with "known at apply" forced recreations of e.g. VPC links. Hence we're creating a skeleton ALB and listeners through terraform, then handing off post-creation management of the ALB + listeners to the lb-controller. We'd also prefer deletion of the ingress resource to not cause deletion of the ALB + listeners, for the sake of clean terraform deletions (though that's not such a big deal as an out-of-band deletion can be wrangled, and preventing deletion can lead to issues with e.g. finalisers).

A setup we are currently trialling to work around the lack of first-class support is as follows:

This seems like it allows the ALB and listener(s) to be adopted by the aws-load-balancer-controller for the relevant ingress, and for all the resources to be updated but not deleted. So we are able to make use of the ALB resources in terraform without needing to rely on apply-time k8s datasource lookups (which has caused us a lot of pain). It feels a bit brittle in that it's depending on what could be seen as implementation details of the aws-load-balancer-controller. It would obviously be preferable to have this functionality be supported by the controller itself. But any sort of feedback on this approach would also be good to hear. We may be barking up the wrong tree by trying to have a terraformed APIGateway integrate with a k8s-managed ALB.

koleror commented 4 months ago

Any update on this? I'm also interested in pre-creating the ALB with Terraform and letting the controller handle (or fill) the target groups. My use case is that I'd like to put a CloudFront distribution in front of 2 ALBs to do path-based routing (I can't do it in the ALB itself, unfortunately, as I need to restrict some routes to a prefix list, which can only be done at the security-group level).

dwickr commented 4 months ago

@koleror have you looked into using the TargetGroupBinding resource? It allows you to create the ALB and TG in Terraform and then have the AWS LB Controller register your nodes w/ the TG.
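To make the suggestion concrete, a minimal `TargetGroupBinding` might look like the sketch below. The resource names, namespace, and target group ARN are placeholders; the ARN would come from your Terraform output:

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-app-tgb          # hypothetical name
  namespace: my-namespace   # hypothetical namespace
spec:
  serviceRef:
    name: my-app            # existing in-cluster Service whose endpoints get registered
    port: 80
  # ARN of the target group created in Terraform (placeholder value)
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef
  targetType: ip            # must match the target group's target type (ip or instance)
```

With this in place, the controller keeps the Terraform-created target group populated with the Service's endpoints, while the ALB and listeners themselves stay under Terraform's control.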

koleror commented 4 months ago

> @koleror have you looked into using the TargetGroupBinding resource? It allows you to create the ALB and TG in Terraform and then have the AWS LB Controller register your nodes w/ the TG.

Will give it a try, thanks!

Matthieulvt commented 3 months ago

I'm currently facing this issue: I created an ALB with a target group in Terraform, and once I deploy the ALB controller, it creates another ALB.

Based on the documentation I can see that:

> The ALB for an IngressGroup is found by searching for the AWS tag `ingress.k8s.aws/stack` with the name of the IngressGroup as its value. For an implicit IngressGroup, the value is `namespace/ingressname`.
>
> When the groupName of an IngressGroup for an Ingress is changed, the Ingress will be moved to a new IngressGroup and be supported by the ALB for the new IngressGroup. If the ALB for the new IngressGroup doesn't exist, a new ALB will be created.
>
> If an IngressGroup no longer contains any Ingresses, the ALB for that IngressGroup will be deleted and any deletion protection of that ALB will be ignored.

This explains the behavior: the ALB controller first checks, via the ALB tags, whether an ALB already exists for the specified IngressGroup, and if it does not, a new ALB is created. So in order to point the ALB controller at an existing ALB, you need to set those tags on the ALB in your Terraform code (or wherever you manage your infrastructure).

In my case, I noticed that I had forgotten to set tags on my main ALB, and the newly created ALB got the following tags:

| Tag | Value |
| --- | --- |
| `elbv2.k8s.aws/cluster` | `prod-eks` |
| `ingress.k8s.aws/resource` | `LoadBalancer` |
| `ingress.k8s.aws/stack` | `<Namespace_Kube>/<Ingress_Name>` |

If you're unsure which tags are needed, you can let the ALB controller create a new ALB and copy its tags onto your main ALB.

After I changed the tags, the ALB controller worked with the ALB I specified. I'm also using TargetGroupBinding, so it uses both the HTTP and HTTPS listeners on my ALB and updates them when needed.

ascopes commented 3 months ago

Is there a workaround with TargetGroupBinding if you need to manage an entire ingress resource with an existing load balancer (so that the concern of routing can be kept within Kubernetes itself rather than needing IaC modification for each new target service)?

hlascelles commented 3 months ago

You can set the ALB up to point at port 80 in the cluster, which is handled by traefik on any node. Thus all Ingress management is in-cluster, with no IaC changes for new services. E.g. in CDK:

    const targetGroup = new ApplicationTargetGroup(this, "ClustersAlbTargetGroup", {
      vpc: vpc,
      targetGroupName: `ClustersAlbTG`,
      port: 80,
      protocol: ApplicationProtocol.HTTP,
      targetType: TargetType.IP,
      targets: [],
      // Test the Traefik ping endpoint
      healthCheck: {
        port: "9000",
        path: "/ping"
      }
    });

Of course, you will have to do more work to get ALB-to-cluster communication over SSL.
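With traefik receiving the traffic on port 80, each new service is then exposed with an ordinary in-cluster Ingress and no IaC change. A sketch of such an Ingress, with hypothetical service and host names and assuming a `traefik` ingress class:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service          # hypothetical name
spec:
  ingressClassName: traefik  # routed by the in-cluster traefik, not the ALB controller
  rules:
    - host: my-service.example.com   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service     # hypothetical Service behind traefik
                port:
                  number: 80
```

The ALB only ever knows about traefik's port 80 target group, so routing decisions stay entirely inside the cluster.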

nethershaw commented 2 months ago

> For those of you looking to use the same ALB for multiple ingresses: you can achieve it by adding the annotation `alb.ingress.kubernetes.io/group.name: xxxxx` to every ingress you want to attach to the same ALB.
>
> here's an example of the ingress manifest
>
> ```yaml
> kind: Ingress
> metadata:
>   annotations:
>     alb.ingress.kubernetes.io/certificate-arn: {{ .Values.networking.ingress.certificate }}
>     alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
>     alb.ingress.kubernetes.io/scheme: internet-facing
>     alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
>     alb.ingress.kubernetes.io/target-type: ip
>     alb.ingress.kubernetes.io/group.name: xxxxxx
>   finalizers:
>     - ingress.k8s.aws/resources
>   name: {{ .Values.appName }}
>   namespace: {{ .Values.namespace }}
> spec:
>   ingressClassName: alb
> ```

This does nothing of the kind if the controllers that would reference the common ALB are on separate Kubernetes clusters, which is exactly the scenario where it would be useful for migrating workloads.

seifrajhi commented 1 day ago

+1. This is still relevant; I hope to see this feature implemented soon.

hlascelles commented 1 day ago

AWS has now published a post that shows how to do much of the deployment I described earlier: https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/228#issuecomment-2211076818

In their post they do talk about using the AWS Load Balancer Controller; however, we do not use it (as we do not need/want the cluster to create the ALB outside of infra-as-code) and instead have the traffic picked up by traefik in-cluster.

https://aws.amazon.com/blogs/containers/patterns-for-targetgroupbinding-with-aws-load-balancer-controller/

[Diagram from the AWS post: "When Ingress is not enough, what is TargetGroupBinding?"]

This does all work, and fulfils the goal of this issue, but it is unnecessarily difficult... It would be good to have one (or a small number of) lines of config to get this working. I feel a cluster should not be able to create infrastructure, including ALBs.