kubernetes-sigs / aws-load-balancer-controller

A Kubernetes controller for Elastic Load Balancers
https://kubernetes-sigs.github.io/aws-load-balancer-controller/
Apache License 2.0
3.93k stars, 1.46k forks

Create option to reuse an existing ALB instead of creating a new ALB per Ingress #298

Closed julianvmodesto closed 3 years ago

julianvmodesto commented 6 years ago

I read in this comment https://github.com/coreos/alb-ingress-controller/issues/85#issuecomment-299504058 that host-based routing was released for AWS ALBs shortly after ALB Ingress Controller was released.

It would be pretty cool to have an option to reuse an ALB for an Ingress via annotation. I'd be interested in contributing towards this, but I'm not sure what's needed to make it feasible.

pperzyna commented 6 years ago

@bigkraig Any update?

mwelch-ptc commented 6 years ago

Wait... I guess I missed this in reading the documentation. Are you saying that every Ingress created deploys its own ALB? So for our 60 or so ingresses we'd end up with 60 ALBs? What about different host names within the same ingress? Does that at least reuse the same ALB?

patrickf55places commented 6 years ago

@mwelch-ptc That is correct. There is a 1-to-1 mapping of Ingress resources to ALBs, even if host names are the same.

kurtdavis commented 6 years ago

Seems to be fairly costly. We have looked at other solutions due to this issue.

bigkraig commented 6 years ago

What are everyone's thoughts on how to prioritize the rules if a single ALB spans ingress resources and potentially even namespaces? I can see how, in larger clusters, multiple teams may accidentally take the same path.

ghost commented 6 years ago

> What are everyone's thoughts on how to prioritize the rules if a single ALB spans ingress resources and potentially even namespaces? I can see how, in larger clusters, multiple teams may accidentally take the same path.

This is a general Kubernetes ingress issue, not specific to this ingress controller. I think the discussion of this should be had in a more general forum instead of an issue against this controller.

whithajess commented 6 years ago

I'm tempted to say that this is not a general Kubernetes ingress issue.

The load balancers Kubernetes already supports are Layer 4, and they are paired with ingress controllers that handle Layer 7 inside the cluster (so you can use one load balancer and deal with Layer 7 routing once traffic enters the cluster).

An ALB is Layer 7 and handles routing before traffic reaches Kubernetes, so we cannot assume Kubernetes will change for this use case.

As this becomes more standard I think this could change. GCE suggests "If you are exposing an HTTP(S) service hosted on Kubernetes Engine, HTTP(S) load balancing is the recommended method for load balancing.", and I would imagine that as EKS takes off it will suggest the same.

spacez320 commented 6 years ago

We can already do this in general by having a single ingress resource, although whatever deployment scheme you're using for Kubernetes then has to adjust to that. It's also worth pointing out that the Kubernetes Ingress documentation literally states:

> An Ingress allows you to keep the number of loadbalancers down to a minimum.

I think it would be really nice to have the ability to do this in a nice way.

bigkraig commented 6 years ago

@spacez320 I read that as saying you can have an ingress with multiple services behind it: a single load balancer for many services, as opposed to a load balancer per service.

There is still the issue that the IngressBackend type has no way to reference a service in another namespace. I think that until the ingress resource spec is changed, there isn't a correct way of implementing something like this.

patrickf55places commented 6 years ago

@bigkraig I don't think this issue should be closed. The issue isn't about having a single Ingress resource that can span multiple namespaces. It is about having multiple Ingress resources (possibly across different namespaces, but not necessarily) that all use the same AWS Application Load Balancer.

bigkraig commented 6 years ago

@patrickf55places got it; within a namespace this is possible with the spec, but I am still unsure how we would organize the routes or resolve conflicts.

spacez320 commented 6 years ago

@bigkraig Well, I think it's both, and I think that's what @patrickf55places meant by "possibly across different namespaces, but not necessarily". We should be able to define an Ingress anywhere and share an Amazon load balancer, I think.

I understand if there's limitations in the spec, though. Should someone go out and try to raise this issue with the wider community? Is that possibly already happening?

natefox commented 6 years ago

What about using something similar to how nginx ingress handles it? https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/mergeable-ingress-types

> Multiple minions can be applied per master as long as they do not have conflicting paths. If a conflicting path is present then the path defined on the oldest minion will be used.

joegoggins commented 6 years ago

I was glad to find this GitHub issue, and also bummed that it seems like it will be a long time before this gets implemented. It smells like there is a lot of complexity associated with the change and potentially not enough resources to dig into it. I'm assuming it will be many months, so our engineering team is going to switch to a different load-balancing ingress strategy whose AWS costs scale in line with our needs. If that assessment feels wrong, please let me know.

jakubkulhan commented 6 years ago

I've created another ingress controller that combines multiple ingress resources into a new one: https://github.com/jakubkulhan/ingress-merge

Partial ingresses are annotated with kubernetes.io/ingress.class: merge; the merge ingress controller processes them and outputs a new ingress annotated with kubernetes.io/ingress.class: alb. The ALB ingress controller then takes over and creates a single AWS load balancer.
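For concreteness, a partial ingress handed to the merge controller might look like the sketch below. Only the `kubernetes.io/ingress.class: merge` annotation comes from the project description above; every other name here is illustrative, not from ingress-merge's docs:

```yaml
# Hypothetical partial ingress picked up by the merge controller.
# The app name, host, service, and port are all illustrative.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-a
  annotations:
    kubernetes.io/ingress.class: merge   # processed by ingress-merge
spec:
  rules:
    - host: app-a.example.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: app-a
              servicePort: 80
```

The merge controller would combine several such resources and emit a single ingress with `kubernetes.io/ingress.class: alb`, which the ALB ingress controller then reconciles into one load balancer.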

marcosdiez commented 5 years ago

Hey, I might have solved your problem in this PR: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/pull/830 . Testing and feedback is welcome :)

kainlite commented 5 years ago

I'm currently using ingress-merge, and while it works, I'm having issues with the health checks: the services I'm exposing do different things by default, and we don't have a standard health-check URL for all microservices. Do you have a solution for this? I think the limitation comes from aws-alb-ingress-controller rather than ingress-merge, but if there is a way to have different health checks, that would be awesome. Thanks everyone for your effort.

kainlite commented 5 years ago

@fygge on Slack gave me the answer:

> You can put the health check annotation on the service instead of on the ingress resource, thereby having one health check per target group / service.

Tested, and it works fine.
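A sketch of what that per-service health check could look like, assuming the controller honors its healthcheck annotations when placed on the Service backing a target group (as @fygge suggested); the service name, path, and ports are illustrative:

```yaml
# Illustrative Service carrying its own ALB health-check path,
# so each target group gets a check suited to its microservice.
apiVersion: v1
kind: Service
metadata:
  name: my-microservice   # hypothetical name
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: /internal/healthz
spec:
  type: NodePort
  selector:
    app: my-microservice
  ports:
    - port: 80
      targetPort: 8080
```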

mlsmaycon commented 5 years ago

> What about using something similar to how nginx ingress handles it? https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/mergeable-ingress-types
>
> Multiple minions can be applied per master as long as they do not have conflicting paths. If a conflicting path is present then the path defined on the oldest minion will be used.

ALB does that with listener rule priorities, where a new rule gets a lower priority than existing rules (excluding the default rule). The problem arises if you set a priority number that conflicts with an existing rule.

Maybe a new kind of ingress controller is called for here: one that, given the Ingress object, controls only target groups and listener rules and attaches them to an ALB created in the controller configuration (or on the first object request). This is what this issue is asking of the current controller, but for larger organizations it could bring complexity with path or host-header rules, causing problems with overlapping Ingress objects.

tdmalone commented 5 years ago

Relevant: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/issues/914

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

jzangari commented 5 years ago

I'm working on a multi-tenant application where each customer will have their own namespace running a copy of the application. We'd still like to use a single ingress point for the cluster, using an ALB to route across namespaces rather than an ALB in each namespace. So I'm bumping this before it goes completely stale.

nadundesilva commented 5 years ago

I am also working on a project where we have many Helm charts, all of which contain their own ingress resources. It doesn't make sense to bring everything together into a single ingress resource because of the Helm-chart-level grouping of resources.

It would be really nice if we could share a single ALB between ingresses. I would really love to see this feature soon.

polothy commented 5 years ago

/remove-lifecycle rotten

kuznero commented 5 years ago

It would be sad if this goes unnoticed.

oceaneLonneux commented 5 years ago

I am currently implementing an ALB with an EKS cluster. We have dozens of services and multiple environments, so a separate ALB for each is not financially viable.

I'm a bit surprised this hasn't been discussed more, as to me it is a real problem. I will investigate and hopefully come back with how we do it. I will try the merge approach mentioned earlier!

hectoralicea commented 5 years ago

I have a similar situation: my customer wants to create an ALB outside of EKS, to load a certificate with a short fixed host name in place of the long random ALB host name. Then, when the service is created in EKS using Helm, it would be exposed using an internal ELB. The idea was then to link the ALB to the ELB. This is proving to be much more difficult than I envisioned. Any ideas would be appreciated.

email2liyang commented 5 years ago

I'm having the same issue here; could this get noticed by AWS developers? Alternatively, is there a way for a single ingress YAML file to reference services in different namespaces?

Thanks

M00nF1sh commented 5 years ago

@email2liyang You can try out v1.2.0-alpha.1, which supports multiple Ingress reuse same ALB. (Note: it's not production ready and not compatible with older version yet)

email2liyang commented 5 years ago

@M00nF1sh, thanks for the quick reply. I'm using the ingress controller in a production environment, and I was very happy when I discovered it, because I could combine the ALB ingress controller and ACM out of the box to provide HTTPS support and migrate from ingress-nginx to the ALB ingress controller. I even wrote a blog post recording my process for doing it. But in the end, the issue that ALB ingress cannot reference services across namespaces became the last hurdle: I do not want to create one ALB per service, or even one ALB per namespace.

After v1.2.0 is production-ready, I will definitely try the ALB ingress controller again.

Thanks

rparsonsbb commented 5 years ago

> I have a similar situation where my customer wants to create an ALB outside of EKS to load the certificate with short fixed host and alternate long random ALB host name to the ALB. Then when the service is created in EKS using helm it would be exposed using an internal ELB. The idea was to then link the ALB to the ELB. This is proving out to be much more difficult than I envisioned. Any ideas would be appreciated.

@hectoralicea It is not possible to point an ALB at an ELB; this is a limitation of AWS functionality.

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#target-type

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

eytanhanig commented 4 years ago

/remove-lifecycle stale

thunder-spb commented 4 years ago

Any updates here?

styk-tv commented 4 years ago

I will go a bit off-topic here: ALB vs. ELB (Classic). I have been using ELB-Classic for the past few years, running very many applications: a single ingress, wildcard certificates terminated at the ELB, and multi-node ingress. I have tried HAProxy, Traefik, and Nginx (the default) with success. If you really push for the (differently priced) ALB, then it's really your issue; if you just want to solve the problem and move on, going back to ELB-Classic may not be a bad idea. It works! Unless you have some extreme examples of why you need multiple cloud load balancers. There are always issues with a small number of badly written applications (reverse proxying, session stickiness, OAuth redirects), but for the most part you can still get them to work on a single LB.

In the end it's a cost-analysis issue. If you want to reduce 5 load balancers to 1 and your time is expensive, then don't do it. If you have 1000 load balancers and you can reduce them to 3, it might be worthwhile. You create the ELB-Classic outside of Ingress-type automation; then you can still map all your services through it. Just don't specify the load balancer type, and manually point the ELB-Classic at all ingress NodePorts.

You only need to do it once. Then, once set up, all your ingress definitions will just intercept the "host" header and send traffic to the appropriate service. And the best part: once you have the ELB-Classic in place with SSL wildcard termination (for those cool services or development clusters), every plain ingress you create (with no TLS nor load balancer definitions) just works. No work required.

And you can take advantage of the default backend and take it further into the app, where you can handle an unlimited number of subdomains on a single app, all through valid SSL and a single LB. I have been doing this since 2004, and now with Kubernetes it's just fun.

Don't ask Amazon for advice. We love them, but they will just come in, do a session with all your developers, and convince you that it's perfectly OK to use as many ALBs as you want. They will not tell you it's required, but they will plaster all possible documentation with examples where a single YAML "loadbalancer" section looks so innocent to you and makes their bottom line very, very happy.
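The "plain ingress" in this setup might look like the sketch below, assuming a manually managed ELB-Classic that terminates the wildcard certificate and forwards to the in-cluster controller's NodePorts; every name, host, and port here is illustrative:

```yaml
# Illustrative plain ingress: no TLS section, no load balancer annotations.
# The externally managed ELB-Classic forwards traffic to the ingress
# controller's NodePort; from there, routing happens on the Host header.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: my-app.dev.example.com
      http:
        paths:
          - backend:
              serviceName: my-app
              servicePort: 80
```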

ratulbasak commented 4 years ago

Any update on this issue?

mayconritzmann commented 4 years ago

I have a similar problem.

Each time an ingress is created in the EKS cluster, a new ALB comes up in my environment.

Anyone else with this problem?

bcmedeiros commented 4 years ago

@ritzmann94 this is not a "problem", it's a "lack of a feature", I'd say, and that's what this issue is about. Everyone here is in the same boat.

I personally have been using an NLB with ingress-nginx; all my ingress objects get merged into a single nginx and share the same NLB. I know it's not officially supported, but I got sick of waiting for ALB support.

mayconritzmann commented 4 years ago

Hello @brunojcm, when you said "I know that it is now officially supported, but I was tired of waiting for ALB's support":

I didn't understand that very well; at least in the official documentation we don't have any reference to using just one ALB.

bcmedeiros commented 4 years ago

> Hello @brunojcm, when you said: "I know that it is now officially supported, but I was tired of waiting for ALB's support".
>
> I didn't understand it very well, at least in the official documentation we don't have any reference to use just 1 ALB.

Sorry, I got auto-corrected: I meant "not", not "now". I was talking about NLB support, the one I ended up using, not ALB. ALB is supported, just not sharing the same ALB across multiple ingresses. NLB support is still in alpha/beta (sorry if I'm not up to date here), but it works with ingress-nginx and supports multiple ingresses sharing the same NLB.
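For reference, the NLB + ingress-nginx arrangement described here is typically wired up by annotating the controller's Service; a minimal sketch (names, namespace, and labels are illustrative, and the annotation reflects the in-tree cloud provider's alpha NLB support of that era):

```yaml
# Illustrative Service exposing ingress-nginx through a single NLB.
# All ingress objects handled by this controller then share that NLB.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
```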

mayconritzmann commented 4 years ago

Understood. I need to deploy AWS WAF in my environment, and WAF on an NLB with ingress-nginx is not supported.

Let's wait for this new feature to arrive.

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

Tzrlk commented 4 years ago

/remove-lifecycle stale

msolimans commented 4 years ago

Any updates?

vprus commented 4 years ago

For the avoidance of doubt, this issue is actually fixed in a 1.2 alpha release, specifically docker.io/amazon/aws-alb-ingress-controller:v1.2.0-alpha.1, and I have a dozen ingresses sharing a single ALB. But that alpha release was made a year ago. It would be very nice to get some clarity on whether it's coming in any official form.

kirkdave commented 4 years ago

It would be great to get this feature either merged into the latest code or to revitalise v1.2.

I would love to use this feature, but I also want to be able to use IAM Roles for Service Accounts in EKS, and having tested the tag v1.0.0-alpha.1, that support hasn't been merged in (it works great on v1.1.8).

dmanchikalapudi commented 4 years ago

> For avoidance of doubt, this issue is actually fixed in a 1.2 alpha release, specifically docker.io/amazon/aws-alb-ingress-controller:v1.2.0-alpha.1 and I have a dozen of ingresses sharing single ALB. But, that alpha release was made a year ago. It would be very nice to get some clarity whether it's coming in any official form.

How did you get that to work? Can you share the ingress definitions for a couple of applications? Are you getting it to work by specifying the same ingress name but defining a different rule in each? Also, are you using a package/deployment manager like Helm 3? I believe it has validations that prevent deploying an existing resource (the request to create the ingress does not even get to k8s).

vprus commented 4 years ago

Here's a complete helm template used to define ingress for one particular application.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/group.name: analytics
  labels:
    app: {{ .Release.Name }}
spec:
  rules:
    - host: {{ .Release.Name }}.somecompany.a
      http:
        paths:
          - path: /*
            backend:
              serviceName: {{ .Release.Name }}-jobmanager-external
              servicePort: 8081
```

We use Helm 3, but the exact same definition worked in Helm 2 too. The key part is the 'group.name' annotation above. Two Helm releases that use this template result in two ingress objects with different names, and then in two target groups used by a single ALB. Other applications define their ingresses in a similar way and also share the same ALB. The deployment of alb-ingress-controller basically uses default options, except for:

```yaml
image: docker.io/amazon/aws-alb-ingress-controller:v1.2.0-alpha.1
```
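To illustrate the sharing, a second, unrelated ingress only needs to carry the same `group.name` to land on the same ALB; the names and host below are hypothetical, not from the deployment described above:

```yaml
# Hypothetical second ingress: a different name and backend, but the
# same group.name, so the controller attaches it to the same ALB.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: reports-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/group.name: analytics   # same group => same ALB
spec:
  rules:
    - host: reports.somecompany.a
      http:
        paths:
          - path: /*
            backend:
              serviceName: reports-external
              servicePort: 8081
```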
dmanchikalapudi commented 4 years ago

> Here's a complete helm template used to define ingress for one particular application. […] The key part is 'group.name' metadata above. […]

Thanks! I will give it a try and confirm whether it works for our use cases.

Btw, https://github.com/kubernetes-sigs/aws-alb-ingress-controller/issues/914 states that this is not production-ready and should not be used. Any idea when it will be available as an official release?