kubernetes-sigs / aws-load-balancer-controller

A Kubernetes controller for Elastic Load Balancers
https://kubernetes-sigs.github.io/aws-load-balancer-controller/
Apache License 2.0

Installing multiple instances of the ALB controller with different configuration and ingress class into the same namespace (kube-system) #2233

Open dnutels opened 3 years ago

dnutels commented 3 years ago

Describe the bug

I am trying to install multiple, differently configured instances of the ALB controller into the kube-system namespace. It won't work because the first instance claims ownership of the aws-load-balancer-tls Secret in the target namespace via the meta.helm.sh/release-name annotation and prevents the second instance from doing the same.

Steps to reproduce

values.yml for the first instance:

clusterName: my-cluster

ingressClass: alb-a
watchNamespace: app

fullnameOverride: alb-a-controller

serviceAccount:
  create: false
  name: aws-load-balancer-controller

defaultTags:
  ingressClass: alb-a

values.yml for the second instance:

clusterName: my-cluster

ingressClass: alb-b
watchNamespace: app

fullnameOverride: alb-b-controller

serviceAccount:
  create: false
  name: aws-load-balancer-controller

defaultTags:
  ingressClass: alb-b

Then the instances are deployed using:

helm upgrade -i alb-a-controller eks/aws-load-balancer-controller -n kube-system -f alb-a/values.yml
helm upgrade -i alb-b-controller eks/aws-load-balancer-controller -n kube-system -f alb-b/values.yml

Expected outcome

Both controller instances are deployed and each handles separate ingress class.

Actual outcome

Error: rendered manifests contain a resource that already exists. 
Unable to continue with install: Secret "aws-load-balancer-tls" in 
namespace "kube-system" exists and cannot be imported into 
the current release: invalid ownership metadata; annotation 
validation error: key "meta.helm.sh/release-name" must equal 
"alb-b-controller": current value is "alb-a-controller"

Environment

Additional Context:

dnutels commented 3 years ago

Additional information...

Installing the alb-a and alb-b controllers into different namespaces doesn't work either. Installing alb-a into the infra-a namespace worked fine, but installing alb-b into the infra-b namespace failed:

Error: rendered manifests contain a resource that already exists. 
Unable to continue with install: MutatingWebhookConfiguration 
"aws-load-balancer-webhook" in namespace "" exists and cannot 
be imported into the current release: invalid ownership metadata; 
annotation validation error: key "meta.helm.sh/release-name" 
must equal "alb-b-controller": current value is "alb-a-controller"; 
annotation validation error: key "meta.helm.sh/release-namespace" 
must equal "infra-b": current value is "infra-a"

It appears that the webhook configuration is non-shareable and cluster-scoped (hence the empty namespace in the error), so it conflicts across releases even when the controllers live in different namespaces?

M00nF1sh commented 3 years ago

@dnutels The current controller is designed to run as a single deployment, and we have updated our docs to reflect that: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/configurations/

What's your use case for running multiple deployments instead of a single one?

dnutels commented 3 years ago

Thank you for clarifying, I somehow missed that one. It's somewhat academic.

The main use case is to be able to configure the controller differently for different namespaces/ingress classes. I realize that at this point most (but not all) of the controller configuration can be overridden at the Ingress level.

I would imagine that having different service accounts might be useful...

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

willthames commented 2 years ago

Our use case is to have an alb-internal and an alb-external IngressClass, and then set the scheme of the ALB in the associated IngressClassParams so that we don't have to annotate every single Ingress with ALB annotations.

At the moment the external ALB controller creates the stack for an external ingress and then the internal ALB controller removes it all again (I did think that was a bug but this issue makes it clear that it's more of an unimplemented feature)
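This setup can be sketched with the controller's IngressClassParams CRD, which lets the scheme live on the class instead of per-Ingress annotations. The class name below is a placeholder; the spec.scheme field and the ingress.k8s.aws/alb controller name are from the controller's docs:

```yaml
# Sketch: an "internal" ingress class whose ALBs are always internal.
apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  name: alb-internal
spec:
  scheme: internal            # every Ingress of this class gets an internal ALB
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb-internal
spec:
  controller: ingress.k8s.aws/alb
  parameters:
    apiGroup: elbv2.k8s.aws
    kind: IngressClassParams
    name: alb-internal
```

An Ingress then only needs spec.ingressClassName: alb-internal, with no scheme annotation.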

willthames commented 2 years ago

/remove-lifecycle rotten

willthames commented 2 years ago

Another reason for wanting a separate ingress class is for use with external-dns

We use separate external-dns controllers, one that controls private DNS and one that controls public DNS, so that the controller knows which zones to manage records

We use annotation filters (and more likely ingress class filters soon) to associate a particular ingress class with a particular external-dns controller. Therefore we can currently manage only one of public or private ALB ingresses.
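As an illustration of the pairing described above, a "private" external-dns instance might carry args like the following (external-dns's --annotation-filter flag takes label-selector syntax; the class name and domain are placeholders):

```yaml
# Hypothetical container args for the private-DNS external-dns deployment.
args:
  - --source=ingress
  - --provider=aws
  - --annotation-filter=kubernetes.io/ingress.class in (alb-internal)
  - --domain-filter=internal.example.com   # placeholder private zone
```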

willthames commented 2 years ago

Yet a third reason is if you need a network load balancer and an application load balancer for different workloads (for example, doing TCP passthrough for one workload vs needing WAF protection for another workload)

visit1985 commented 2 years ago

> Our use case is to have an alb-internal and an alb-external IngressClass, and then set the scheme of the ALB in the associated IngressClassParams so that we don't have to annotate every single Ingress with ALB annotations.
>
> At the moment the external ALB controller creates the stack for an external ingress and then the internal ALB controller removes it all again (I did think that was a bug but this issue makes it clear that it's more of an unimplemented feature)

We have this use case as well. I got the alb-ingress helm chart (v2.4.1) deployed twice by specifying nameOverride and fullnameOverride with a suffix. Only one of the controllers is doing all the work, because both deployments still share the same ConfigMap for leader election. Looks like this works for our ALB + EKS Fargate setup only. We'd hit #2185 otherwise.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

visit1985 commented 2 years ago

/remove-lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

visit1985 commented 1 year ago

We are still here… /remove-lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

dim-at-ocp commented 1 year ago

Still an issue. Confirming the use-case - public vs. internal ALBs - and the need for separate configuration.

In fact, instead of installing multiple controller instances a more elegant solution could probably be introducing support for multiple ingressClass definitions handled by a single controller (helm release).

Thank you!

/remove-lifecycle stale

soluwalana commented 1 year ago

I'm having a similar issue. The inability to specify watchNamespace for more than one namespace, combined with the inability to create multiple deployments, makes it impossible to deploy load balancers for only two specific, externally facing namespaces.

kmoorejr9 commented 1 year ago

Would like to echo the sentiments above, particularly for using this to manage public/private DNS alongside external-dns.

acjohnson commented 1 year ago

We currently work around this limitation with a script that temporarily removes the MutatingWebhookConfiguration and ValidatingWebhookConfiguration objects, creates TargetGroupBindings that the secondary aws-load-balancer-controllers use, then recreate the MutatingWebhookConfiguration and ValidatingWebhookConfiguration objects...

+1 for actual multiple aws-load-balancer-controller support. Also thank you @M00nF1sh and others who have continually improved this project!

dbfreem commented 1 year ago

would love to see this!

flaviomoringa commented 11 months ago

Hi,

we have EXACTLY this need: running an internal-lb and an external-lb controller, and then allowing our users to choose the correct ingress class pointing to the correct LB.

We are using the nginx-ingress controller and were evaluating moving to the AWS controller, which we thought would be really easy, until we found this issue :-(

This is a total deal-breaker for us, and we will not be able to move forward to replace nginx-controller due to this.

Even worse, the official documentation mentions that AWS is working on this: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/deploy/configurations/ (see the limitation warning on the last line at the top)

But it then points to this issue that is closed: https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2185

How is this closed if it's not resolved? Making this work should be a priority; it makes no sense not to be able to have multiple LB configurations running.

Hope we get news about this soon.

flaviomoringa commented 11 months ago

I think the TargetGroupBinding issue could be solved if the controller supported some sort of required annotation or label on the TargetGroupBinding objects. That way it could easily determine which TargetGroupBindings go with which aws-load-balancer-controller installation.

This seems a simple and nice approach, as mentioned here.
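The proposal could look something like this. Everything below is hypothetical: the controller-instance label is NOT an existing controller feature, just an illustration of the idea (the serviceRef/targetGroupARN fields are the CRD's real spec; the ARN is a placeholder):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-service-tgb
  labels:
    # Hypothetical selector label: which controller installation owns this binding.
    elbv2.k8s.aws/controller-instance: alb-b-controller
spec:
  serviceRef:
    name: my-service
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:...   # placeholder ARN
```

Each controller instance would then reconcile only the bindings matching its own label, ignoring the rest.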

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

FernandoMiguel commented 8 months ago

/remove-lifecycle stale

jcogilvie commented 7 months ago

Chiming in that I would also benefit from the ability to serve multiple IngressClasses from one deployment for internal vs external purposes.

flaviomoringa commented 7 months ago

Amazing how this still seems to be a low-priority ticket... and we are paying to use EKS... this is really sad :-(

ns-mkusper commented 6 months ago

I have the same internal/external ingressClass need. Would love to see some movement on this.

nd-at-globetel commented 6 months ago

I have the same issue as well. I should be able to create both internal and external ingress. Any updates?

nd-at-globetel commented 6 months ago

> Still an issue. Confirming the use-case - public vs. internal ALBs - and the need for separate configuration.
>
> In fact, instead of installing multiple controller instances a more elegant solution could probably be introducing support for multiple ingressClass definitions handled by a single controller (helm release).
>
> Thank you!
>
> /remove-lifecycle stale

@dim-at-ocp Does the installation of multiple controller instances work? (e.g. having an internal ingress and an external ingress)

aws-lb-controller instance#1 -> internal ingress (private ALB)
aws-lb-controller instance#2 -> external ingress (public ALB)

Thank you!

visit1985 commented 6 months ago

I guess you still hit #2185 with that approach.

nd-at-globetel commented 6 months ago

@visit1985 Thanks for the response. I didn't notice that there's also an open issue https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2185 (Allow multiple controller deployment per cluster) regarding deployment of multiple controller instances within a single cluster.

I thought it would work as a workaround in the meantime while this issue https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2233 is still open. It kinda sucks; we have a use case for exposing an internal ingress and an external ingress using aws-lb-controller.

visit1985 commented 6 months ago

I can give an update on my implementation: we are currently installing a single controller on EKS Fargate via the Helm chart with createIngressClassResource=false + ingressClassParams.create=false, and then deploying the IngressClass and IngressClassParams two or more times depending on our needs.

All IngressClasses are handled by a single controller without issues in our scenario. We only use it to provision ALBs from Ingresses; no other use cases like NLBs or k8s Services etc.
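A sketch of this layout, assuming the chart values named above (cluster name and file name are placeholders): the chart's own IngressClass is disabled at install time, and the class/params pairs are applied separately.

```shell
# Single controller install with the chart-managed IngressClass disabled.
helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set createIngressClassResource=false \
  --set ingressClassParams.create=false

# Then apply the IngressClass + IngressClassParams pairs yourself,
# e.g. one with scheme: internal and one with scheme: internet-facing.
kubectl apply -f ingress-classes.yaml
```

The one controller then reconciles every IngressClass whose spec.controller is ingress.k8s.aws/alb, so the internal/external split comes entirely from the IngressClassParams.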

nd-at-globetel commented 6 months ago

@visit1985 Thanks. In your implementation, since it provisions multiple ALBs as ingresses with a single controller, were you able to implement an internal and an external ingress with it?

visit1985 commented 6 months ago

Yes, but as stated, on a Fargate only cluster. We didn’t test it with EC2 node groups.

nd-at-globetel commented 6 months ago

@visit1985 Got it, a Fargate-only cluster. What is the target type of those ALBs? Is it private IPs?

We're using EC2 node groups

visit1985 commented 6 months ago

@nd-at-globetel alb.ingress.kubernetes.io/target-type=ip

talkerbox commented 3 months ago

Yet another use case: deploying a second LB controller (with a different IngressClass for each) to create ALBs in another VPC, like in this blog post https://aws.amazon.com/blogs/containers/expose-amazon-eks-pods-through-cross-account-load-balancer/ - not only in VPC-account-A, but also in the current EKS cluster's VPC-account-B.

nd-at-globetel commented 3 months ago

@talkerbox how about in the same VPC (EKS Cluster)? Is it still not supported?

k8s-triage-robot commented 2 weeks ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

flaviomoringa commented 2 weeks ago

Please don't close this ticket... this is a basic need for many users.

talkerbox commented 2 weeks ago

/remove-lifecycle stale