kubernetes / ingress-gce

Ingress controller for Google Cloud
Apache License 2.0

Option to share LB between Ingresses #369

Closed GGotardo closed 2 years ago

GGotardo commented 6 years ago

I want to organize my cluster into multiple namespaces (app1, app2) and work with Ingress to access each of them. Something like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ing
  namespace: app1
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ip-ingress-backend
spec:
  rules:
  - host: app1-service1.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-1
          servicePort: 80
        path: /service1
  - host: app1-service2.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-2
          servicePort: 80
        path: /service2
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app2-ing
  namespace: app2
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ip-ingress-backend
spec:
  rules:
  - host: app2-service1.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-1
          servicePort: 80
        path: /service1
  - host: app2-service2.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-2
          servicePort: 80
        path: /service2

But when I try to do so, the following error is shown while creating the second Ingress:

googleapi: Error 400: Invalid value for field 'resource.IPAddress': 'xxxx'. Specified IP address is in-use and would result in a conflict., invalid

It tries to create another LB, but it should share the same one, just creating new backends/frontends.

rramkumar1 commented 6 years ago

@GGotardo What's preventing you from just giving the ingress in "app-2" namespace a different static-ip?

The ingress-gce controller has no easy way of knowing that you want both those Ingresses to have the same static IP. Even if it did, the fact that Ingresses are namespaced means that the controller must respect this separation.

GGotardo commented 6 years ago

@GGotardo What's preventing you from just giving the ingress in "app-2" namespace a different static-ip?

Actually that's not a problem for me, and it's the way I'm doing it on GCP today, because I have a small development cluster. But I could have a big cluster with many namespaces, and then I'd need 1 LB and 1 static IP for each one of them.

Thinking of the GCP Load Balancer, this could be solved with a single one and multiple backends/frontends.

Is ingress-gce responsible for creating Load Balancers once I create an Ingress resource?

rramkumar1 commented 6 years ago

Yes, ingress-gce is responsible for creating the LB resources given an Ingress specification.

I see your point, but like I said before, this is a very ugly problem to solve in the ingress-gce controller. My suggestion would be to either condense the number of namespaces you need or have enough static IPs available.

Regardless, this is an interesting use case so I'll leave this open as a feature suggestion.

rramkumar1 commented 6 years ago

/kind feature

ashi009 commented 6 years ago

It's not a rare use case in a large shared k8s cluster (L1/L2 GFE are doing the same).

For our case, different teams may use their own namespaces to manage their deployments and services. It shouldn't be their problem to manage things like public DNS setup, TLS termination, cert renewal, etc.

It's also worth mentioning that this is already supported by many other ingress controllers, e.g. Traefik and nginx. That said, I don't like the idea of putting an L2 SLB behind the GCLB.

A workaround I can think of would be adding a custom resource type, say IngressFragment, and creating a controller to join the fragments into a single Ingress resource in a dedicated namespace for gce-ingress-controller to consume.
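
A rough sketch of what such a hypothetical IngressFragment could look like (the CRD, its API group, and the merge semantics are assumptions illustrating the idea above, not an existing API):

apiVersion: example.com/v1alpha1
kind: IngressFragment
metadata:
  name: app1-fragment
  namespace: app1
spec:
  # a custom controller would join all fragments referencing the same
  # target into one Ingress for gce-ingress-controller to consume
  target: shared-ingress
  rules:
  - host: app1-service1.example.com
    http:
      paths:
      - path: /service1
        backend:
          serviceName: nginx-1
          servicePort: 80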

toredash commented 6 years ago

This is a feature we also want. Currently we use a mix of the nginx controller and the gce controller: high-volume services get their own GCE LB, while normal services use a shared nginx LB.

ssboisen commented 6 years ago

Another vote for this feature. Another use case is highly dynamic test environments, where a new deployment (with an ingress) is created per pull request. It would be very nice if the Kubernetes ingress controller for GKE worked with multiple Ingress resources on the same IP. That way we could define a wildcard DNS entry pointing to this load balancer, and it would route based on the path/hostname mappings in the individual Ingress documents. This is what we do today with the nginx ingress.

JorritSalverda commented 6 years ago

Recombining multiple Ingresses into one load balancer is something the nginx ingress already does, and it would be extremely useful for GCE ingress as well, since it allows each application to set itself up as the backend for a particular route while keeping its manifests otherwise independent of the other applications. This would look like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  tls:
  - hosts:
    - www.mydomain.com
    secretName: tls-secret
  rules:
  - host: www.mydomain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: web
          servicePort: https
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
spec:
  tls:
  - hosts:
    - www.mydomain.com
    secretName: tls-secret
  rules:
  - host: www.mydomain.com
    http:
      paths:
      - path: /api/*
        backend:
          serviceName: api
          servicePort: https

I assume implementing this leads to questions about a lot of edge cases and about how to stay within the limits of the URL map, but in general it's something like: 'if they share hostnames, combine them into one load balancer.'
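
For illustration, the single resource the controller would have to synthesize from the two Ingresses above might look like this (a sketch of the merge idea, not actual controller output):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-api-combined   # hypothetical merged resource
spec:
  tls:
  - hosts:
    - www.mydomain.com
    secretName: tls-secret
  rules:
  - host: www.mydomain.com
    http:
      paths:
      - path: /api/*   # the more specific path takes precedence in the URL map
        backend:
          serviceName: api
          servicePort: https
      - path: /*
        backend:
          serviceName: web
          servicePort: https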

rramkumar1 commented 6 years ago

If someone wants to tackle this, we will happily accept an implementation. Keep in mind though that this is a messy problem to solve in the code.

/good-first-issue
/help-wanted

k8s-ci-robot commented 6 years ago

@rramkumar1: This request has been marked as suitable for new contributors.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-good-first-issue command.

In response to [this](https://github.com/kubernetes/ingress-gce/issues/369):

> If someone wants to tackle this, we will happily accept an implementation. Keep in mind though that this is a messy problem to solve in the code.
>
> /good-first-issue
> /help-wanted

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

agadelshin commented 5 years ago

I'd like to dive into this issue.

rramkumar1 commented 5 years ago

@pondohva Great! Looking forward to the PR.

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

dreh23 commented 5 years ago

We are creating a namespace per dev branch and exposing it to the net. We would like to keep the simplicity of a GCE ingress, but we will run into a quota (money) issue on GCP. A shared LB across Ingresses would spare us from having to use another ingress controller.

thiagofernandocosta commented 5 years ago

Hi, buddies. Does anyone have an idea about this issue? I've been thinking of working around it using Helm templates and updating my Ingress resource, as mentioned by @JorritSalverda, but I'm not sure about that.

If anyone else has an idea or approach, I'd appreciate it. Thanks.

thiagofernandocosta commented 5 years ago

Taking advantage, lol. Does anyone know if this is a good approach? For each deployment I've configured a ManagedCertificate and a static IP, associating them with my Ingress. I appreciate any help.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wp-{{ .Values.app }}
  annotations:
    kubernetes.io/ingress.global-static-ip-name: wp-{{ .Values.app }}-external-ip
    networking.gke.io/managed-certificates: wp-{{ .Values.app }}-certificate
spec:
  rules:
  - host: {{ .Values.domain }}
    http:
      paths:
      - path: /*
        backend:
          serviceName: wp-{{ .Values.app }}
          servicePort: 80

aeneasr commented 5 years ago

Recombining multiple Ingresses into one load balancer is something the nginx ingress already does, and it would be extremely useful for GCE ingress as well, since it allows each application to set itself up as the backend for a particular route while keeping its manifests otherwise independent of the other applications. This would look like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  tls:
  - hosts:
    - www.mydomain.com
    secretName: tls-secret
  rules:
  - host: www.mydomain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: web
          servicePort: https
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
spec:
  tls:
  - hosts:
    - www.mydomain.com
    secretName: tls-secret
  rules:
  - host: www.mydomain.com
    http:
      paths:
      - path: /api/*
        backend:
          serviceName: api
          servicePort: https

I assume implementing this leads to questions about a lot of edge cases and about how to stay within the limits of the URL map, but in general it's something like: 'if they share hostnames, combine them into one load balancer.'

This is what we're currently dealing with, as we're using multiple Helm charts that each have their own ingress definitions, but we want to combine them under one domain, separated by path. Is there already a workaround here?

edit:// We don't need namespace separation

retpolanne commented 5 years ago

I +1 this issue. We work in a namespace-per-app environment and recently hit the limit of 1000 forwarding rules. The solution would be to either aggregate our namespaces (which would be hard for us, given the number of workloads), create another cluster in another project, or use ingress-nginx (which means we would lose the benefits of the managed L7 LB).

blurpy commented 4 years ago

We currently have on-prem clusters and are considering a move to GKE. Using nginx-ingress, we have wildcard DNS for our domains, allowing developers to choose a subdomain or context path in their ingress without any other configuration involved. Not being able to reuse an IP address across ingresses seems to increase complexity by quite a lot. Hoping for a solution to this.

victortrac commented 4 years ago

We currently have on-prem clusters and are considering a move to GKE. Using nginx-ingress, we have wildcard DNS for our domains, allowing developers to choose a subdomain or context path in their ingress without any other configuration involved. Not being able to reuse an IP address across ingresses seems to increase complexity by quite a lot. Hoping for a solution to this.

@blurpy There's nothing preventing you from using nginx-ingress on GKE to do this today. The nginx-ingress controller will allocate a single GLB with a single public IP address. Set your DNS as a wildcard to this IP address. Your developers can create as many Ingress resources as they want, which can all share this IP address.
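
A minimal sketch of that setup, assuming the official ingress-nginx Helm chart (value names may differ between chart versions; the address name, region, and IP are placeholders):

# reserve a static IP, then pin the nginx controller's Service to it
gcloud compute addresses create nginx-ingress-ip --region us-central1
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.service.loadBalancerIP=<reserved-ip>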

blurpy commented 4 years ago

@blurpy There's nothing preventing you from using nginx-ingress on GKE to do this today. The nginx-ingress controller will allocate a single GLB with a single public IP address. Set your DNS as a wildcard to this IP address. Your developers can create as many Ingress resources as they want, which can all share this IP address.

Thanks, good to know. I was hoping to use the managed part of GKE as much as possible though, so I'm still hoping for an improvement here. nginx-ingress is a nightmare for both ops and devs because they don't care about backwards compatibility.

dfernandezm commented 4 years ago

I struggle to understand why GCE ingress is still not at parity with other ingress controllers as of 2020. This is a much-desired feature in our workflow, as including an ingress as part of a Helm chart gives a lot of flexibility. Looking forward to the combined power of GLB and incremental ingresses.

robinpercy commented 4 years ago

/assign

I'd like to tackle this one

rramkumar1 commented 4 years ago

@bowei How do you want to proceed with this issue? The new Gateway API will address this problem from the start.

bowei commented 4 years ago

This is something that makes more sense for the Gateway GCE implementation, as its semantics are better defined.

mijamo commented 4 years ago

Will the Gateway GCE implementation be tracked in this repository or somewhere else? Any insight on the timeframe we are looking at (is it 6 months away, 2 years away, or even further)?

robinpercy commented 4 years ago

Ok, taking my name off this if it's no longer (or soon to not be) desired functionality. Risks of grabbing a two-year-old issue ;)

lgelfan commented 4 years ago

Can you explain (or provide a link explaining) how best to use the Gateway API with GKE to solve this problem?

mijamo commented 4 years ago

The Gateway API is not even in alpha right now, so there is no way to use it currently.

You can read about it here: https://github.com/kubernetes-sigs/service-apis

This is why I was asking for a timeframe, as it probably means a long wait before this feature is usable in production.

maxpain commented 4 years ago

Any news?

rramkumar1 commented 4 years ago

@Maxpain177 Refer to the previous comment by @mijamo. The Gateway API is still in development. We will update here when a suitable product is ready for you all to try out.

bowei commented 4 years ago

If someone wants to pursue this, it's worth working out the semantics:

  1. What happens in the case of merge conflicts between resources? (See the example below.)
  2. Many of the annotations, especially on the frontend side (for example FrontendConfig), don't merge cleanly.
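
As a concrete example of point 1, two Ingresses in different namespaces can claim the same host and path with different backends, and there is no obvious winner (hypothetical resources for illustration):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: team-a
  namespace: team-a
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /app
        backend:
          serviceName: svc-a
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: team-b
  namespace: team-b
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /app   # conflicts with team-a's rule for the same host
        backend:
          serviceName: svc-b
          servicePort: 80
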
hermanbanken commented 4 years ago

Would be great if this were possible. It wouldn't even need to be restricted to one cluster: suppose two clusters could each provision their own "host"- or "path"-based rules with their own BackendServices.

However, it begins to smell a lot like a full-fledged MultiClusterIngress, which was also a thing, but was cancelled/discontinued.

Still, it would be amazing to have this natively in GKE.

spencerhance commented 4 years ago

@hermanbanken MCI wasn't cancelled :)

https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-for-anthos

Berndinox commented 4 years ago

Is there actually a way to reuse an LB with multiple Ingress resources?

victortrac commented 4 years ago

Is there actually a way to reuse an LB with multiple Ingress resources?

You should be able to put multiple Ingress resources behind a single GCE ingress controller as long as the Ingress resources are in the same namespace.

hermanbanken commented 4 years ago

You should be able to put multiple Ingress resources behind a single GCE ingress controller as long as the Ingress resources are in the same namespace.

AFAIK this creates another load balancer in GCP. Context: we had around 6 Ingresses in the same namespace, which all got their own LB.

Now that I think about it, that might have been because they had different global-static-ip-name annotations.

Berndinox commented 4 years ago

Thanks for clarification!

spencerhance commented 4 years ago

Currently, every Ingress maps to its own unique LB.

tpokki commented 4 years ago

As a workaround, you can deploy your own ingress controller (e.g. nginx), but set its service type to NodePort. Then create a single Ingress resource that routes all traffic to your ingress controller, so that ingress-gce creates the L7 load balancer. All application Ingress resources should be annotated to use the ingress controller you deployed rather than ingress-gce.

This way you get an L7 load balancer with the L7 features (managed certificates, WAF, IAP, ...), compared to the L4 load balancer you would get if you deployed your ingress controller with the LoadBalancer service type.

As a bonus, you can use an ingress controller with other capabilities, like URL rewriting, that are missing from ingress-gce.

mehdicopter commented 4 years ago

Hey @tpokki, do you know where I could find some tutorials/examples on how to implement that?

Thanks!

Berndinox commented 4 years ago

As a workaround, you can deploy your own ingress controller (e.g. nginx), but set its service type to NodePort. Then create a single Ingress resource that routes all traffic to your ingress controller, so that ingress-gce creates the L7 load balancer. All application Ingress resources should be annotated to use the ingress controller you deployed rather than ingress-gce.

This way you get an L7 load balancer with the L7 features (managed certificates, WAF, IAP, ...), compared to the L4 load balancer you would get if you deployed your ingress controller with the LoadBalancer service type.

As a bonus, you can use an ingress controller with other capabilities, like URL rewriting, that are missing from ingress-gce.

Can I terminate SSL on the Google LB and handle plain HTTP traffic only? I.e. use Google's managed certs but with the custom ingress controller.

My goal is to avoid SSL inside the cluster because of performance!

... I know it's not super secure.

I'm curious why this is not implemented yet; seems like a strategic finance decision tbh.

tpokki commented 4 years ago

The custom ingress controller is like any other application to which you assign an ingress-gce L7 load balancer. Typically, custom ingress controllers are installed with the LoadBalancer service type, as that is what is mostly expected, and that is the only thing you need to change.

Looking at, for example, the nginx Helm chart, I would expect you can install it with something like this:

helm install --set controller.service.type=NodePort nginx ingress-nginx/ingress-nginx

After that, you create an Ingress resource something like the following (check the Service name that the nginx installation creates, as well as the port(s)):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: your-pre-allocated-static-ip-name
    networking.gke.io/managed-certificates: managed-certificate-name
spec:
  backend:
    serviceName: nginx-ingress-nginx-controller
    servicePort: 80

... leave out the static IP name if you don't have one pre-allocated, and the managed certificates if you don't have any. After that you should have an L7 load balancer that points to your nginx ingress controller.

Application Ingress entries need to be annotated with kubernetes.io/ingress.class: nginx so that it is your nginx instance that processes them and not ingress-gce.
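
For example, an application Ingress routed through the nginx controller instead of ingress-gce might look like this (names are placeholders):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx   # handled by nginx, not ingress-gce
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80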

mehdicopter commented 4 years ago

I have been trying this configuration but unfortunately it is not working :'( I still get "All backend services are in UNHEALTHY state" on my GCE Ingress...

tpokki commented 4 years ago

I still get "All backend services are in UNHEALTHY state" on my GCE Ingress...

Did a quick test myself with nginx, and it has a few issues. First, you need to open some firewall rules to make the admission service work in private clusters (you can also disable it). Second, nginx does not provide "working" health/liveness probes that the L7 LB can adopt, i.e. the L7 LB sends GET / and nginx returns 503 for it :(

So apparently using nginx in the example was not a good idea.

Try some other ingress controller, like Traefik. AFAIK it at least exposes/uses proper liveness/readiness checks that the L7 LB can work with. To make nginx work, you would have to modify its liveness/readiness checks to be compatible with the L7 LB, and their Helm chart does not appear to provide the capability to do that.

MeNsaaH commented 4 years ago

@mehdicopter, you have to specify a BackendConfig for your Services/Ingresses, where you can then specify the appropriate health checks. For some reason, the default health check endpoint when you use GCE Ingress is /. https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#direct_health
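
A minimal sketch of that, assuming the app serves a health endpoint on /healthz (names and ports are placeholders):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz   # instead of the default "/"
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # attach the BackendConfig to the port(s) the Ingress uses
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080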

mehdicopter commented 4 years ago

Thanks @MeNsaaH, I did that but the backend services are still unhealthy. Anyway, I am polluting this issue.

Berndinox commented 3 years ago

Finally, I got a fully working example! It's possible to use native Google mechanics, without any messy ingress-controller hacks:

Test-Deployment 0

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-0
  #annotations:
   # cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  ports:
  - name: port-0
    protocol: TCP
    port: 60000
    targetPort: 8080
  selector:
    app: hello-kubernetes-0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-0
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-0
  template:
    metadata:
      labels:
        app: hello-kubernetes-0
    spec:
      containers:
      - name: hello-kubernetes-0
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080

Test-Deployment 1

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-1
  #annotations:
   # cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  ports:
  - name: port-1
    protocol: TCP
    port: 60001
    targetPort: 8080
  selector:
    app: hello-kubernetes-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-1
  template:
    metadata:
      labels:
        app: hello-kubernetes-1
    spec:
      containers:
      - name: hello-kubernetes-1
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080

Managed Cert:

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
spec:
  domains:
    - test0.DOMAIN.at
    - test1.DOMAIN.at

Ingress:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-k8s-gce-ssl
  annotations:
    kubernetes.io/ingress.class: gce
    networking.gke.io/managed-certificates: managed-cert
spec:
  rules:
  - host: test0.DOMAIN.at
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-0
          servicePort: port-0
  - host: test1.DOMAIN.at
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-1
          servicePort: port-1

Wait a few minutes... that's all!

Attention: if VPC-native networking is enabled for your cluster, it should also work with ClusterIP instead of NodePort:

gcloud container clusters create neg-demo-cluster \
    --enable-ip-alias

In my case it did not work and I had to use NodePort. According to the Google docs, it should work (when uncommenting the annotation in the Service).
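
For reference, the VPC-native variant of the first Service would look roughly like this, per the container-native load balancing docs (the NEG annotation uncommented and the type switched):

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-0
  annotations:
    cloud.google.com/neg: '{"ingress": true}'   # container-native LB via NEGs
spec:
  type: ClusterIP
  ports:
  - name: port-0
    protocol: TCP
    port: 60000
    targetPort: 8080
  selector:
    app: hello-kubernetes-0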

adrian-gierakowski commented 3 years ago

@Berndinox I believe your example doesn't address the problem raised in this issue, as it doesn't demonstrate that multiple Ingress resources can share a single load balancer (you have one Ingress with multiple backends, which share the same load balancer).

Andrewangeta commented 3 years ago

@Berndinox @adrian-gierakowski Right. For example, my use case is multiple applications that have their own deployment repositories and each have their own ingress.yml files, so they can't share a single file unless I did some funky merging and had a parent ingress, etc.