kubernetes / ingress-nginx

Ingress-NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

Additional Internal Load Balancer on single deployment #9285

Closed · ismailyenigul closed this 1 week ago

ismailyenigul commented 1 year ago

What happened: Trying to understand how https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx#additional-internal-load-balancer works. When you enable the internal controller service

controller:
  service:
    internal:
      enabled: true

there will be a single ingress class but both an internal and an external controller service. How can I specify the load balancer type (internal or internet-facing) for an ingress resource? I am also using external-dns; how do I tell external-dns which load balancer address to use? That was easy when I created multiple ingress deployments in two different namespaces as described at https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/, but I have no idea how to manage this in a single deployment with two different controller services.

k8s-ci-robot commented 1 year ago

@ismailyenigul: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
strongjz commented 1 year ago

What cloud environment are you deploying to?

From the docs you need both values set; can you confirm they are?

controller.service.internal.enabled
controller.service.internal.annotations

If one of them is missing, the internal load balancer will not be deployed. For example, you may have controller.service.internal.enabled=true but no annotations set; in this case no action will be taken.

controller.service.internal.annotations varies with the cloud service you're using.
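
For reference, a minimal values.yaml sketch with both values set (the annotation shown is an AWS example taken from this thread; other clouds use their own internal-LB annotation):

controller:
  service:
    internal:
      # Both of these must be set, or the internal Service is not created.
      enabled: true
      annotations:
        # AWS example; substitute the internal-LB annotation for your cloud.
        service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"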

/triage needs-more-information
/priority backlog
/assign @strongjz
/kind support

k8s-ci-robot commented 1 year ago

@strongjz: The label(s) triage/needs-more-information cannot be applied, because the repository doesn't have them.

In response to [this](https://github.com/kubernetes/ingress-nginx/issues/9285#issuecomment-1310500668).
strongjz commented 1 year ago

/triage needs-information

ismailyenigul commented 1 year ago

Hi @strongjz, I am testing on AWS EKS. Here are my service annotations:

# Source: ingress-nginx/charts/ingress-nginx/templates/controller-service-internal.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:..."
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
  labels:
    helm.sh/chart: ingress-nginx-4.2.5
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: iy
    app.kubernetes.io/version: "1.3.1"
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: iy-ingress-nginx-controller-internal
  namespace: default
spec:
  type: "LoadBalancer"
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: http
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: iy
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/charts/ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:..."
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
  labels:
    helm.sh/chart: ingress-nginx-4.2.5
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: iy
    app.kubernetes.io/version: "1.3.1"
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: iy-ingress-nginx-controller
  namespace: default
spec:
  type: LoadBalancer
  ipFamilyPolicy: SingleStack
  ipFamilies: 
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: http
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: iy
    app.kubernetes.io/component: controller

The services above will create two load balancers, one internal and one internet-facing. But my question is about attaching a kind: Ingress to one of them. Let's say the ingress with host dev.mydomain.com should be attached to the internal load balancer, and the ingress with host prod.mydomain.com should be attached to the internet-facing load balancer.

There is a single ingressClass deployment with - --controller-class=k8s.io/ingress-nginx. How will an ingress resource know which controller service to use? How can I configure an ingress resource to use the internal load balancer or the external one? I don't see any annotation to specify the internal/external service at https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md

Actually, this is more about the integration between external-dns and ingress-nginx. It works fine if I route DNS to the correct load balancer in Route 53 manually, but I want external-dns to do it automatically.
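
One hedged workaround sketch, assuming external-dns's per-resource target override annotation (and a hypothetical internal NLB hostname): pin the record for the internal host to the internal LB on the Ingress itself, so external-dns does not use the external address from the Ingress status:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev
  annotations:
    # Hypothetical internal NLB hostname; external-dns would point the
    # dev.mydomain.com record here instead of the Ingress status address.
    external-dns.alpha.kubernetes.io/target: internal-example.elb.eu-west-1.amazonaws.com
spec:
  ingressClassName: nginx
  rules:
    - host: dev.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dev-svc # hypothetical backend Service
                port:
                  number: 80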

strongjz commented 1 year ago

Have you tried this?

What does the external-dns setup look like?

It looks like, if the service annotations are set right (an example hostname value is sketched below),

annotations:
    external-dns.alpha.kubernetes.io/hostname: dev.mydomain.com # example value, an assumption

And external-dns's --aws-zone-type is set as needed (omit the value to cover both public and private zones):

 args:
            - --source=service
            - --source=ingress
            - --domain-filter=example.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
            - --provider=aws
            - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
            - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
            - --registry=txt
            - --txt-owner-id=external-dns

It should work.

https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md
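
For split-horizon DNS on the same domain, that tutorial also covers running a second external-dns instance scoped to the private zones; a sketch (the owner ID and the one-instance-per-zone-type split are assumptions):

 args:
            - --source=ingress
            - --domain-filter=example.com
            - --provider=aws
            - --policy=upsert-only
            - --aws-zone-type=private # second instance: only private hosted zones
            - --registry=txt
            - --txt-owner-id=external-dns-private # distinct owner so the two instances do not fight over TXT records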

ismailyenigul commented 1 year ago

Currently I use a single external-dns and two different ingress controllers for public and private ingress resources: no zone filter on external-dns, it creates records based on the nginx ingressClass name, and I use the same domain for public and private ingress resources.

But if I use a single ingress class with two different controller services, I could not find a way to configure external-dns for both controllers in a single ingress-nginx deployment. I guess this setup is not possible.

longwuyuan commented 1 year ago

/remove-kind bug

that-kampe-steak commented 1 year ago

This still does not explain how to utilize the internal load balancer using an Ingress resource.

I've enabled both external and internal - I want to utilize JUST the internal LB for a given service, how do I reference this within the Ingress ? because using ingressClassName: nginx-internal does not work as there's no ingress class titled that when using the helm chart.

longwuyuan commented 1 year ago

post output of

aprohorov-callsign commented 1 year ago

> This still does not explain how to utilize the internal load balancer using an Ingress resource.
>
> I've enabled both external and internal. I want to utilize JUST the internal LB for a given service; how do I reference this within the Ingress? Using ingressClassName: nginx-internal does not work, as no ingress class with that name exists when using the helm chart.

Yeah, it's sad. Looks like this configuration just creates 2 LBs and that's it. And it means anyone who knows the external LB hostname can get access to all internal resources.
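
A partial mitigation sketch until the routing can be split (the CIDR is an assumption, and it only helps if the controller sees the real client IP, e.g. via proxy-protocol or ip targets): restrict internal-only Ingresses by source range:

metadata:
  annotations:
    # Requests arriving via the external LB from outside this range get a 403.
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8"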

RuiSMagalhaes commented 1 year ago

> This still does not explain how to utilize the internal load balancer using an Ingress resource.
>
> I've enabled both external and internal. I want to utilize JUST the internal LB for a given service; how do I reference this within the Ingress? Using ingressClassName: nginx-internal does not work, as no ingress class with that name exists when using the helm chart.

Exactly my issue. I want a single release of the ingress-nginx chart serving both internal and external load balancers, without success 🤷‍♂️

This is what I have:

controller:
  service:
    type: LoadBalancer
    externalTrafficPolicy: "Local"
    annotations:
      # AWS Load Balancer Controller Annotations
      service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
      # External DNS Annotations
      external-dns.alpha.kubernetes.io/hostname: ${hostname}
    internal:
      # -- Enables an additional internal load balancer (besides the external one).
      enabled: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip
        service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
        service.beta.kubernetes.io/aws-load-balancer-name: "eks-nlb-internal"
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    targetPorts:
      http: http
      https: https

However, if I use ingressClassName: nginx I get the public NLB assigned to the ingress. Nothing changes with service.beta.kubernetes.io/aws-load-balancer-scheme: "internal" on the ingress either.

Is the only option to deploy the helm chart twice, once for public and once for private?

longwuyuan commented 1 year ago

After you have configured the internal LB, post the information about it here so readers can know what you did. For example, show:

kubectl -n ingress-nginx get all -o wide
kubectl -n $appns get ing,svc -o wide

imtpot commented 11 months ago

Same here. When I deploy nginx with internal enabled, I get two nginx services with different LB IPs, but since I have only one IngressClass, new resources always pin to the first, i.e. the external service which uses the external LB. As a result I can reach the resource by Host header through both the external and the internal LB.

kubectl -n nginx get svc/nginx-ingress-nginx-controller -o yaml
apiVersion: v1
kind: Service
...
status:
...
  loadBalancer:
    ingress:
    - ip: 20.0.10.11
kubectl -n nginx get svc/nginx-ingress-nginx-controller-internal -o yaml
apiVersion: v1
kind: Service
...
status:
...
  loadBalancer:
    ingress:
    - ip: 20.0.10.12

But:

kubectl -n podinfo get ingress/podinfo -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
...
status:
  loadBalancer:
    ingress:
    - ip: 20.0.10.11
curl -L -H  "Host: podinfo.local.lan" 20.0.10.11:80
{
  "hostname": "podinfo-7457947f95-2q8fw",
  "version": "6.5.1",
  "revision": "0bc496456d884e00aa9eb859078620866aa65dd9",
  "color": "#34577c",
  "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
  "message": "greetings from podinfo v6.5.1",
  "goos": "linux",
  "goarch": "amd64",
  "runtime": "go1.21.1",
  "num_goroutine": "8",
  "num_cpu": "16"
}
curl -L -H  "Host: podinfo.local.lan" 20.0.10.12:80
{
  "hostname": "podinfo-7457947f95-2q8fw",
  "version": "6.5.1",
  "revision": "0bc496456d884e00aa9eb859078620866aa65dd9",
  "color": "#34577c",
  "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
  "message": "greetings from podinfo v6.5.1",
  "goos": "linux",
  "goarch": "amd64",
  "runtime": "go1.21.1",
  "num_goroutine": "9",
  "num_cpu": "16"
}

If I deploy a separate nginx with a new ingressClassResource.name and controllerValue, everything works as expected:

cat helm/nginx/values.yaml
controller:
  ingressClassResource:
    default: true
  service:
    annotations:
      io.cilium/lb-ipam-ips: 20.0.10.11
cat helm/nginx-internal/values.yaml
controller:
  ingressClassResource:
    name: nginx-internal
    controllerValue: "k8s.io/ingress-nginx-internal"
  service:
    annotations:
      io.cilium/lb-ipam-ips: 20.0.10.12
kubectl -n podinfo get ingress/podinfo -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
...
spec:
  ingressClassName: nginx-internal
...
status:
  loadBalancer:
    ingress:
    - ip: 20.0.10.12
curl -L -H  "Host: podinfo.local.lan" 20.0.10.11:80
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
curl -L -H  "Host: podinfo.local.lan" 20.0.10.12:80
{
  "hostname": "podinfo-7457947f95-887pt",
  "version": "6.5.1",
  "revision": "0bc496456d884e00aa9eb859078620866aa65dd9",
  "color": "#34577c",
  "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
  "message": "greetings from podinfo v6.5.1",
  "goos": "linux",
  "goarch": "amd64",
  "runtime": "go1.21.1",
  "num_goroutine": "8",
  "num_cpu": "16"
}
bonovoxly commented 10 months ago

Having the same issue. It seems everyone is having trouble parsing the problem:

It's very easy to have an ingress-nginx deployment create both an external and an internal load balancer. Works like a charm.

The problem is using it.

How do you specify an Ingress resource that uses the internal load balancer instead of the external one?

g-roliveira commented 6 months ago

Why doesn't this have a solution or answer? I'm going through the same difficulty. Could it really be that the only solution is to install the helm chart twice?

longwuyuan commented 6 months ago

@g-roliveira there are several factors involved here.

This is why several users are confirmed to be using https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#how-can-i-easily-install-multiple-instances-of-the-ingress-nginx-controller-in-the-same-cluster

longwuyuan commented 1 week ago

The final reality on this topic is that although the internal LB can be provisioned with annotations gleaned from the cloud infra provider, the practical use of the internal LB hinges on integration with external-dns etc. The controller does not return the hostname for the internal LB, and hence external-dns cannot sync the nameserver zone for internal use.

Secondly, it is easier to just install a second instance of the controller as described here https://kubernetes.github.io/ingress-nginx/faq/#multiple-controller-in-one-cluster and use the cloud-provider annotations to set the internal nature of the LB.
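
For completeness, a minimal sketch of that second install (release name, namespace, and the AWS annotation are assumptions; the FAQ above has the authoritative steps):

helm install ingress-nginx-internal ingress-nginx/ingress-nginx \
  --namespace ingress-nginx-internal --create-namespace \
  --set controller.ingressClassResource.name=nginx-internal \
  --set controller.ingressClassResource.controllerValue=k8s.io/ingress-nginx-internal \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-scheme"=internal

Ingress resources then select the internal path with ingressClassName: nginx-internal.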

Hence closing this issue, as there are no resources to either fix the external-dns integration or to enhance the user experience of combined provisioning of internal+external LBs from one single install of the controller.

This issue is thus adding to the tally of open issues without tracking any action item. The project is focused on implementing the Gateway API, making the controller secure by default out of the box, and removing or reducing features that are not part of the K8s KEP spec for the Ingress API. Hence closing this issue.

/close

k8s-ci-robot commented 1 week ago

@longwuyuan: Closing this issue.

In response to [this](https://github.com/kubernetes/ingress-nginx/issues/9285#issuecomment-2336629357).