kubernetes / ingress-nginx

Ingress NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

Yet another "preserve the client IP behind nginx-ingress-controller, ExternalTrafficPolicy: Local not working" #9749

Closed venutol closed 1 month ago

venutol commented 1 year ago

What happened:

I'm trying to preserve the client source IP. The connection goes through the ingress controller, and TLS is terminated at the ingress controller. The client is outside the Kubernetes network. The server is a Pod inside the Kubernetes network; it's an echo webserver which lets me debug the headers and, more importantly, the client IP address. The client IP address returned is an address internal to the Kubernetes network, not the real client IP address. This is what the echo webserver returns:

CLIENT VALUES:
**client_address=10.244.2.1**
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://myip.mycustomdomain.com:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
accept-encoding=gzip, deflate, br
accept-language=en-US,en;q=0.5
host=myip.mycustomdomain.com
sec-fetch-dest=document
sec-fetch-mode=navigate
sec-fetch-site=none
sec-fetch-user=?1
upgrade-insecure-requests=1
user-agent=Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
**x-forwarded-for=X.X.X.75** 
x-forwarded-host=myip.mycustomdomain.com
x-forwarded-port=443
x-forwarded-proto=https
x-forwarded-scheme=https
**x-real-ip=X.X.X.75**
x-request-id=2b7db5bf9659edb0bfb779c7b74953cd
x-scheme=https
BODY:
-no body in request-

The x-real-ip and x-forwarded-for headers display the correct IP. client_address is wrong: it is an internal k8s IP.

My architecture is as follows:

HAproxy -> k8s Ingress controller -> worker nodes

HAProxy is configured to use the proxy protocol with the Kubernetes ingress controller, so I get the real IP address at the ingress controller. At the next step, where the request gets forwarded to a Pod, the source IP address is lost (the headers stay correct, of course).
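(On the controller side, this is typically switched on through the chart-managed ConfigMap; a minimal values sketch, assuming the official ingress-nginx Helm chart where controller.config feeds the controller ConfigMap:)

    controller:
      config:
        use-proxy-protocol: "true"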

What you expected to happen:

I'm expecting to get the real client address, as described in https://kubernetes.io/docs/tutorials/services/source-ip/, without relying on the headers.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

kubectl exec -it -n $ingress_ns $podname -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.6.4
  Build:         69e8833858fb6bda12a44990f1d5eaa7b13f4b75
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6

-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.26.2
Kustomize Version: v4.5.7
Server Version: v1.26.2

Environment:

NAMESPACE     NAME                                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE     SELECTOR
default       service/kubernetes                                      ClusterIP   10.96.0.1                      443/TCP                      3d14h
default       service/my-release-ingress-nginx-controller             NodePort    10.104.209.202                 80:32710/TCP,443:32157/TCP   2d16h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=my-release,app.kubernetes.io/name=ingress-nginx
default       service/my-release-ingress-nginx-controller-admission   ClusterIP   10.99.110.199                  443/TCP                      2d16h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=my-release,app.kubernetes.io/name=ingress-nginx
default       service/nodeport                                        NodePort    10.108.248.5                   80:31830/TCP                 25h     app=source-ip-app
kube-system   service/kube-dns                                        ClusterIP   10.96.0.10                     53/UDP,53/TCP,9153/TCP       3d14h   k8s-app=kube-dns

NAMESPACE      NAME                                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE     CONTAINERS     IMAGES                                                                                                                    SELECTOR
default        daemonset.apps/my-release-ingress-nginx-controller   3         3         3       3            3           kubernetes.io/os=linux   2d16h   controller     registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f   app.kubernetes.io/component=controller,app.kubernetes.io/instance=my-release,app.kubernetes.io/name=ingress-nginx
kube-flannel   daemonset.apps/kube-flannel-ds                        4         4         4       4            4                                    3d14h   kube-flannel   docker.io/flannel/flannel:v0.21.3                                                                                         app=flannel,k8s-app=flannel
kube-system    daemonset.apps/kube-proxy                             4         4         4       4            4           kubernetes.io/os=linux   3d14h   kube-proxy     registry.k8s.io/kube-proxy:v1.26.2                                                                                        k8s-app=kube-proxy

NAMESPACE     NAME                            READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                   SELECTOR
default       deployment.apps/source-ip-app   1/1     1            1           3d10h   echoserver   registry.k8s.io/echoserver:1.4           app=source-ip-app
kube-system   deployment.apps/coredns         2/2     2            2           3d14h   coredns      registry.k8s.io/coredns/coredns:v1.9.3   k8s-app=kube-dns

NAMESPACE     NAME                                      DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                   SELECTOR
default       replicaset.apps/source-ip-app-75dbbff4f   1         1         1       3d10h   echoserver   registry.k8s.io/echoserver:1.4           app=source-ip-app,pod-template-hash=75dbbff4f
kube-system   replicaset.apps/coredns-787d4945fb        2         2         2       3d14h   coredns      registry.k8s.io/coredns/coredns:v1.9.3   k8s-app=kube-dns,pod-template-hash=787d4945fb


  - `kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>`

kubectl -n default describe po my-release-ingress-nginx-controller-7llp2
Name:             my-release-ingress-nginx-controller-7llp2
Namespace:        default
Priority:         0
Service Account:  my-release-ingress-nginx
Node:             deb-kuber4/10.0.0.163
Start Time:       Wed, 15 Mar 2023 14:52:11 +0100
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=my-release
                  app.kubernetes.io/name=ingress-nginx
                  controller-revision-hash=58b8c7f5bd
                  pod-template-generation=6
Annotations:      kubectl.kubernetes.io/restartedAt: 2023-03-15T14:50:21+01:00
Status:           Running
IP:               10.0.0.163
IPs:
  IP:  10.0.0.163
Controlled By:  DaemonSet/my-release-ingress-nginx-controller
Containers:
  controller:
    Container ID:  containerd://10f79721ce2fd3063e6c2a8c8d5ad2075096d4d93b523d4eb3eac4f8db5d9ee9
    Image:         registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f
    Image ID:      registry.k8s.io/ingress-nginx/controller@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    80/TCP, 443/TCP, 8443/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/my-release-ingress-nginx-controller
      --election-id=my-release-ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/my-release-ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
    State:          Running
      Started:      Wed, 15 Mar 2023 14:52:12 +0100
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     100m
      memory:  90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       my-release-ingress-nginx-controller-7llp2 (v1:metadata.name)
      POD_NAMESPACE:  default (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tp5z8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-release-ingress-nginx-admission
    Optional:    false
  kube-api-access-tp5z8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason  Age                From                      Message
  Normal  RELOAD  27m (x4 over 19h)  nginx-ingress-controller  NGINX reload triggered due to a change in configuration


  - `kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>`

kubectl -n default describe svc my-release-ingress-nginx-controller
Name:                     my-release-ingress-nginx-controller
Namespace:                default
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=my-release
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.6.4
                          helm.sh/chart=ingress-nginx-4.5.2
Annotations:              meta.helm.sh/release-name: my-release
                          meta.helm.sh/release-namespace: default
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=my-release,app.kubernetes.io/name=ingress-nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.209.202
IPs:                      10.104.209.202
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32710/TCP
Endpoints:                10.0.0.163:80,10.0.0.166:80,10.0.0.167:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32157/TCP
Endpoints:                10.0.0.163:443,10.0.0.166:443,10.0.0.167:443
Session Affinity:         None
External Traffic Policy:  Local
Events:


- **Current state of ingress object, if applicable**:
- check next section

**How to reproduce this issue**:

As minimally and precisely as possible. Keep in mind we do not have access to your cluster or application.
Help us (if possible) reproduce the issue using minikube or kind.

## Install Kubernetes following

https://kubernetes.io/docs/setup/

## Install the ingress controller

helm install my-release ingress-nginx/ingress-nginx --set controller.hostNetwork=true,controller.service.type=NodePort,controller.service.externalTrafficPolicy=Local,controller.replicaCount=3 --debug
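(For readability, a sketch of an equivalent values.yaml for the flags above, assuming the official ingress-nginx chart keys:)

    controller:
      hostNetwork: true
      replicaCount: 3
      service:
        type: NodePort
        externalTrafficPolicy: Local

    helm install my-release ingress-nginx/ingress-nginx -f values.yaml --debug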


## Install an application that will act as default backend (is just an echo app)

kubectl create deployment source-ip-app --image=registry.k8s.io/echoserver:1.4

## Create service

kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort

Remember to patch service:

kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'


## Create an ingress (please add any additional annotation required)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ip-ingress
  namespace: default
spec:
  tls:
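(The manifest above is truncated as posted. Purely for context, a hypothetical complete Ingress along these lines, where the secretName and the path rule are assumptions and the host and nodeport backend Service come from earlier in this issue, could look like:)

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-ip-ingress
      namespace: default
    spec:
      ingressClassName: nginx
      tls:
        - hosts:
            - myip.mycustomdomain.com
          secretName: myip-tls          # hypothetical secret name
      rules:
        - host: myip.mycustomdomain.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: nodeport       # the echo service created above
                    port:
                      number: 80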

## Make a request

Just browse to myip.mycustomdomain.com to get the output

Additional info: I have proxy protocol enabled in HAProxy and the ingress controller, and it is working a treat:

kubectl describe configmap my-release-ingress-nginx-controller
Name:         my-release-ingress-nginx-controller
Namespace:    default
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=my-release
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.6.4
              helm.sh/chart=ingress-nginx-4.5.2
Annotations:  meta.helm.sh/release-name: my-release
              meta.helm.sh/release-namespace: default

Data
====
allow-snippet-annotations:
----
true
enable-real-ip:
----
false
use-forwarded-headers:
----
false
use-proxy-protocol:
----
true

BinaryData
====

Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  UPDATE  49m (x3 over 19h)  nginx-ingress-controller  ConfigMap default/my-release-ingress-nginx-controller
  Normal  UPDATE  49m (x3 over 19h)  nginx-ingress-controller  ConfigMap default/my-release-ingress-nginx-controller
  Normal  UPDATE  49m (x3 over 19h)  nginx-ingress-controller  ConfigMap default/my-release-ingress-nginx-controller

TLDR:

This works, but only from inside the cluster, without passing through the ingress controller:

https://kubernetes.io/docs/tutorials/services/source-ip/

and yes, ExternalTrafficPolicy: Local is enabled.

Edit:

Proof that the ingress controller is getting the true IP address, but the Pod isn't:

Ingress controller:
kubectl logs my-release-ingress-nginx-controller-d8zs9

X.X.X.75 - - [16/Mar/2023:11:32:59 +0000] "GET / HTTP/2.0" 200 1005 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" 383 0.001 [default-nodeport-8080] [] 10.244.2.46:8080 1199 0.001 200 450fcd421611c12fdf9791015a46e151

Pod:
kubectl logs source-ip-app-75dbbff4f-hg97m

10.244.2.1 - - [16/Mar/2023:11:32:59 +0000] "GET / HTTP/1.1" 200 1199 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0"

k8s-ci-robot commented 1 year ago

This issue is currently awaiting triage.

If Ingress contributors determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
longwuyuan commented 1 year ago

/remove-kind bug
/kind support

@venutol your use case is not documented or well-known. I understand that you need HAProxy in front of the controller and that you want to run the controller with a Service of type NodePort.

From the project you can expect comments if you curl a node on the NodePort, e.g. curl nodeipaddress:32710, because of this line:

default       service/my-release-ingress-nginx-controller             NodePort    10.104.209.202   <none>        80:32710/TCP,443:32157/TCP   2d16h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=my-release,app.kubernetes.io/name=ingress-nginx

So maybe wait for other people who use the controller like you do to make comments.

If you were using the controller with an AWS LB, GCP LB, etc., or even MetalLB (if not on a known cloud provider) with a LoadBalancer Service, then there would be many users with the same config to compare notes with.

longwuyuan commented 1 year ago

Just to note, installing the controller with a Service of type NodePort and then creating an Ingress object that uses a backend Service that is also of type NodePort causes hairpin connections.

HAProxy in front of the controller's NodePort Service is not tested, so for that config users mostly have to rely on other users of the same config and design.

The common use case is a cloud-provider LB Service (AWS LB, GCP LB, Azure LB) in front of the controller, or MetalLB when not on a known cloud provider. The LB is provisioned automatically when the controller is installed with the default Service type LoadBalancer, and Ingress objects in these setups use backend Services of type ClusterIP.
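(For the bare-metal MetalLB variant mentioned above, a minimal sketch, assuming MetalLB v0.13+ with its CRDs installed and a hypothetical address range on the node network:)

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: default-pool
      namespace: metallb-system
    spec:
      addresses:
        - 10.0.0.200-10.0.0.210   # hypothetical range
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: default-l2
      namespace: metallb-system
    spec:
      ipAddressPools:
        - default-pool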

venutol commented 1 year ago

I understand I have an unusual setup; it's Kubernetes on bare metal, so I can't use an external LB from the "approved" list.

You are spot on with:

curl nodeipaddress:32710 

I want to add that, of course, nodeipaddress must be the IP of the node where the Pod with the echo webserver is running. With NodePort, port 32710 is exposed, but this bypasses the ingress controller. In fact, I can't see the connection in the logs of the ingress controller, but it does show up in the logs of the Pod:

kubectl logs source-ip-app-75dbbff4f-hg97m
10.0.0.165 - - [16/Mar/2023:12:46:23 +0000] "GET / HTTP/1.1" 200 388 "-" "curl/7.74.0"

The IP of the node I'm curl-ing from, 10.0.0.165, is preserved.

However, I don't want to bypass the ingress controller, as it is doing TLS termination for me, and I would like to keep it that way.

Do you have any suggestions? I can ditch HAProxy entirely if necessary. I have to use HAProxy because I don't have access to an external load balancer like the ones in GCP, Azure, etc.

About hairpin connections: I usually have ClusterIP on the ingress controller, but that way I lose the ability to set externalTrafficPolicy to Local.

longwuyuan commented 1 year ago

Everybody (almost everybody) just gets on with life by installing MetalLB and deploying apps with a Service of type ClusterIP. Then they install the ingress-nginx controller with a Service of type LoadBalancer. Then they create Ingress resources that use the app's ClusterIP Service as backend.

With the above config, you can use externalTrafficPolicy: Local or enable proxy-protocol to get the client's real IP address.

I don't know any other way
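(A sketch of that install, assuming a LoadBalancer implementation such as MetalLB is already in place to satisfy the LoadBalancer Service:)

    helm install my-release ingress-nginx/ingress-nginx \
      --set controller.service.type=LoadBalancer \
      --set controller.service.externalTrafficPolicy=Local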

venutol commented 1 year ago

I too want to get on with life :( So metallb assigns IPs from a pool of local IPs to the nginx controller instances, correct?

How do I then get traffic from my single public IPv4 address to these nginx ingress controller instances?

At the moment, with HAProxy, I can load-balance across all 3 of my worker nodes with:

backend kuber_backend_https
    mode tcp
    option log-health-checks
    option redispatch
    balance roundrobin
    timeout connect 10s
    timeout server 1m   
    server kuber2 10.0.0.166:443 check send-proxy
    server kuber3 10.0.0.167:443 check send-proxy
    server kuber4 10.0.0.163:443 check send-proxy
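
(For completeness, a hypothetical matching frontend for the backend above; plain TCP pass-through, since TLS is terminated at the ingress controller:)

frontend kuber_frontend_https
    mode tcp
    bind :443
    option tcplog
    default_backend kuber_backend_https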

I understand this is outside the scope of the question. I'll research a solution a bit more; if possible I would like to keep using my existing infrastructure.

gaui commented 1 year ago

I'm interested in knowing why externalTrafficPolicy: Cluster is not working.

$ nmap -Pn -p 80,443 10.97.1.254
Starting Nmap 7.80 ( https://nmap.org ) at 2023-03-16 14:36 GMT
Nmap scan report for 10.97.1.254
Host is up.

PORT    STATE    SERVICE
80/tcp  filtered http
443/tcp filtered https

Nmap done: 1 IP address (1 host up) scanned in 3.07 seconds

But when I change to externalTrafficPolicy: Local it works:

$ nmap -Pn -p 80,443 10.97.1.254
Starting Nmap 7.80 ( https://nmap.org ) at 2023-03-16 14:41 GMT
Nmap scan report for 10.97.1.254
Host is up (0.058s latency).

PORT    STATE    SERVICE
80/tcp  open     http
443/tcp open    https

Nmap done: 1 IP address (1 host up) scanned in 1.67 seconds

gaui commented 1 year ago

And why do the YAML manifests differ from the Helm chart in that regard? Cluster is the default value in Kubernetes.

The YAML manifest sets externalTrafficPolicy: Local: https://github.com/kubernetes/ingress-nginx/blob/helm-chart-4.5.2/deploy/static/provider/cloud/deploy.yaml#L347

The Helm chart's default values don't override it and use the Kubernetes default externalTrafficPolicy: Cluster: https://github.com/kubernetes/ingress-nginx/blob/helm-chart-4.5.2/charts/ingress-nginx/values.yaml#L431
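(So with the chart you have to opt in yourself; a minimal values override sketch:)

    controller:
      service:
        externalTrafficPolicy: Local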

venutol commented 1 year ago

> Everybody (almost everybody) just gets on with life by installing MetalLB and deploying apps with a Service of type ClusterIP. Then they install the ingress-nginx controller with a Service of type LoadBalancer. Then they create Ingress resources that use the app's ClusterIP Service as backend.
>
> With the above config, you can use externalTrafficPolicy: Local or enable proxy-protocol to get the client's real IP address.
>
> I don't know any other way

I understand your proposed solution. I think this is the only way to get it working on bare metal without relying on NodePorts. I only see one issue, referencing the manual again: https://kubernetes.io/docs/tutorials/services/source-ip/

> If you set service.spec.externalTrafficPolicy to the value Local, kube-proxy only proxies proxy requests to local endpoints, and does not forward traffic to other nodes. This approach preserves the original source IP address. If there are no local endpoints, packets sent to the node are dropped, so you can rely on the correct source-ip in any packet processing rules you might apply a packet that make it through to the endpoint

This means that MetalLB assigns an IP to the nginx-ingress controller Service, and I can forward TCP 443 and 80 to that IP. The problem is that if traffic reaches an ingress controller on a node different from the one running the echo webserver, the request will be dropped.

The manual references GCP; do we know whether this works with MetalLB as well, and does it respect health checks?

> However, if you're running on Google Kubernetes Engine/GCE, setting the same service.spec.externalTrafficPolicy field to Local forces nodes without Service endpoints to remove themselves from the list of nodes eligible for loadbalanced traffic by deliberately failing health checks.

How does everyone solve this problem?

Edit :+1: From https://metallb.universe.tf/usage/:

> "Local" traffic policy: With the Local traffic policy, kube-proxy on the node that received the traffic sends it only to the service's pod(s) that are on the same node. There is no "horizontal" traffic flow between nodes.

So what is the point of the LoadBalancer then?

github-actions[bot] commented 1 year ago

This is stale, but we won't close it automatically, just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach #ingress-nginx-dev on Kubernetes Slack.

strongjz commented 1 year ago

I'm not sure if the issue was resolved here.

@venutol were you able to get your setup working?

venutol commented 1 year ago

Hi @strongjz, I had to give up and rely on the HTTP headers. I suppose I could test this setup on a KaaS platform and report back.

dscaravaggi commented 1 year ago

I hope that somebody will solve this. Even if it is not a show stopper: as long as K8s is behind an L7 HAProxy I get the headers... but what if I shift to the kube-vip balancer...

rootsongjc commented 9 months ago

I noticed that many of you have been inquiring about preserving client source IP in Istio. I recently wrote a blog post titled Maintaining Traffic Transparency: Preserving Client Source IP in Istio that addresses this topic in detail. I hope you find it helpful in solving your source IP preservation challenges. Feel free to check it out and let me know if you have any questions or feedback.

iammilan07 commented 3 months ago

If your application pods behind the nginx-ingress-controller pod are deployed using nginx, you can do this to preserve the remote client IP.

Add this inside the nginx.conf file:

set_real_ip_from 0.0.0.0/0;   # Trust all IPs (use your VPC CIDR block in production)
real_ip_header X-Forwarded-For;
real_ip_recursive on;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" '
                'host=$host x-forwarded-for=$http_x_forwarded_for';

access_log /var/log/nginx/access.log main;
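
(For context, a hypothetical minimal nginx.conf for such an application pod, showing where those directives sit; the real_ip directives need the standard ngx_http_realip_module, which the official nginx images ship with:)

events {}

http {
    set_real_ip_from 0.0.0.0/0;        # trust all proxies (narrow this in production)
    real_ip_header   X-Forwarded-For;
    real_ip_recursive on;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" '
                    'host=$host x-forwarded-for=$http_x_forwarded_for';
    access_log /var/log/nginx/access.log main;

    server {
        listen 8080;
        return 200 "client: $remote_addr\n";   # echoes the client address nginx resolved
    }
}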

Also add this annotation in ingress.yaml:
nginx.ingress.kubernetes.io/use-forwarded-headers: "true"
longwuyuan commented 1 month ago

The post above this message explains a solution.

But on a different note, there are many users on cloud using a cloud LB and on bare metal using MetalLB. This is relevant because provisioning the LB in front of the Service of --type LoadBalancer created for ingress-nginx is a tested and recommended use.

The code for provisioning that LB is not in the ingress-nginx controller, and the config of that LB (HAProxy, for example) is also not in the controller. Hence, what client info comes across the hop to the controller is out of scope for this project.

There is no action item here for the project, but users may continue to discuss as more info becomes available. However, we cannot keep this issue open because it adds to the tally of open issues without any action item for the project. There is a shortage of resources and all focus is on priorities like security and the Gateway API. We have even deprecated some popular features because we cannot maintain and support them. As such, there is no work to be done here by the project, so I am closing this issue.

/close

k8s-ci-robot commented 1 month ago

@longwuyuan: Closing this issue.

In response to [this](https://github.com/kubernetes/ingress-nginx/issues/9749#issuecomment-2343326312):

> The post above this message explains a solution.
>
> But on a different note, there are many users on cloud using a cloud LB and on bare metal using MetalLB. This is relevant because provisioning the LB in front of the Service of --type LoadBalancer created for ingress-nginx is a tested and recommended use.
>
> The code for provisioning that LB is not in the ingress-nginx controller, and the config of that LB (HAProxy, for example) is also not in the controller. Hence, what client info comes across the hop to the controller is out of scope for this project.
>
> There is no action item here for the project, but users may continue to discuss as more info becomes available. However, we cannot keep this issue open because it adds to the tally of open issues without any action item for the project. There is a shortage of resources and all focus is on priorities like security and the Gateway API. We have even deprecated some popular features because we cannot maintain and support them. As such, there is no work to be done here by the project, so I am closing this issue.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
ARAldhafeeri commented 1 month ago

Sorry, but the issue does not provide a detailed explanation of the solution. I spent hours figuring this out, so I wanted to share the steps that worked for me. Whether you're using an NGINX controller on a cloud provider or bare metal, the solution is similar, with minor differences in implementation.

The Problem:

The NGINX ingress controller doesn't parse the proxy headers because the proxy protocol is likely disabled by default.

The Solution:

You need to enable the proxy protocol in your NGINX ingress controller. Here’s a step-by-step guide:

  1. Edit the ConfigMap: First, modify the NGINX ingress controller's ConfigMap to enable the proxy protocol.

    Run the following command:

    kubectl edit configmap -n <namespace> <configmap-name>

    Then, add or modify the following line in the data section:

    data:
     use-proxy-protocol: "true"
  2. Enable Proxy Protocol in the Service: Next, you need to enable the proxy protocol in the service that exposes the NGINX ingress.

    Run the following command:

    kubectl edit service -n <namespace> <service-name>

    Add the following annotation (note: the annotation key may differ based on your cloud provider):

    annotations:
     service.beta.kubernetes.io/loadbalancer-enable-proxy-protocol: "true"
  3. Restart the NGINX Ingress Controller Deployment: Finally, you need to roll out a restart of the NGINX ingress controller deployment for the changes to take effect.

    Run this command:

    kubectl rollout restart deployment ingress-nginx-controller -n ingress-nginx

After completing these steps, everything should work as expected!

All annotations that depend on the proxy protocol then work like magic, e.g. nginx.ingress.kubernetes.io/limit-rps and nginx.ingress.kubernetes.io/whitelist-source-range.
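(For reference, a non-interactive sketch of the three steps above; the resource names, namespace, and annotation key are assumptions taken from this example and must match your environment and provider:)

    # 1. enable PROXY protocol parsing in the controller ConfigMap
    kubectl patch configmap ingress-nginx-controller -n ingress-nginx \
      --type merge -p '{"data":{"use-proxy-protocol":"true"}}'

    # 2. ask the load balancer to send PROXY protocol (annotation key is provider-specific)
    kubectl annotate service ingress-nginx-controller -n ingress-nginx \
      service.beta.kubernetes.io/loadbalancer-enable-proxy-protocol="true" --overwrite

    # 3. restart the controller so the change takes effect
    kubectl rollout restart deployment ingress-nginx-controller -n ingress-nginx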