kubernetes / ingress-nginx

Ingress NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

nginx ingress - tcp services source ip not preserved #11268

Closed mvrk69 closed 6 months ago

mvrk69 commented 7 months ago

What happened:

Hi,

I have a pod running rsyslog as a central logging system, and I need the logs that arrive at my rsyslog pod from the external network to carry the original source IP address, but I have not been able to make that work with ingress-nginx.

I've set externalTrafficPolicy="Local" on the ingress-nginx-controller Service, as explained all over the internet and in the docs.

For example: I have a VM with IP 192.168.0.6 sending logs to my rsyslog pod's service (syslog.apps.k8s.azar.pt - 192.168.0.115), but the logs arrive with IP 10.32.80.24, which is the IP of the ingress-nginx-controller pod, instead of 192.168.0.6.
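As a quick sanity check (a sketch, assuming the default release name and namespace), the policy the Service actually ended up with can be read back directly:

# print the externalTrafficPolicy of the controller Service
kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'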

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.10.0
  Build:         71f78d49f0a496c31d4c19f095469f3f23900f8a
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.25.3

-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

Client Version: v1.27.11
Kustomize Version: v5.0.1
Server Version: v1.27.11

Environment:

kubeadm init --config kubeadm-config.yml --upload-certs

kubectl describe cm kubeadm-config -n kube-system

Name:         kubeadm-config
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
ClusterConfiguration:
----
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: k8sm01.azar.pt:6443
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    flex-volume-plugin-dir: /etc/kubernetes/kubelet-plugins/volume/exec
    node-cidr-mask-size: "20"
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: v1.27.11
networking:
  dnsDomain: cluster.local
  podSubnet: 10.32.0.0/16
  serviceSubnet: 172.16.16.0/22
scheduler: {}

BinaryData
====

Events:  <none>
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update 
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace --set controller.service.externalIPs="{192.168.0.115}" --set controller.service.externalTrafficPolicy="Local" --set controller.extraArgs.enable-ssl-passthrough="" --set controller.extraArgs.tcp-services-configmap="\$\(POD_NAMESPACE\)/tcp-services"
kubectl patch svc ingress-nginx-controller -n ingress-nginx --type='json' -p='[{"op": "add", "path": "/spec/ports/-", "value": {"appProtocol":"tcp","name":"syslog","nodePort":30514,"port":514,"protocol":"TCP","targetPort":514}}]'
kubectl patch svc ingress-nginx-controller -n ingress-nginx --type='json' -p='[{"op": "add", "path": "/spec/ports/-", "value": {"appProtocol":"tcp","name":"syslog-tls","nodePort":31514,"port":6514,"protocol":"TCP","targetPort":6514}}]'
kubectl apply -f /home/core/config/nginx-tcp-services.yaml
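The nginx-tcp-services.yaml applied above is not shown; judging from the describe output below, it is presumably equivalent to a ConfigMap along these lines (a reconstruction, not the original file):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # <external port>: "<namespace>/<service name>:<service port>"
  "514": "syslog/syslog:514"
  "6514": "syslog/syslog:6514"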

kubectl describe cm tcp-services -n ingress-nginx

Name:         tcp-services
Namespace:    ingress-nginx
Labels:       <none>
Annotations:  <none>

Data
====
6514:
----
syslog/syslog:6514
514:
----
syslog/syslog:514

BinaryData
====

Events:  <none>
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.10.0
              helm.sh/chart=ingress-nginx-4.10.0
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: ingress-nginx
Controller:   k8s.io/ingress-nginx
Events:       <none>
NAME                                           READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
pod/ingress-nginx-controller-99bf68dd6-bmw2c   1/1     Running   1          74m   10.32.80.24   k8sm01   <none>           <none>

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                                   AGE   SELECTOR
service/ingress-nginx-controller             LoadBalancer   172.16.18.241   192.168.0.115   80:30179/TCP,443:31480/TCP,514:30514/TCP,6514:31514/TCP   74m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission   ClusterIP      172.16.17.155   <none>          443/TCP                                                   74m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                                                                                     SELECTOR
deployment.apps/ingress-nginx-controller   1/1     1            1           74m   controller   registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                                 DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                                                                                     SELECTOR
replicaset.apps/ingress-nginx-controller-99bf68dd6   1         1         1       74m   controller   registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=99bf68dd6
Name:             ingress-nginx-controller-99bf68dd6-bmw2c
Namespace:        ingress-nginx
Priority:         0
Service Account:  ingress-nginx
Node:             k8sm01/192.168.0.115
Start Time:       Tue, 16 Apr 2024 14:30:08 +0200
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=ingress-nginx
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=ingress-nginx
                  app.kubernetes.io/part-of=ingress-nginx
                  app.kubernetes.io/version=1.10.0
                  helm.sh/chart=ingress-nginx-4.10.0
                  pod-template-hash=99bf68dd6
Annotations:      cni.projectcalico.org/containerID: 0f894c8604532baa408b9e68a4bbd9c1c7dfa205efe94a002209947a20865cca
                  cni.projectcalico.org/podIP: 10.32.80.24/32
                  cni.projectcalico.org/podIPs: 10.32.80.24/32
Status:           Running
IP:               10.32.80.24
IPs:
  IP:           10.32.80.24
Controlled By:  ReplicaSet/ingress-nginx-controller-99bf68dd6
Containers:
  controller:
    Container ID:    cri-o://07b36b88d32b7501118452bb17c80f1a24ec1c6f8d16d1c5b7b5a50c524bd373
    Image:           registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
    Image ID:        registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
    Ports:           80/TCP, 443/TCP, 8443/TCP
    Host Ports:      0/TCP, 0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --enable-metrics=false
      --enable-ssl-passthrough
      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
    State:          Running
      Started:      Tue, 16 Apr 2024 14:43:48 +0200
    Ready:          True
    Restart Count:  1
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-99bf68dd6-bmw2c (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hfd6r (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-hfd6r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  RELOAD  39m (x5 over 64m)  nginx-ingress-controller  NGINX reload triggered due to a change in configuration
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.10.0
                          helm.sh/chart=ingress-nginx-4.10.0
Annotations:              meta.helm.sh/release-name: ingress-nginx
                          meta.helm.sh/release-namespace: ingress-nginx
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       172.16.18.241
IPs:                      172.16.18.241
External IPs:             192.168.0.115
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30179/TCP
Endpoints:                10.32.80.24:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  31480/TCP
Endpoints:                10.32.80.24:443
Port:                     syslog  514/TCP
TargetPort:               514/TCP
NodePort:                 syslog  30514/TCP
Endpoints:                10.32.80.24:514
Port:                     syslog-tls  6514/TCP
TargetPort:               6514/TCP
NodePort:                 syslog-tls  31514/TCP
Endpoints:                10.32.80.24:6514
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     30282
Events:                   <none>
k8s-ci-robot commented 7 months ago

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
longwuyuan commented 7 months ago

/remove-kind bug /kind support /triage needs-information

mvrk69 commented 7 months ago

[root@syslog-5569bf47bc-bfmp5 /]# ls -l /rsyslog/data/remote/
total 4
drwx------. 2 root root 4096 Apr 16 18:56 10.32.80.53

[root@syslog-5569bf47bc-bfmp5 /]# cat /rsyslog/data/remote/10.32.80.53/messages | grep TST
Apr 16 18:55:49 topgun root TST


- kubectl logs ingress-nginx-controller-99bf68dd6-bmw2c -n ingress-nginx 

NGINX Ingress controller
  Release:       v1.10.0
  Build:         71f78d49f0a496c31d4c19f095469f3f23900f8a
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.25.3


W0416 16:49:52.731415       7 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0416 16:49:52.733465       7 main.go:205] "Creating API client" host="https://172.16.16.1:443"
I0416 16:49:57.876143       7 main.go:249] "Running in Kubernetes cluster" major="1" minor="27" git="v1.27.11" state="clean" commit="b9e2ad67ad146db566be5a6db140d47e52c8adb2" platform="linux/amd64"
I0416 16:49:58.002463       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0416 16:49:58.027607       7 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0416 16:49:58.040603       7 nginx.go:265] "Starting NGINX Ingress controller"
I0416 16:49:58.058707       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"dc4b14ee-aa5f-497c-92f0-20f7ed04f2b2", APIVersion:"v1", ResourceVersion:"1423", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0416 16:49:58.061559       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"302a86d4-7d18-4c18-973c-f7d3867ad005", APIVersion:"v1", ResourceVersion:"1515", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0416 16:49:59.144183       7 store.go:440] "Found valid IngressClass" ingress="registry/registry" ingressclass="nginx"
I0416 16:49:59.144497       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"registry", Name:"registry", UID:"11784a6b-0387-47f2-8b69-e5977587c92e", APIVersion:"networking.k8s.io/v1", ResourceVersion:"5321", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0416 16:49:59.242022       7 nginx.go:769] "Starting TLS proxy for SSL Passthrough"
I0416 16:49:59.242132       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I0416 16:49:59.242275       7 nginx.go:308] "Starting NGINX process"
I0416 16:49:59.242970       7 nginx.go:328] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0416 16:49:59.243827       7 controller.go:190] "Configuration changes detected, backend reload required"
I0416 16:49:59.247809       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
I0416 16:49:59.248046       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-99bf68dd6-bmw2c"
I0416 16:49:59.291847       7 controller.go:210] "Backend successfully reloaded"
I0416 16:49:59.291928       7 controller.go:221] "Initial sync, sleeping for 1 second"
[192.168.0.6] [16/Apr/2024:16:52:29 +0000] TCP 200 0 26418 109.097
[192.168.0.6] [16/Apr/2024:16:52:38 +0000] TCP 200 0 127 0.000
[192.168.0.6] [16/Apr/2024:16:53:34 +0000] TCP 200 0 127 0.001
[192.168.0.6] [16/Apr/2024:16:54:13 +0000] TCP 200 0 0 0.000
[192.168.0.6] [16/Apr/2024:16:54:13 +0000] TCP 200 0 0 0.001
[192.168.0.6] [16/Apr/2024:16:55:49 +0000] TCP 200 0 127 0.000



I see the packets arrive at the ingress controller with the correct IP.

So the IP is lost after the ingress controller.
longwuyuan commented 7 months ago

oh ok. If I am not wrong, then using a host IP address means all bets are off and there is not much to be said from the project side. You can route like that or use NodePort etc., but it's not a guarantee of preserving headers or other client info that the controller can rely on.

That is a termination on that host, so only you can tell how any headers and other info are preserved across that hop.

We only test load balancers that offer those features to preserve info across hops etc.

Hope it works out for you with some expert comments.

mvrk69 commented 7 months ago

But it seems the nginx controller is somehow NATing the traffic, because it arrives at nginx with the correct IP 192.168.0.6 and then arrives at the pod with the IP of the nginx controller.

longwuyuan commented 7 months ago

Routing is what the controller does. Preserving client info across hops is not something the controller decides. Do a tcpdump in the controller if possible to check what info is preserved. But AFAIK, this is not what is tested in CI.


longwuyuan commented 7 months ago

For what it is worth, please do a tcpdump in the syslog pod and check the headers received. It may tell whether headers are preserved or not. If they are, then maybe X-Real-IP or some such header may have the info; I am not sure, because I have never tested like this.
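A sketch of that check, using an ephemeral debug container in case tcpdump is not available in the rsyslog image (the namespace, target container name, and debug image are assumptions):

# attach a throwaway container with tcpdump to the syslog pod and watch inbound syslog traffic
kubectl -n syslog debug -it syslog-5569bf47bc-bfmp5 --image=nicolaka/netshoot --target=syslog -- tcpdump -nn -i any tcp port 514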

mvrk69 commented 7 months ago

Isn't X-Real-IP an HTTP header? I don't think we will find anything like that in a syslog TCP packet.

I also just found in the nginx documentation (https://www.nginx.com/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks/#IpBackend) that the only way to preserve the client IP for TCP/UDP traffic to a destination that doesn't support the PROXY protocol, like syslog, is with proxy_bind transparent.

Does the nginx ingress controller for Kubernetes support that?

bmv126 commented 6 months ago

https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/

This requires effort on the k8s networking side, and nginx.conf would need to be updated with proxy_bind transparent.

Setting proxy_bind transparent is not supported in ingress-nginx.

strongjz commented 6 months ago

L7 load balancer needs to have X-Forwarded headers: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-forwarded-headers
L4 load balancer needs proxy-protocol: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#forwarded-for-header
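For context, both settings live in the controller's main ConfigMap; a minimal sketch of enabling them (they only help if the load balancer in front of the controller actually sends the X-Forwarded-* headers or the PROXY protocol):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # must match the controller's --configmap flag
  namespace: ingress-nginx
data:
  # L7: trust X-Forwarded-* headers set by the proxy in front of the controller
  use-forwarded-headers: "true"
  # L4: expect the PROXY protocol on incoming connections (the LB must send it)
  use-proxy-protocol: "true"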

mvrk69 commented 6 months ago

Thank you all for the information.

BhautikChudasama commented 3 months ago

Hey @mvrk69, how did you solve this issue?

mvrk69 commented 3 months ago

Well, it depends: if you have several nodes, then for now I think there is no solution.

Though in my case, as I only have one node, I used a NodePort to expose the rsyslog port on the node, and that's it.
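For anyone taking the same single-node route, a minimal sketch of that workaround, assuming the rsyslog Service lives at syslog/syslog as in the tcp-services ConfigMap above (labels and nodePort are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: syslog-nodeport
  namespace: syslog
spec:
  type: NodePort
  # Local skips the extra SNAT hop, so the pod sees the real client IP
  externalTrafficPolicy: Local
  selector:
    app: syslog        # assumed pod label
  ports:
    - name: syslog
      protocol: TCP
      port: 514
      targetPort: 514
      nodePort: 30514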

vutuong commented 4 hours ago

L7 load balancer needs to have X-Forwarded headers: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-forwarded-headers
L4 load balancer needs proxy-protocol: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#forwarded-for-header

Hi @strongjz, you sent the same link for both L7 and L4. What do you mean for L4 keeping the source IP via the nginx ingress controller? I guess it is this: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol. Then I applied this ConfigMap:

apiVersion: v1
data:
  use-proxy-protocol: "true"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: elk
    meta.helm.sh/release-namespace: elk
  labels:
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  name: ingress-nginx
  namespace: elk

But it is not working as expected. This is what I got from k logs -f elk-ingress-nginx-controller-64bdb766ff-cl6lr:

[image: screenshot of controller log output]
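One detail worth checking in a setup like this (an assumption, not something stated in the thread): use-proxy-protocol is only read from the ConfigMap named by the controller's --configmap flag, so it helps to confirm which ConfigMap this controller actually watches:

# print the controller args and look for the --configmap flag
kubectl -n elk get deploy elk-ingress-nginx-controller -o jsonpath='{.spec.template.spec.containers[0].args}' | tr ',' '\n' | grep configmap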