Closed: mvrk69 closed this issue 6 months ago.
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
- kubectl describe command
- logger command or some other command sending payload to the cluster, etc.

/remove-kind bug
/kind support
/triage needs-information
proxy-protocol doesn't apply: I don't have a load balancer in front of my k8s node; I'm contacting the node IP address (192.168.0.115) directly.
kube-proxy also doesn't apply: I'm using Calico with the eBPF data plane (kube-proxy is not running).
[root@topgun /]# logger -n syslog.apps.k8s.azar.pt -T -P 514 TST

[root@syslog-5569bf47bc-bfmp5 /]# ls -l /rsyslog/data/remote/
total 4
drwx------. 2 root root 4096 Apr 16 18:56 10.32.80.53

[root@syslog-5569bf47bc-bfmp5 /]# cat /rsyslog/data/remote/10.32.80.53/messages | grep TST
Apr 16 18:55:49 topgun root TST
- kubectl logs ingress-nginx-controller-99bf68dd6-bmw2c -n ingress-nginx
NGINX Ingress controller
  Release:       v1.10.0
  Build:         71f78d49f0a496c31d4c19f095469f3f23900f8a
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.25.3

W0416 16:49:52.731415       7 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0416 16:49:52.733465       7 main.go:205] "Creating API client" host="https://172.16.16.1:443"
I0416 16:49:57.876143       7 main.go:249] "Running in Kubernetes cluster" major="1" minor="27" git="v1.27.11" state="clean" commit="b9e2ad67ad146db566be5a6db140d47e52c8adb2" platform="linux/amd64"
I0416 16:49:58.002463       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0416 16:49:58.027607       7 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0416 16:49:58.040603       7 nginx.go:265] "Starting NGINX Ingress controller"
I0416 16:49:58.058707       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"dc4b14ee-aa5f-497c-92f0-20f7ed04f2b2", APIVersion:"v1", ResourceVersion:"1423", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0416 16:49:58.061559       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"302a86d4-7d18-4c18-973c-f7d3867ad005", APIVersion:"v1", ResourceVersion:"1515", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0416 16:49:59.144183       7 store.go:440] "Found valid IngressClass" ingress="registry/registry" ingressclass="nginx"
I0416 16:49:59.144497       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"registry", Name:"registry", UID:"11784a6b-0387-47f2-8b69-e5977587c92e", APIVersion:"networking.k8s.io/v1", ResourceVersion:"5321", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0416 16:49:59.242022       7 nginx.go:769] "Starting TLS proxy for SSL Passthrough"
I0416 16:49:59.242132       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I0416 16:49:59.242275       7 nginx.go:308] "Starting NGINX process"
I0416 16:49:59.242970       7 nginx.go:328] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0416 16:49:59.243827       7 controller.go:190] "Configuration changes detected, backend reload required"
I0416 16:49:59.247809       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
I0416 16:49:59.248046       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-99bf68dd6-bmw2c"
I0416 16:49:59.291847       7 controller.go:210] "Backend successfully reloaded"
I0416 16:49:59.291928       7 controller.go:221] "Initial sync, sleeping for 1 second"
[192.168.0.6] [16/Apr/2024:16:52:29 +0000] TCP 200 0 26418 109.097
[192.168.0.6] [16/Apr/2024:16:52:38 +0000] TCP 200 0 127 0.000
[192.168.0.6] [16/Apr/2024:16:53:34 +0000] TCP 200 0 127 0.001
[192.168.0.6] [16/Apr/2024:16:54:13 +0000] TCP 200 0 0 0.000
[192.168.0.6] [16/Apr/2024:16:54:13 +0000] TCP 200 0 0 0.001
[192.168.0.6] [16/Apr/2024:16:55:49 +0000] TCP 200 0 127 0.000
I see the packets arrive at the ingress controller with the correct IP.
So the IP is lost after the ingress controller.
Oh, ok. If I am not wrong, then using the host IP address directly means all bets are off and there is not much to be said from the project side. You can route like that, or via NodePort, etc., but there is no guarantee of preserving headers or other client info that the controller can rely on.
That is a termination on that host, so only you can tell how any headers and other info are preserved across that hop.
We only test load balancers that offer features to preserve client info across hops.
Hope it works out for you via some expert comments.
But it seems the nginx controller is somehow NATing the traffic, because it arrives at nginx with the correct IP 192.168.0.6 and then arrives at the pod with the IP of the nginx controller.
Routing is what the controller does. Preserving client info across hops is not something the controller decides. Do a tcpdump in the controller if possible to check what info is preserved. But AFAIK, this is not what is tested in CI.
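For anyone wanting to run that capture: one way to get tcpdump into the controller pod is a debug ephemeral container. This is a sketch only; the pod name is the one from this thread, the debug image and syslog port 514 are assumptions, and ephemeral-container support depends on your cluster version:

```shell
# Capture syslog traffic (TCP port 514, per this thread) inside the
# ingress-nginx controller pod via an ephemeral debug container.
# Pod name is from this thread; image choice is an assumption.
kubectl -n ingress-nginx debug ingress-nginx-controller-99bf68dd6-bmw2c \
  --image=nicolaka/netshoot -it -- \
  tcpdump -nn -i any port 514
```

The source addresses printed by tcpdump show directly whether the client IP (192.168.0.6 here) still exists at that hop or has already been rewritten.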
For what it is worth, please do a tcpdump in the syslog pod and check the headers received. It may tell whether headers are preserved. If they are preserved, then maybe X-Real-IP or some such header may have the info; I am not sure, because I never tested like this.
Isn't X-Real-IP an HTTP header? I don't think we will find anything like that in a syslog TCP packet.
I also just found in the nginx documentation (https://www.nginx.com/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks/#IpBackend) that the only way nginx can preserve the client IP for TCP/UDP traffic to a destination that doesn't support the PROXY protocol (like syslog) is with proxy_bind transparent.
Does the nginx ingress controller for kubernetes support that?
https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/
This requires effort on the k8s networking side and nginx.conf updated with proxy_bind transparent.
Setting proxy_bind transparent is not supported in ingress-nginx.
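For reference, the upstream stream-module configuration that the linked NGINX blog post describes looks roughly like this. It is a sketch only: ingress-nginx does not expose this setting, the backend address is hypothetical, and IP transparency additionally requires TPROXY-style routing and capabilities on the host:

```nginx
# Sketch of IP transparency for a plain-TCP backend (per the NGINX blog
# post linked above). Not supported by ingress-nginx; requires special
# routing/capabilities on the host to work at all.
stream {
    server {
        listen 514;
        # Bind outgoing connections to the client's own source address,
        # so the backend sees the real client IP instead of nginx's.
        proxy_bind $remote_addr transparent;
        proxy_pass 10.32.80.53:514;  # hypothetical syslog backend
    }
}
```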
L7 Load balancer needs to have X-Forwarded: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-forwarded-headers
L4 Load balancer needs proxy-protocol: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#forwarded-for-header
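To illustrate what proxy-protocol means at L4: the load balancer prepends a one-line text header to the TCP stream before the application bytes, and that header is how the original client IP survives the hop. A minimal simulation of a PROXY protocol v1 header, using addresses from this thread (the client port 40514 is made up):

```shell
# Simulate the PROXY protocol v1 header an L4 load balancer would send
# ahead of the syslog payload: "PROXY TCP4 <src-ip> <dst-ip> <src-port> <dst-port>".
header='PROXY TCP4 192.168.0.6 192.168.0.115 40514 514'
# Field 3 is the original client IP that the receiver can recover:
printf '%s\r\n' "$header" | awk '{print $3}'   # prints 192.168.0.6
```

This only helps when something in front of nginx actually sends that header, which is why the maintainers keep pointing at a real load balancer; a plain syslog client like logger never sends it.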
Thank you all for the information.
Hey @mvrk69, how did you solve this issue?
Well, it depends. If you have several nodes, then for now I think there is no solution.
In my case, as I only have one node, I used a NodePort to expose the rsyslog port on the node, and that's it.
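For anyone hitting the same wall, that single-node workaround can be sketched as a plain NodePort Service pointing straight at the rsyslog pod, bypassing the ingress controller entirely. All names, labels, and the node port below are hypothetical; adjust them to your deployment:

```yaml
# Hypothetical NodePort Service exposing rsyslog directly on the node,
# bypassing ingress-nginx. With externalTrafficPolicy: Local and no
# extra proxy hop, the pod sees the real client source IP.
apiVersion: v1
kind: Service
metadata:
  name: rsyslog-nodeport   # hypothetical name
  namespace: logging       # hypothetical namespace
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: rsyslog           # must match your rsyslog pod labels
  ports:
    - name: syslog-tcp
      protocol: TCP
      port: 514
      targetPort: 514
      nodePort: 30514      # hypothetical port in the 30000-32767 range
```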
L7 Load balancer needs to have X-Forwarded: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-forwarded-headers
L4 Load balancer needs proxy-protocol: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#forwarded-for-header
Hi @strongjz, you sent the same link for both L7 and L4. What do you mean for L4 to keep the source IP via the nginx ingress controller? I guess this is: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol
Then I applied this ConfigMap:
apiVersion: v1
data:
  use-proxy-protocol: "true"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: elk
    meta.helm.sh/release-namespace: elk
  labels:
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  name: ingress-nginx
  namespace: elk
But it is not working as expected; I got: k logs -f elk-ingress-nginx-controller-64bdb766ff-cl6lr
What happened:
Hi,
I have a pod with rsyslog running as a central logging system, and I need the logs that arrive at my rsyslog pod from the external network to arrive with the original source IP address, but I have not been able to make this work with nginx ingress.
I've set the ingress-nginx-controller service externalTrafficPolicy="Local" as explained all over the internet and in the docs.
Example: I have a VM with IP 192.168.0.6 which is sending logs to my rsyslog pod service (syslog.apps.k8s.azar.pt - 192.168.0.115), but the logs arrive with IP 10.32.80.24, which is the IP of the ingress-nginx-controller, instead of 192.168.0.6.
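The externalTrafficPolicy mentioned above lives on the controller's Service object; as a reference, the relevant fragment looks like this (name and namespace assume the standard install, so verify against your own deployment):

```yaml
# Relevant fragment of the ingress-nginx controller Service
# (standard install name/namespace assumed, not taken from this thread):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer            # or NodePort, depending on the install
  externalTrafficPolicy: Local  # route only to local endpoints, preserving client IP
```

Note that Local only prevents the SNAT done by cross-node forwarding; as the thread shows, the nginx proxy hop itself still opens a new TCP connection to the backend.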
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
Kubernetes version (use kubectl version):
Environment:
Cloud provider or hardware configuration: KVM VM
OS (e.g. from /etc/os-release): Fedora CoreOS 39.20240322.3.1
Kernel (e.g. uname -a):
Install tools:
kubectl describe cm kubeadm-config -n kube-system
Basic cluster related info:
kubectl get nodes -o wide
How was the ingress-nginx-controller installed:
kubectl describe cm tcp-services -n ingress-nginx
If helm was used then please show output of: helm ls -A | grep -i ingress
If helm was used then please show output of: helm -n <ingresscontrollernamespace> get values <helmreleasename>
kubectl describe ingressclasses
kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>