Closed. Azbesciak closed this issue 2 months ago.
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/remove-kind bug
show the output of kubectl get svc,ing -A -o wide
@longwuyuan
kubectl get svc,ing -A -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 101d <none>
kube-system service/metrics-server ClusterIP 10.152.183.127 <none> 443/TCP 101d k8s-app=metrics-server
kube-system service/dashboard-metrics-scraper ClusterIP 10.152.183.17 <none> 8000/TCP 101d k8s-app=dashboard-metrics-scraper
kube-system service/kubernetes-dashboard NodePort 10.152.183.93 <none> 443:32741/TCP 101d k8s-app=kubernetes-dashboard
observability service/kube-prom-stack-kube-prome-prometheus ClusterIP 10.152.183.121 <none> 9090/TCP 53d app.kubernetes.io/name=prometheus,prometheus=kube-prom-stack-kube-prome-prometheus
kube-system service/kube-prom-stack-kube-prome-kube-etcd ClusterIP None <none> 2381/TCP 53d component=etcd
kube-system service/kube-prom-stack-kube-prome-kube-scheduler ClusterIP None <none> 10259/TCP 53d <none>
kube-system service/kube-prom-stack-kube-prome-kube-proxy ClusterIP None <none> 10249/TCP 53d k8s-app=kube-proxy
kube-system service/kube-prom-stack-kube-prome-kube-controller-manager ClusterIP None <none> 10257/TCP 53d <none>
kube-system service/kube-prom-stack-kube-prome-coredns ClusterIP None <none> 9153/TCP 53d k8s-app=kube-dns
observability service/kube-prom-stack-grafana ClusterIP 10.152.183.102 <none> 80/TCP 53d app.kubernetes.io/instance=kube-prom-stack,app.kubernetes.io/name=grafana
observability service/kube-prom-stack-kube-state-metrics ClusterIP 10.152.183.71 <none> 8080/TCP 53d app.kubernetes.io/instance=kube-prom-stack,app.kubernetes.io/name=kube-state-metrics
observability service/kube-prom-stack-prometheus-node-exporter ClusterIP 10.152.183.151 <none> 9100/TCP 53d app.kubernetes.io/instance=kube-prom-stack,app.kubernetes.io/name=prometheus-node-exporter
observability service/kube-prom-stack-kube-prome-alertmanager ClusterIP 10.152.183.190 <none> 9093/TCP 53d alertmanager=kube-prom-stack-kube-prome-alertmanager,app.kubernetes.io/name=alertmanager
observability service/kube-prom-stack-kube-prome-operator ClusterIP 10.152.183.164 <none> 443/TCP 53d app=kube-prometheus-stack-operator,release=kube-prom-stack
kube-system service/kube-prom-stack-kube-prome-kubelet ClusterIP None <none> 10250/TCP,10255/TCP,4194/TCP 53d <none>
observability service/alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 53d app.kubernetes.io/name=alertmanager
observability service/prometheus-operated ClusterIP None <none> 9090/TCP 53d app.kubernetes.io/name=prometheus
observability service/loki-memberlist ClusterIP None <none> 7946/TCP 53d app=loki,release=loki
observability service/loki-headless ClusterIP None <none> 3100/TCP 53d app=loki,release=loki
observability service/loki ClusterIP 10.152.183.154 <none> 3100/TCP 53d app=loki,release=loki
observability service/tempo ClusterIP 10.152.183.203 <none> 3100/TCP,16687/TCP,16686/TCP,6831/UDP,6832/UDP,14268/TCP,14250/TCP,9411/TCP,55680/TCP,55681/TCP,4317/TCP,4318/TCP,55678/TCP 53d app.kubernetes.io/instance=tempo,app.kubernetes.io/name=tempo
observability service/nfs-server ClusterIP 10.152.183.182 <none> 2049/TCP,20048/TCP,111/TCP 53d io.kompose.service=nfs-server
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 20d k8s-app=kube-dns
test service/nfs-server ClusterIP 10.152.183.183 <none> 2049/TCP,20048/TCP,111/TCP 33h io.kompose.service=nfs-server
test service/redis ClusterIP None <none> 6379/TCP,16379/TCP 33h io.kompose.service=redis
test service/mongodb ClusterIP None <none> 27017/TCP 33h io.kompose.service=mongodb
test service/app-manager ClusterIP 10.152.183.228 <none> 80/TCP,443/TCP 33h io.kompose.service=app-manager
test service/auth-service ClusterIP 10.152.183.64 <none> 8082/TCP 33h io.kompose.service=auth-service
test service/geocode-service ClusterIP 10.152.183.213 <none> 8083/TCP 33h io.kompose.service=geocode-service
test service/layers-service ClusterIP 10.152.183.86 <none> 8084/TCP 33h io.kompose.service=layers-service
test service/route-service ClusterIP 10.152.183.44 <none> 8081/TCP,5005/TCP 33h io.kompose.service=route-service
test service/map-view ClusterIP 10.152.183.80 <none> 80/TCP,443/TCP 33h io.kompose.service=map-view
test service/api-gateway ClusterIP 10.152.183.73 <none> 8080/TCP 33h io.kompose.service=api-gateway
ingress-nginx service/ingress-nginx-controller-admission ClusterIP 10.152.183.236 <none> 443/TCP 8m37s app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx service/ingress-nginx-controller LoadBalancer 10.152.183.36 <pending> 8000:32043/TCP,4430:31224/TCP,8888:31063/TCP,9000:32164/TCP,3000:31801/TCP 8m37s app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kubectl describe svc -n $ingresscontrollernamespace
Name: ingress-nginx-controller-admission
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=0.1.0
helm.sh/chart=ingress-0.1.0
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.236
IPs: 10.152.183.236
Port: https-webhook 443/TCP
TargetPort: webhook/TCP
Endpoints: 10.1.38.88:8443
Session Affinity: None
Events: <none>
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=0.1.0
helm.sh/chart=ingress-0.1.0
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.36
IPs: 10.152.183.36
Port: http-map-p 8000/TCP
TargetPort: 8000/TCP
NodePort: http-map-p 32043/TCP
Endpoints: 10.1.38.88:8000
Port: https-map-p 4430/TCP
TargetPort: 4430/TCP
NodePort: https-map-p 31224/TCP
Endpoints: 10.1.38.88:4430
Port: app-mng-map-p 8888/TCP
TargetPort: 8888/TCP
NodePort: app-mng-map-p 31063/TCP
Endpoints: 10.1.38.88:8888
Port: dashboard 9000/TCP
TargetPort: 9000/TCP
NodePort: dashboard 32164/TCP
Endpoints: 10.1.38.88:9000
Port: grafana 3000/TCP
TargetPort: 3000/TCP
NodePort: grafana 31801/TCP
Endpoints: 10.1.38.88:3000
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 30441
Events: <none>
disable/... -> I did that now; I even removed the whole deployment and deployed it again. It did not work. I started with that and tried every permutation (on/off) of the options I described. And I had externalTrafficPolicy: Local and type: LoadBalancer, as you may have seen in the attached helm chart.
No, I do not have MetalLB. Microk8s status is below; as I mentioned, the ingress is 100% based on the attached helm chart.
addons:
enabled:
dashboard # (core) The Kubernetes dashboard
dns # (core) CoreDNS
ha-cluster # (core) Configure high availability on the current node
helm # (core) Helm - the package manager for Kubernetes
helm3 # (core) Helm 3 - the package manager for Kubernetes
hostpath-storage # (core) Storage class; allocates storage from host directory
metrics-server # (core) K8s Metrics Server for API access to service metrics
observability # (core) A lightweight observability stack for logs, traces and metrics
storage # (core) Alias to hostpath-storage add-on, deprecated
disabled:
cert-manager # (core) Cloud native certificate management
community # (core) The community addons repository
gpu # (core) Automatic enablement of Nvidia CUDA
host-access # (core) Allow Pods connecting to Host services smoothly
ingress # (core) Ingress controller for external access
kube-ovn # (core) An advanced network fabric for Kubernetes
mayastor # (core) OpenEBS MayaStor
metallb # (core) Loadbalancer for your Kubernetes cluster
prometheus # (core) Prometheus operator for monitoring and logging
rbac # (core) Role-Based Access Control for authorisation
registry # (core) Private image registry exposed on localhost:32000
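With the metallb addon disabled, nothing in the cluster can allocate an external address for a Service of type LoadBalancer, which is why its EXTERNAL-IP stays <pending>. On microk8s it can be enabled with an address range (microk8s enable metallb:<from>-<to>); with MetalLB v0.13+ the equivalent CRD config would look roughly like this sketch, where the pool name and the range are illustrative, not taken from the issue:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ingress-pool            # illustrative name
  namespace: metallb-system
spec:
  addresses:
    - 10.20.18.30-10.20.18.40   # illustrative range on the network seen later in the thread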
And as I mentioned, that IP comes from the ingress controller; please see the image I attached, it is inside it. The log on the left is from my app.
I also changed it back to NodePort
ingress-nginx service/ingress-nginx-controller NodePort 10.152.183.96 <none> 8000:30982/TCP,4430:32359/TCP,8888:30386/TCP,9000:30640/TCP,3000:30137/TCP 25s app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
No difference
BTW, this is the nginx log format in the service I mentioned:
log_format access '$remote_addr - $remote_user [$time_local] '
'"$request" $status $bytes_sent '
'"$http_referer" "$http_user_agent"';
The ingress-controller status is pending, so none of your curl/test data is valid. Please fix that and then test.
service/ingress-nginx-controller LoadBalancer 10.152.183.36 <pending>
@longwuyuan I did it just after I responded, see https://github.com/kubernetes/ingress-nginx/issues/9685#issuecomment-1453928565
I changed to NodePort, no difference (BUT IT IS NOT PENDING, it just does not give me the client IP). With LoadBalancer it will never be ready,
and according to https://stackoverflow.com/a/44112285/9658307 that is all I can do; I can assign the IP myself, which I also did:
ingress-nginx service/ingress-nginx-controller LoadBalancer 10.152.183.20 10.20.18.30 8000:31741/TCP,4430:31558/TCP,8888:30962/TCP,9000:32123/TCP,3000:31446/TCP 90s app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
And it also did not change anything (I undeployed the whole chart, waited some time, and deployed again, so there was nothing like a grace period at work; the service was reachable from outside; the IP is still invalid).
I have literally this config:
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    {{- include "ingress.labels" . | nindent 4 }}
  name: ingress-nginx-controller
  namespace: {{ .Release.Namespace }}
spec:
  externalTrafficPolicy: Local
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    # I manage 80/443 via tcp services, default 80/443 is overridden in controller to 7998/7999, that service I mention operates on 8000
    {{- range .Values.endpoints }}
    - port: {{ .port }}
      targetPort: {{ .port }}
      name: {{ .name }}
      protocol: {{ .protocol }}
    {{- end }}
  selector:
    app.kubernetes.io/component: controller
    {{- include "ingress.selectorLabels" . | nindent 4 }}
  type: LoadBalancer
  externalIPs:
    - 10.20.18.30
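The endpoints list that the template ranges over is not shown in the thread, but the rendered Service above implies values along these lines (an assumed reconstruction from the port names in the describe output):

endpoints:
  - name: http-map-p
    port: 8000
    protocol: TCP
  - name: https-map-p
    port: 4430
    protocol: TCP
  - name: app-mng-map-p
    port: 8888
    protocol: TCP
  - name: dashboard
    port: 9000
    protocol: TCP
  - name: grafana
    port: 3000
    protocol: TCP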
and the config map for it (deployed runtime version):
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  uid: f37f2643-0d6f-4248-b66e-0567f222aa31
  resourceVersion: '17375231'
  creationTimestamp: '2023-03-04T04:34:26Z'
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 0.1.0
    helm.sh/chart: ingress-0.1.0
  annotations:
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
  managedFields:
    - manager: helm
      operation: Update
      apiVersion: v1
      time: '2023-03-04T04:34:26Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:data:
          .: {}
          f:allow-snippet-annotations: {}
        f:metadata:
          f:annotations:
            .: {}
            f:meta.helm.sh/release-name: {}
            f:meta.helm.sh/release-namespace: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/component: {}
            f:app.kubernetes.io/instance: {}
            f:app.kubernetes.io/managed-by: {}
            f:app.kubernetes.io/name: {}
            f:app.kubernetes.io/part-of: {}
            f:app.kubernetes.io/version: {}
            f:helm.sh/chart: {}
data:
  allow-snippet-annotations: 'true'
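Note that the only key set here is allow-snippet-annotations, so none of the documented client-IP-related ConfigMap options are active. For orientation, a sketch of the keys that usually matter for this (the option names are from the ingress-nginx ConfigMap documentation; the values are illustrative):

data:
  allow-snippet-annotations: 'true'
  use-forwarded-headers: 'true'        # trust incoming X-Forwarded-* headers (HTTP traffic only)
  enable-real-ip: 'true'               # enable the nginx real_ip module
  proxy-real-ip-cidr: '10.20.18.0/24'  # illustrative trusted source CIDR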
As I mentioned, I just have a bare-metal server where I installed Kubernetes and deployed ingress-nginx.
@Azbesciak I think you have provided information as per your own thought process and convenience. What is actually needed here is information relevant to the issue you are reporting. So please see the new issue template and answer those questions in your original message.
The information related to the client IP starts with the output of the Linux command ip a on the client's command prompt.
You can delete the other information from this issue, as it has no relevance to the issue. Also, you need to factor in that Layer 7 inspection of the headers in the client's request, containing the client IP address, will not happen for a TCP/UDP port that has been exposed in the Service of type LoadBalancer via the config for this project's ingress-nginx controller.
@longwuyuan
I added a new section in the initial ticket (Update with request tracing). I added rather than replaced, because IMO the previous info might still be useful; please read it all carefully.
BTW, the initial info was the same as expected in the template; I just removed the last section because the whole helm deployment is based on your deployment (mentioned above). I also gave the whole config and the helm chart itself. And it also looked fine.
I also want to note that the app was migrated wholesale from docker-compose, and it has the same architecture except that there is now k8s in between. With docker-compose everything worked fine; I was able to see client IPs (I mean that we did not change anything outside).
Also you mentioned
Also you need to factor in that Layer 7 inspection of the headers in the client's request, containing the client IP address, will not happen for a TCP/UDP port that has been exposed in the Service of type LoadBalancer via the config for this project's ingress-nginx controller
can you elaborate on that? Please notice that I also changed the service type to NodePort (it is not in the logs above, but I did, as I mentioned) and it made no difference.
Thank you for your time and support.
@Azbesciak after reading all the content here, my opinion is that you are repeatedly providing information and updates from your own opinion and point of view, and paying less attention to the details of the requests for information and to the info relevant for triaging this issue. You could be trying to help sincerely, but somehow I am not able to make the progress that I wish I could. I am not an expert, but I can surely help triage this.
I have experienced some odd error messages while testing this on release v1.6.4 of the controller. So I was hoping to get on the same page with you, but it's not happening. Here are some significant observations:
I don't see any ingress object in the output of kubectl get svc,ing -A -o wide, so why are you sending a curl request? And I have just now tested the controller on minikube, and I can get the real client IP address in the logs of the controller, so there is no problem to be solved in the controller code related to getting the real client IP address.
@longwuyuan I added it. I provided every command you expected. With comments.
And I have just now tested the controller on minikube and I can get the real client ip address in the logs of the controller so there is no problem to be solved in the controller code, related to getting the real client ip address
So why does the controller receive the client IP, while on my app side I see the controller's internal IP?
I don't see any ingress object in the output of kubectl get svc,ing -A -o wide so why are you sending a curl request ???
And I also told you that my app is working. I get my expected message, from my production app, not an example "ok" or something.
@longwuyuan
Ok, let us approach this from the other side.
Why do you think that there is no issue on the controller side? I get the exact controller IP in my app.
Look, below is ifconfig executed inside ingress-nginx-controller.
Now, look into my app logs.
No surprise: when I invoke curl 0.0.0.0:8000 inside ingress-nginx-controller, it also shows up in the app log, under the same IP.
What is the real complete URL you are using to access your app ?
http://10.20.18.30:8000 - this is our test server, but the production one has the same issue (on prod it is on 80/443).
The whole app is behind a VPN.
The API is behind the /api/1 path. It does not matter; on / and any other path, index.html is returned.
And the whole traffic on a given port is redirected to the app, so it does not matter whether it is http://10.20.18.30:8000 or http://10.20.18.30:8000/my/favourite/path or something.
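This per-port redirection is expressed in the controller's tcp-services ConfigMap. The thread never shows that ConfigMap, so the following is an assumed reconstruction from the service list and port names above:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  '8000': 'test/map-view:80'                          # assumed from tcp-test-map-view-80
  '3000': 'observability/kube-prom-stack-grafana:80'  # assumed mapping
  '9000': 'kube-system/kubernetes-dashboard:443'      # assumed mapping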
where is the IP address 10.20.18.30 ?
Yes, we are in a private network. But this is a separate server, not my laptop or something.
Well, I hope someone can solve your issue. I am not getting the answer to a simple question like "where is the IP address". I mean, I really would like to understand where the IP address is, because you mentioned you have the controller listening on a nodePort, so I expected that you need the node's IP address + nodePort in your URL.
On a completely different note, I think you should get on the Kubernetes Slack and discuss this there, as there are more people there. nodePort is never a good choice for real use.
The interface on which you terminate your connection needs to be capable of working with a Layer 7 process that can understand proxy-protocol and forward the headers to the upstream. In cloud environments like AWS etc., the service provider offers configurable parameters to enable proxy-protocol attributes like preserving the real client IP address while forwarding traffic to the upstream.
Sorry, I did not get you; I thought you were asking about geolocation.
That IP address belongs to the main cluster machine; it is hosted entirely on our own servers (no AWS, Azure or the like). This machine is also the entry point to the cluster; there is no load balancer or other endpoint above it. Our cluster contains 2 machines; the controller is hosted on this one. The app is also there.
configurable parameters to enable the proxy-protocol attributes like preserving the real client IP address while forwarding traffic to the upstream.
Ok, but... since ingress-nginx-controller does not...? So my natural understanding is that the controller does not pass it. And when I look at the internal nginx config inside that controller:
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name="tcp-test-map-view-80";
    }
    listen 8000;
    listen [::]:8000;
    proxy_timeout 600s;
    proxy_next_upstream on;
    proxy_next_upstream_timeout 600s;
    proxy_next_upstream_tries 3;
    proxy_pass upstream_balancer;
}
I suppose the problem is there. I know the headers are set in the main http section, but maybe something does not work there?
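A note on that block: it lives in the controller's stream {} (Layer 4) section, which forwards raw TCP bytes; an L4 proxy cannot inject HTTP headers such as X-Forwarded-For, so the only standard way to carry the client address across it is proxy protocol. The exposing-TCP/UDP-services documentation supports this through optional PROXY flags in the tcp-services ConfigMap; a sketch using the mapping implied by tcp-test-map-view-80 (the exact entry is an assumption):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # format: <namespace>/<service>:<port>[:PROXY][:PROXY]
  # the first PROXY decodes proxy protocol from the client side (listen),
  # the second encodes proxy protocol towards the upstream (proxy_pass)
  '8000': 'test/map-view:80::PROXY'

The upstream nginx would then have to accept it (listen 80 proxy_protocol; plus set_real_ip_from and real_ip_header proxy_protocol;), otherwise requests arrive garbled.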
It seems to me that you are not using the documented and supported install https://kubernetes.github.io/ingress-nginx/deploy/#microk8s
I don't see data here that points to a problem in the controller. I do see data here that something you want to achieve is not happening. Since you are following neither this project's documented install nor the microk8s documentation, I am not sure what the next steps are. I hope there are other users out there who are doing the same thing you are doing and have already solved the problem you are trying to solve. I hope they help you.
@longwuyuan Thank you for your help. Yes, I do not have the default microk8s ingress installation - but why does that make a difference here...? From my point of view there is no difference. Ok, I do not have any Ingress object, but it would only allow me to route traffic to port 80, whereas all my apps are on other ports, so it would be useless. All other configuration is as described there.
And btw, the installation I have comes from your repo - as also mentioned. https://github.com/kubernetes/ingress-nginx/blob/main/deploy/static/provider/baremetal/deploy.yaml
@longwuyuan BTW I found the same issue, from 8 April 2021. https://github.com/kubernetes/ingress-nginx/issues/7022
You were also included there
The problem here is that we are unable to have a data-founded discussion about the headers.
In my nginx app I added printing of the X-Forwarded-For and X-Real-Ip headers, like:
[upstream_http_x_forwarded_for=$upstream_http_x_forwarded_for upstream_http_x_real_ip=$upstream_http_x_real_ip http_x_real_ip=$http_x_real_ip http_x_forwarded_for=$http_x_forwarded_for]
I know that only the http_ variables make sense there, but just to be sure.
All 4 are empty:
[upstream_http_x_forwarded_for=- upstream_http_x_real_ip=- http_x_real_ip=- http_x_forwarded_for=-] 10.1.38.69
/retitle proxy-protocol without LoadBalancer on microk8s
/kind support
This is stale, but we won't close it automatically; just bear in mind that the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out on #ingress-nginx-dev on Kubernetes Slack.
I managed to fix this issue in my Microk8s v1.30 Kubernetes cluster, where the NGINX ingress controller is installed using the Microk8s ingress addon.
To do so, I edited the nginx-load-balancer-microk8s-conf configmap in the ingress namespace and added the following:
data:
  enable-real-ip: "true"
The project is deprecating TCP/UDP forwarding (https://github.com/kubernetes/ingress-nginx/issues/11666), so there is no action item to be tracked in this issue. Hence closing the issue.
/close
@longwuyuan: Closing this issue.
What happened: I am exposing my services via the tcp config map, not the normal way the ingress does on 80 (although one service is on it too, but with the default 80/443 bypassed to 7998 and 7999); all mapping is through it. I need to retrieve my client IP.
I have the following config in the controller's config map.
The controller's service is of type LoadBalancer and has externalTrafficPolicy: Local; in general everything works: I can access grafana on 3000, the kubernetes dashboard on 9000, my services on my desired ports. That is fine. What is not: I cannot, even with the above config, retrieve my client IP.
I checked the nginx config inside the controller, find it attached - sorry for the extension, github does not support conf [ingress-ngnix.txt]; the most interesting fragment is below,
for grafana - as you see, the proxy config is not complete compared to the section directly in http.server.listen. I see in http.server.listen that there is a redirect, but as mentioned I still get an invalid IP as the client IP. Instead, I get the controller's internal IP (10.1.38.72 for example).
What you expected to happen: I want to see my client IP.
I also checked with v1.5.1, no difference.
Kubernetes version (use kubectl version):
Environment:
Cloud provider or hardware configuration: bare metal
OS (e.g. from /etc/os-release):
Kernel (e.g. uname -a):
Install tools:
Basic cluster related info:
kubectl version
kubectl get nodes -o wide (ingress, as you see, is pinned to maptest01)
How to reproduce this issue:
I suppose microk8s does not cause the problem here; you have the whole helm chart attached. My service which expects the client IP is also another nginx (a web application that serves static files), but as mentioned I get the controller's internal IP there, and it also changes when I restart the controller. (I also checked enable-real-ip, no diff except that it was set to 0.0.0.0 in the stream.server block.)
Anything else we need to know: I checked out for example https://github.com/kubernetes/ingress-nginx/issues/6163 or https://github.com/kubernetes/ingress-nginx/issues/6136 (and the config map doc) - no help. If it is not a bug, please excuse me and give me some hints on how to solve it. I cannot change the way I use these TCP services.
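One way to pin down where the client address gets lost would be to point a spare tcp-services port at a header-echoing backend instead of the real app; a hypothetical sketch (the echo-test name and image choice are illustrative, not from the issue):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-test               # hypothetical test backend
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-test
  template:
    metadata:
      labels:
        app: echo-test
    spec:
      containers:
        - name: whoami
          image: traefik/whoami  # echoes RemoteAddr and all request headers back to the caller
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo-test
  namespace: test
spec:
  selector:
    app: echo-test
  ports:
    - port: 80
      targetPort: 80

Mapping a spare port to test/echo-test:80 in the tcp-services ConfigMap and curling it from outside would show exactly which RemoteAddr survives each hop.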
Update with request tracing
Windows' ifconfig, relevant part (I am connected via VPN, but there were no problems with the docker-compose solution that way, and nothing in our architecture has changed since then, except that we replaced docker-compose with k8s).
The request comes from the web app, from Chrome. Below is the generated curl for bash.
I also used tcpflow to see how it looks on the server side (same node where ingress-nginx-controller is placed); find it below.
Response headers from Chrome - same as above, but copied via Chrome's 'copy response headers'.
kubectl logs $ingresscontrollerpodname -n $ingresscontrollernamespace
All requests above have the same IP. BTW, not every request is placed there; I executed a couple more and they were not appended. I tried in general to use that, but I do not know what it really is. I also tried to see the access log (I even enabled it with enable-access-log-for-default-backend: "true" - no difference). And just to be sure, I invoked it inside ingress-nginx-controller:
kubectl get svc,ing -A -o wide
As I mentioned in the comments, service/ingress-nginx-controller was both LoadBalancer and NodePort - no difference; it also has externalTrafficPolicy: Local
kubectl describe pod $ingresscontrollerpodname -n $ingresscontrollernamespace
kubectl describe svc $ingresscontrollersvcname -n $ingresscontrollernamespace
kubectl -n $appnamespace describe svc $svcname
kubectl -n $appnamespace describe ing $ingressname
...I have no Ingress in the app namespace because, as I mentioned, I am using TCP services, which redirect directly to the given service. In general, kubectl describe ing -A gives "No resources found" (my app is working fine on 8000, others on 3000, 9000 etc).
kubectl -n $appnamespace logs $apppodname
Only the relevant ones; all IP addresses are the same. The nginx log format is:
Just for my own purposes, I created a simple nginx docker-compose with a config that contains only one path (the log pattern is the same as above, i.e. I get $remote_addr).
It returns IP 10.20.18.1, so the same as in ingress-nginx-controller.