Closed: MisterTimn closed this issue 3 years ago.
Hello @MisterTimn,
I was not able to reproduce the problem. Is there any chance you can help by writing down the hardware/software/other specifications and a reasonable step-by-step process to reproduce this problem?
/remove-kind bug
@longwuyuan: The label(s) triage/needs-infomation
cannot be applied, because the repository doesn't have them.
/triage needs-information
The easiest way to reproduce this, I think, is on a single-node test cluster:
Ubuntu 20.04.2, kernel 5.4.0-72-generic (swapoff etc.)
Install CRI-O 1.21.0 with the systemd cgroup driver (https://kubernetes.io/docs/setup/production-environment/container-runtimes/#cri-o). Install kubeadm and kubelet v1.21.0.
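For reference, a minimal sketch of the systemd cgroup driver drop-in along the lines of what the linked kubernetes.io page describes (path and keys as documented there; adjust if your CRI-O packaging differs):

cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-cgroup-manager.conf
[crio.runtime]
conmon_cgroup = "pod"
cgroup_manager = "systemd"
EOF
sudo systemctl restart crio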
Init the cluster using kubeadm (tested with both Calico and Flannel) and install the CNI of choice, e.g. Flannel:
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl taint nodes --all node-role.kubernetes.io/master-
Install Nginx using helm, default values:
helm install nginx -n nginx nginx-ingress/nginx-ingress
Our setup has a separate nginx instance as a reverse proxy; it resolves our host domain and forwards requests to the nginx-ingress Service via its NodePort.
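For context, a hedged sketch of what such an edge nginx server block might look like (the host name, file path, and NodePort 30087 are illustrative, not the exact production config):

cat <<'EOF' > /etc/nginx/conf.d/edge-proxy.conf
server {
    listen 80;
    server_name rc.example.test;                # placeholder for the real host domain

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://10.10.181.26:30087;   # node IP : ingress-nginx NodePort
    }
}
EOF
nginx -t && nginx -s reload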
Install the hello web app as a test:
kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment web --type=NodePort --port=8080
kubectl apply -f ingress-nginx.yaml
Ingress-nginx.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
Having an nginx reverse proxy in front of the ingress-nginx controller is neither supported nor tested and is a rare, special use case. It is not even a documented or supported architecture on kubernetes.io.
Please try to reproduce with AWS/GCP/Azure/DO/MetalLB, or try sending requests directly to the NodePort.
How so @longwuyuan? Isn't this the same as the setup described in the docs under bare-metal considerations (self-provisioned edge)? https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#using-a-self-provisioned-edge
We just forward all traffic to the nodeports of nginx-ingress service from there.
The text in that document does not specifically describe an nginx instance on the edge; it merely gives HAProxy as an example.
The context here is that the edge is user-managed, hence its performance is also user-managed, so the timeouts you are reporting are not clearly caused by the ingress-nginx controller and are most likely caused by how the edge nginx handles the traffic.
I hope I get time to find the precise test-case code in the repo and clarify whether an nginx edge in front of the ingress-nginx controller is tested.
I feel the next step is for you to experiment with MetalLB or deploy highly detailed monitoring on that edge. A quicker test you can do is to send traffic directly to the NodePort, bypassing the edge nginx, and update here if you still see the timeouts. Or even go as far as experimenting with HAProxy using TCP-mode listeners.
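If it helps, a rough sketch of the HAProxy TCP-mode listener idea mentioned above (values are illustrative; this simply passes TCP straight through to the ingress-nginx NodePort):

cat <<'EOF' >> /etc/haproxy/haproxy.cfg
frontend ingress_http_in
    bind *:80
    mode tcp
    default_backend ingress_http_nodes

backend ingress_http_nodes
    mode tcp
    server kubnetcl6 10.10.181.26:30087 check
EOF
systemctl reload haproxy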
We already did this; sending requests directly to the NodePort shows the same delays. Below are some logs from timed curl commands:
$ time curl http://10.10.181.26:30087/hello
^C
real 0m37.080s
user 0m0.006s
sys 0m0.006s
$ time curl http://10.10.181.26:30087/hello
Hello, world!
Version: 1.0.0
Hostname: web-79d88c97d6-7xfms
real 0m0.013s
user 0m0.005s
sys 0m0.005s
$ time curl http://10.10.181.26:30087/hello
Hello, world!
Version: 1.0.0
Hostname: web-79d88c97d6-7xfms
real 0m0.013s
user 0m0.005s
sys 0m0.005s
$ time curl http://10.10.181.26:30087/hello
^C
real 0m16.858s
user 0m0.008s
sys 0m0.004s
The weird thing is that we have been running three clusters with the same setup (nginx reverse proxy -> nginx-ingress) for 2 years now without any issues, and it's just this latest installation where we are running into trouble. The Traefik deployment doesn't experience any issues, but we would like to stick with kubernetes-ingress.
@MisterTimn, my thoughts below:
(1) It's absolutely impossible for you to have tested this controller, for the following reason. If the command you used to install was
helm install nginx -n nginx nginx-ingress/nginx-ingress
, then this whole issue is moot, because that is not even a release of this project. The names, being similar, are confusing, and this confusion has happened before. You did not install a release from this project, so there is no chance you could have tested a release of this project. This project's install docs are at https://kubernetes.github.io/ingress-nginx/deploy/#using-helm
(2) Some additional observations:
I don't see a step that created a Service of type NodePort for the ingress-nginx controller. If you installed the controller using the command helm install nginx -n nginx nginx-ingress/nginx-ingress, then the service created by that Helm chart is, by default, of type LoadBalancer. So how could you have even sent traffic through the controller on a NodePort? What is behind 30087?
If you used kubectl expose deployment web --type=NodePort --port=8080
then a new Service of type NodePort is created, and the backend for that Service is your hello-app pod. If you subsequently send traffic to this Service's NodePort, you are completely bypassing the ingress controller. I suspect 30087 is this NodePort, but only you can verify.
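A quick, hedged way to answer the "what is behind 30087" question from the cluster itself (service names and namespaces taken from the outputs later in this thread):

# List every Service that exposes NodePort 30087
kubectl get svc -A -o wide | grep 30087
# Compare the controller's HTTP NodePort with the web Service's NodePort
kubectl -n nginx get svc nginx-ingress-nginx-controller -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}{"\n"}'
kubectl get svc web -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'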
Let me know if something is wrong in my comments. Also, discuss on kubernetes.slack.com with other experts and collectively get proof that this is a bug.
@longwuyuan
(1) I most definitely installed a release from this project; I just didn't use the exact same command as in the docs (the release name differs). To confirm, the output of helm repo list contains the following entry: ingress-nginx https://kubernetes.github.io/ingress-nginx
Chart version: ingress-nginx-3.29.0
App version: 0.45.0
(2) I used the default install with the LoadBalancer type in my latest install, yes, but I have also used an explicit override to NodePort. When the service is set to LoadBalancer, it is my understanding that you can still use the exposed port as a NodePort, since the load balancer essentially takes the request on its IP, sends it to a NodePort, and then on to the ClusterIP. But yes, this might have been confusing, so to confirm I've set it up again with an explicit NodePort service type and I am experiencing the same delays. To confirm that I am using the NodePort of nginx:
Ξ ibcndevs/obelisk → k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-nginx-controller NodePort 10.99.124.85 <none> 80:30087/TCP,443:32318/TCP 45h
nginx-ingress-nginx-controller-admission ClusterIP 10.103.243.232 <none> 443/TCP 45h
Exposing the web service was purely to test whether a regular connection to the web service had issues as well; defining NodePort there isn't necessary if you want to reproduce. My bad.
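On the LoadBalancer point above: as far as I know, kube-proxy still allocates node ports for a type=LoadBalancer Service (unless that allocation is explicitly disabled), and they are visible on the Service spec. A hedged check:

kubectl -n nginx get svc nginx-ingress-nginx-controller \
  -o jsonpath='{range .spec.ports[*]}{.name}{" -> nodePort "}{.nodePort}{"\n"}{end}'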
@MisterTimn, can you please show exactly this information, with command and output as is, in the same sequence as shown here:
date ; hostname ; kubectl -n logs | tail -8
I guess part of this command is missing? Which logs do you mean? I don't have a logs namespace.
Here is the output of the first two commands. I launched these commands from my local machine (I don't install helm on the cluster nodes).
Ξ ~ → date ; hostname ; helm ls -A
Wed Apr 28 10:58:52 AM CEST 2021
manjaro
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/jveessen/ibcndevs/obelisk/obelisk/.deployment/.kube/node6.yaml
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/jveessen/ibcndevs/obelisk/obelisk/.deployment/.kube/node6.yaml
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
nginx nginx 3 2021-04-28 08:50:11.849102594 +0200 CEST deployed ingress-nginx-3.29.0 0.45.0
traefik traefik 2 2021-04-26 11:20:00.083182366 +0200 CEST deployed traefik-9.18.2 2.4.8
Ξ ~ → date ; hostname ; kubectl get all,nodes,ing -A -o wide
Wed Apr 28 10:59:06 AM CEST 2021
manjaro
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-system pod/calico-kube-controllers-56cc59554b-t6lhk 1/1 Running 0 47h 192.168.198.8 kubnetcl6 <none> <none>
calico-system pod/calico-node-szvcx 1/1 Running 0 47h 10.10.181.26 kubnetcl6 <none> <none>
calico-system pod/calico-typha-5f6d769569-h8d2b 1/1 Running 0 47h 10.10.181.26 kubnetcl6 <none> <none>
default pod/web-79d88c97d6-7xfms 1/1 Running 0 47h 192.168.198.9 kubnetcl6 <none> <none>
kube-system pod/coredns-558bd4d5db-5txnl 1/1 Running 0 4d18h 192.168.198.3 kubnetcl6 <none> <none>
kube-system pod/coredns-558bd4d5db-xrhjj 1/1 Running 0 4d18h 192.168.198.2 kubnetcl6 <none> <none>
kube-system pod/etcd-kubnetcl6 1/1 Running 0 4d18h 10.10.181.26 kubnetcl6 <none> <none>
kube-system pod/kube-apiserver-kubnetcl6 1/1 Running 0 4d18h 10.10.181.26 kubnetcl6 <none> <none>
kube-system pod/kube-controller-manager-kubnetcl6 1/1 Running 0 4d18h 10.10.181.26 kubnetcl6 <none> <none>
kube-system pod/kube-scheduler-kubnetcl6 1/1 Running 0 4d18h 10.10.181.26 kubnetcl6 <none> <none>
nginx pod/nginx-ingress-nginx-controller-79dfc84789-9stgq 0/1 CrashLoopBackOff 29 128m 192.168.198.24 kubnetcl6 <none> <none>
nginx pod/nginx-ingress-nginx-controller-7cf8459bbc-l8w6k 1/1 Running 0 47h 192.168.198.21 kubnetcl6 <none> <none>
tigera-operator pod/tigera-operator-675ccbb69c-xr6xl 1/1 Running 0 47h 10.10.181.26 kubnetcl6 <none> <none>
traefik pod/traefik-58565f5478-kzlxq 1/1 Running 0 47h 192.168.198.11 kubnetcl6 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
calico-system service/calico-typha ClusterIP 10.106.55.99 <none> 5473/TCP 2d k8s-app=calico-typha
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d18h <none>
default service/web NodePort 10.110.142.186 <none> 8080:30556/TCP 2d app=web
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 4d18h k8s-app=kube-dns
nginx service/nginx-ingress-nginx-controller NodePort 10.99.124.85 <none> 80:30087/TCP,443:32318/TCP 47h app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx
nginx service/nginx-ingress-nginx-controller-admission ClusterIP 10.103.243.232 <none> 443/TCP 47h app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx
traefik service/traefik LoadBalancer 10.103.33.125 <pending> 80:30913/TCP,443:32707/TCP 47h app.kubernetes.io/instance=traefik,app.kubernetes.io/name=traefik
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
calico-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 2d calico-node docker.io/calico/node:v3.18.1 k8s-app=calico-node
kube-system daemonset.apps/kube-proxy 0 0 0 0 0 kubernetes.io/os=linux,non-calico=true 4d18h kube-proxy k8s.gcr.io/kube-proxy:v1.21.0 k8s-app=kube-proxy
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
calico-system deployment.apps/calico-kube-controllers 1/1 1 1 2d calico-kube-controllers docker.io/calico/kube-controllers:v3.18.1 k8s-app=calico-kube-controllers
calico-system deployment.apps/calico-typha 1/1 1 1 2d calico-typha docker.io/calico/typha:v3.18.1 k8s-app=calico-typha
default deployment.apps/web 1/1 1 1 2d hello-app gcr.io/google-samples/hello-app:1.0 app=web
kube-system deployment.apps/coredns 2/2 2 2 4d18h coredns k8s.gcr.io/coredns/coredns:v1.8.0 k8s-app=kube-dns
nginx deployment.apps/nginx-ingress-nginx-controller 1/1 1 1 47h controller k8s.gcr.io/ingress-nginx/controller:v0.45.0@sha256:c4390c53f348c3bd4e60a5dd6a11c35799ae78c49388090140b9d72ccede1755 app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx
tigera-operator deployment.apps/tigera-operator 1/1 1 1 2d tigera-operator quay.io/tigera/operator:v1.15.1 name=tigera-operator
traefik deployment.apps/traefik 1/1 1 1 47h traefik traefik:2.4.8 app.kubernetes.io/instance=traefik,app.kubernetes.io/name=traefik
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
calico-system replicaset.apps/calico-kube-controllers-56cc59554b 1 1 1 47h calico-kube-controllers docker.io/calico/kube-controllers:v3.18.1 k8s-app=calico-kube-controllers,pod-template-hash=56cc59554b
calico-system replicaset.apps/calico-kube-controllers-5cbf59cb6f 0 0 0 2d calico-kube-controllers docker.io/calico/kube-controllers:v3.18.1 k8s-app=calico-kube-controllers,pod-template-hash=5cbf59cb6f
calico-system replicaset.apps/calico-typha-5f6d769569 1 1 1 47h calico-typha docker.io/calico/typha:v3.18.1 k8s-app=calico-typha,pod-template-hash=5f6d769569
calico-system replicaset.apps/calico-typha-f5d7595f4 0 0 0 2d calico-typha docker.io/calico/typha:v3.18.1 k8s-app=calico-typha,pod-template-hash=f5d7595f4
default replicaset.apps/web-79d88c97d6 1 1 1 2d hello-app gcr.io/google-samples/hello-app:1.0 app=web,pod-template-hash=79d88c97d6
kube-system replicaset.apps/coredns-558bd4d5db 2 2 2 4d18h coredns k8s.gcr.io/coredns/coredns:v1.8.0 k8s-app=kube-dns,pod-template-hash=558bd4d5db
nginx replicaset.apps/nginx-ingress-nginx-controller-79dfc84789 1 1 0 47h controller k8s.gcr.io/ingress-nginx/controller:v0.45.0@sha256:c4390c53f348c3bd4e60a5dd6a11c35799ae78c49388090140b9d72ccede1755 app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=79dfc84789
nginx replicaset.apps/nginx-ingress-nginx-controller-7cf8459bbc 1 1 1 47h controller k8s.gcr.io/ingress-nginx/controller:v0.35.0@sha256:fc4979d8b8443a831c9789b5155cded454cb7de737a8b727bc2ba0106d2eae8b app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7cf8459bbc
tigera-operator replicaset.apps/tigera-operator-675ccbb69c 1 1 1 2d tigera-operator quay.io/tigera/operator:v1.15.1 name=tigera-operator,pod-template-hash=675ccbb69c
traefik replicaset.apps/traefik-58565f5478 1 1 1 47h traefik traefik:2.4.8 app.kubernetes.io/instance=traefik,app.kubernetes.io/name=traefik,pod-template-hash=58565f5478
NAMESPACE NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node/kubnetcl6 Ready control-plane,master 4d18h v1.21.0 10.10.181.26 <none> Ubuntu 20.04.2 LTS 5.4.0-72-generic cri-o://1.21.0
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
default ingress.networking.k8s.io/ingress-nginx nginx * 10.99.124.85 80 47h
default ingress.networking.k8s.io/ingress-traefik traefik * 10.99.124.85 80 47h
I noticed that since I changed the Helm deployment this morning to explicitly use NodePort, there's a CrashLoopBackOff on the nginx pod; here are its logs:
Ξ ~ → k logs nginx-ingress-nginx-controller-79dfc84789-mdfr2
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v0.45.0
Build: 7365e9eeb2f4961ef94e4ce5eb2b6e1bdb55ce5c
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.6
-------------------------------------------------------------------------------
I0428 09:04:15.165390 7 flags.go:208] "Watching for Ingress" class="nginx"
W0428 09:04:15.165432 7 flags.go:213] Ingresses with an empty class will also be processed by this Ingress controller
W0428 09:04:15.166194 7 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0428 09:04:15.166654 7 main.go:241] "Creating API client" host="https://10.96.0.1:443"
I0428 09:04:15.179208 7 main.go:285] "Running in Kubernetes cluster" major="1" minor="21" git="v1.21.0" state="clean" commit="cb303e613a121a29364f75cc67d3d580833a7479" platform="linux/amd64"
I0428 09:04:15.341499 7 main.go:105] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0428 09:04:15.342818 7 main.go:115] "Enabling new Ingress features available since Kubernetes v1.18"
E0428 09:04:15.348002 7 main.go:134] Invalid IngressClass (Spec.Controller) value "nginx.org/ingress-controller". Should be "k8s.io/ingress-nginx"
F0428 09:04:15.348048 7 main.go:135] IngressClass with name nginx is not valid for ingress-nginx (invalid Spec.Controller)
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc00000e001, 0xc0004de000, 0x81, 0x1e1)
k8s.io/klog/v2@v2.4.0/klog.go:1026 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x26915e0, 0xc000000003, 0x0, 0x0, 0xc000164fc0, 0x25e6ecb, 0x7, 0x87, 0x0)
k8s.io/klog/v2@v2.4.0/klog.go:975 +0x19b
k8s.io/klog/v2.(*loggingT).printf(0x26915e0, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1a64137, 0x52, 0xc0000cdb10, 0x1, ...)
k8s.io/klog/v2@v2.4.0/klog.go:750 +0x191
k8s.io/klog/v2.Fatalf(...)
k8s.io/klog/v2@v2.4.0/klog.go:1502
main.main()
k8s.io/ingress-nginx/cmd/nginx/main.go:135 +0xf06
goroutine 6 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x26915e0)
k8s.io/klog/v2@v2.4.0/klog.go:1169 +0x8b
created by k8s.io/klog/v2.init.0
k8s.io/klog/v2@v2.4.0/klog.go:417 +0xdf
goroutine 81 [IO wait]:
internal/poll.runtime_pollWait(0x7f3a662b19d8, 0x72, 0x1c021a0)
runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc00003a718, 0x72, 0x1c02100, 0x2601608, 0x0)
internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc00003a700, 0xc000280000, 0x8dd, 0x8dd, 0x0, 0x0, 0x0)
internal/poll/fd_unix.go:159 +0x1a5
net.(*netFD).Read(0xc00003a700, 0xc000280000, 0x8dd, 0x8dd, 0x203000, 0x74b2db, 0xc000188160)
net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc000590008, 0xc000280000, 0x8dd, 0x8dd, 0x0, 0x0, 0x0)
net/net.go:182 +0x8e
crypto/tls.(*atLeastReader).Read(0xc00000c460, 0xc000280000, 0x8dd, 0x8dd, 0x406, 0x8d8, 0xc00050f710)
crypto/tls/conn.go:779 +0x62
bytes.(*Buffer).ReadFrom(0xc000188280, 0x1bfe120, 0xc00000c460, 0x40b665, 0x181b260, 0x198a1c0)
bytes/buffer.go:204 +0xb1
crypto/tls.(*Conn).readFromUntil(0xc000188000, 0x1c00400, 0xc000590008, 0x5, 0xc000590008, 0x3f5)
crypto/tls/conn.go:801 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc000188000, 0x0, 0x0, 0xc00050fd18)
crypto/tls/conn.go:608 +0x115
crypto/tls.(*Conn).readRecord(...)
crypto/tls/conn.go:576
crypto/tls.(*Conn).Read(0xc000188000, 0xc00041a000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
crypto/tls/conn.go:1252 +0x15f
bufio.(*Reader).Read(0xc00068cd80, 0xc0000282d8, 0x9, 0x9, 0xc00050fd18, 0x1abfa00, 0x9551ab)
bufio/bufio.go:227 +0x222
io.ReadAtLeast(0x1bfdf80, 0xc00068cd80, 0xc0000282d8, 0x9, 0x9, 0x9, 0xc000116050, 0x0, 0x1bfe300)
io/io.go:314 +0x87
io.ReadFull(...)
io/io.go:333
golang.org/x/net/http2.readFrameHeader(0xc0000282d8, 0x9, 0x9, 0x1bfdf80, 0xc00068cd80, 0x0, 0x0, 0xc00050fdd0, 0x46cf65)
golang.org/x/net@v0.0.0-20201110031124-69a78807bb2b/http2/frame.go:237 +0x89
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000282a0, 0xc00056a600, 0x0, 0x0, 0x0)
golang.org/x/net@v0.0.0-20201110031124-69a78807bb2b/http2/frame.go:492 +0xa5
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00050ffa8, 0x0, 0x0)
golang.org/x/net@v0.0.0-20201110031124-69a78807bb2b/http2/transport.go:1819 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00049d380)
golang.org/x/net@v0.0.0-20201110031124-69a78807bb2b/http2/transport.go:1741 +0x6f
created by golang.org/x/net/http2.(*Transport).newClientConn
golang.org/x/net@v0.0.0-20201110031124-69a78807bb2b/http2/transport.go:705 +0x6c5
These are my values:
~ → helm get values nginx
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/jveessen/ibcndevs/obelisk/obelisk/.deployment/.kube/node6.yaml
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/jveessen/ibcndevs/obelisk/obelisk/.deployment/.kube/node6.yaml
USER-SUPPLIED VALUES:
controller:
service:
type: NodePort
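For what it's worth, the fatal log line above says the existing IngressClass named nginx points at nginx.org/ingress-controller (the other NGINX ingress project), while this controller expects k8s.io/ingress-nginx. A hedged way to inspect it, and to re-point it only if nothing else depends on it:

# Which controller does the existing "nginx" IngressClass belong to?
kubectl get ingressclass nginx -o jsonpath='{.spec.controller}{"\n"}'
# Re-point the class at ingress-nginx (only if the nginx.org controller is gone)
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
EOF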
I fixed the Markdown syntax on the commands; please check. Kindly show this from the cluster where you reproduced the problem, with the time prefix on the curl commands.
Sorry for the trouble. Please look at the commands list, delete your previous post of that data, and kindly re-post all those commands and outputs one more time.
I can already see a problem: you have 2 controller pods and one is in CrashLoopBackOff, whereas you are expected to have only 1 pod, since the controller is installed as a Deployment and not a DaemonSet.
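A quick, hedged way to see the stray ReplicaSet left over from the earlier chart revision and to confirm the rollout has converged to a single controller pod:

kubectl -n nginx get deploy,rs,pods -l app.kubernetes.io/name=ingress-nginx -o wide
kubectl -n nginx rollout status deployment/nginx-ingress-nginx-controller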
I think we can make faster progress if you close this issue and come to the ingress-nginx channel on kubernetes.slack.com.
@longwuyuan I got a clean cluster and performed the commands you asked for. I thought it best to re-open this issue and provide the information here.
I've also provided some info on the web backend I deployed and how I deployed the ingress controller.
helm install nginx ingress-nginx/ingress-nginx --set controller.service.type=NodePort --set controller.service.nodePorts.http="30087"
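Equivalently (as far as I can tell), the same settings can go in a values file instead of the --set flags:

cat <<'EOF' > values-nodeport.yaml
controller:
  service:
    type: NodePort
    nodePorts:
      http: 30087
EOF
helm install nginx ingress-nginx/ingress-nginx -f values-nodeport.yaml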
kubectl apply -f ingressClass-nginx.yaml
# Contents ingressClass-nginx.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
kubectl apply -f web.yaml
# Contents web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
kubectl expose deployment web --type=NodePort --port=80
kubectl apply -f ingress-nginx.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
The Kubernetes cluster is single node, k8s 1.21.0, and its IP is 10.10.181.26 (kubnetcl6). IPv6 was disabled along with dnsmasq, systemd-resolved, and rpc (before installing the Kubernetes cluster). Calico was installed with default settings via the Tigera operator (https://docs.projectcalico.org/getting-started/kubernetes/quickstart).
root@kubnetcl6:~# date ; hostname ; helm ls -A
Thu 29 Apr 2021 10:53:06 AM CEST
kubnetcl6
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
nginx nginx 1 2021-04-28 14:28:19.59457436 +0200 CEST deployed ingress-nginx-3.29.0 0.45.0
root@kubnetcl6:~# date ; hostname ; kubectl get all,nodes,ing -A -o wide
Thu 29 Apr 2021 11:01:28 AM CEST
kubnetcl6
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-system pod/calico-kube-controllers-665d4888cd-dxd67 1/1 Running 0 20h 192.168.198.1 kubnetcl6 <none> <none>
calico-system pod/calico-node-7cxgd 1/1 Running 0 20h 10.10.181.26 kubnetcl6 <none> <none>
calico-system pod/calico-typha-57c586cd47-jgk94 1/1 Running 0 20h 10.10.181.26 kubnetcl6 <none> <none>
default pod/web-5b8c89f8db-pbljk 1/1 Running 0 19h 192.168.198.8 kubnetcl6 <none> <none>
kube-system pod/coredns-558bd4d5db-2hlbs 1/1 Running 0 20h 192.168.198.2 kubnetcl6 <none> <none>
kube-system pod/coredns-558bd4d5db-qd8cf 1/1 Running 0 20h 192.168.198.3 kubnetcl6 <none> <none>
kube-system pod/etcd-kubnetcl6 1/1 Running 0 20h 10.10.181.26 kubnetcl6 <none> <none>
kube-system pod/kube-apiserver-kubnetcl6 1/1 Running 0 20h 10.10.181.26 kubnetcl6 <none> <none>
kube-system pod/kube-controller-manager-kubnetcl6 1/1 Running 0 20h 10.10.181.26 kubnetcl6 <none> <none>
kube-system pod/kube-proxy-4ck7w 1/1 Running 0 20h 10.10.181.26 kubnetcl6 <none> <none>
kube-system pod/kube-scheduler-kubnetcl6 1/1 Running 0 20h 10.10.181.26 kubnetcl6 <none> <none>
nginx pod/nginx-ingress-nginx-controller-79dfc84789-28r5n 1/1 Running 1 20h 192.168.198.5 kubnetcl6 <none> <none>
tigera-operator pod/tigera-operator-8686b6fc5c-xs8cl 1/1 Running 0 20h 10.10.181.26 kubnetcl6 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
calico-system service/calico-typha ClusterIP 10.97.120.118 <none> 5473/TCP 20h k8s-app=calico-typha
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h <none>
default service/web NodePort 10.101.112.64 <none> 80:31854/TCP 19h app=web
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 20h k8s-app=kube-dns
nginx service/nginx-ingress-nginx-controller NodePort 10.104.39.139 <none> 80:30087/TCP,443:32012/TCP 20h app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx
nginx service/nginx-ingress-nginx-controller-admission ClusterIP 10.100.130.192 <none> 443/TCP 20h app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
calico-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 20h calico-node docker.io/calico/node:v3.18.2 k8s-app=calico-node
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 20h kube-proxy k8s.gcr.io/kube-proxy:v1.21.0 k8s-app=kube-proxy
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
calico-system deployment.apps/calico-kube-controllers 1/1 1 1 20h calico-kube-controllers docker.io/calico/kube-controllers:v3.18.2 k8s-app=calico-kube-controllers
calico-system deployment.apps/calico-typha 1/1 1 1 20h calico-typha docker.io/calico/typha:v3.18.2 k8s-app=calico-typha
default deployment.apps/web 1/1 1 1 20h nginx nginx:alpine app=web
kube-system deployment.apps/coredns 2/2 2 2 20h coredns k8s.gcr.io/coredns/coredns:v1.8.0 k8s-app=kube-dns
nginx deployment.apps/nginx-ingress-nginx-controller 1/1 1 1 20h controller k8s.gcr.io/ingress-nginx/controller:v0.45.0@sha256:c4390c53f348c3bd4e60a5dd6a11c35799ae78c49388090140b9d72ccede1755 app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx
tigera-operator deployment.apps/tigera-operator 1/1 1 1 20h tigera-operator quay.io/tigera/operator:v1.15.2 name=tigera-operator
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
calico-system replicaset.apps/calico-kube-controllers-665d4888cd 1 1 1 20h calico-kube-controllers docker.io/calico/kube-controllers:v3.18.2 k8s-app=calico-kube-controllers,pod-template-hash=665d4888cd
calico-system replicaset.apps/calico-typha-57c586cd47 1 1 1 20h calico-typha docker.io/calico/typha:v3.18.2 k8s-app=calico-typha,pod-template-hash=57c586cd47
default replicaset.apps/web-5b8c89f8db 1 1 1 19h nginx nginx:alpine app=web,pod-template-hash=5b8c89f8db
default replicaset.apps/web-6b59dc98b6 0 0 0 20h nginx nginx:alpine app=web,pod-template-hash=6b59dc98b6
kube-system replicaset.apps/coredns-558bd4d5db 2 2 2 20h coredns k8s.gcr.io/coredns/coredns:v1.8.0 k8s-app=kube-dns,pod-template-hash=558bd4d5db
nginx replicaset.apps/nginx-ingress-nginx-controller-79dfc84789 1 1 1 20h controller k8s.gcr.io/ingress-nginx/controller:v0.45.0@sha256:c4390c53f348c3bd4e60a5dd6a11c35799ae78c49388090140b9d72ccede1755 app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=79dfc84789
tigera-operator replicaset.apps/tigera-operator-8686b6fc5c 1 1 1 20h tigera-operator quay.io/tigera/operator:v1.15.2 name=tigera-operator,pod-template-hash=8686b6fc5c
NAMESPACE NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node/kubnetcl6 Ready control-plane,master 20h v1.21.0 10.10.181.26 <none> Ubuntu 20.04.2 LTS 5.4.0-72-generic cri-o://1.21.0
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
default ingress.networking.k8s.io/ingress-nginx nginx * 10.104.39.139 80 19h
The following commands were repeated between successful and unsuccessful curl commands:
date ; hostname ; kubectl -n nginx logs nginx-ingress-nginx-controller-79dfc84789-28r5n | tail -8
date ; hostname ; time curl 10.10.181.26:30087
date ; hostname ; kubectl -n nginx logs nginx-ingress-nginx-controller-79dfc84789-28r5n | tail -8
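As an aside, a small loop like this (hypothetical; the 10-second client-side timeout makes a hung request fail fast instead of waiting it out) makes it easier to catch the intermittent hangs while tailing the controller log:

for i in $(seq 1 20); do
  date
  # --max-time caps each attempt; a timed-out request reports HTTP 000
  curl -s -o /dev/null --max-time 10 \
       -w "attempt $i -> HTTP %{http_code}, total %{time_total}s\n" \
       http://10.10.181.26:30087
  kubectl -n nginx logs nginx-ingress-nginx-controller-79dfc84789-28r5n | tail -1
done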
So here we are requesting directly on the NodePort service of the nginx-ingress controller. The following is one successful request followed by an unsuccessful one.
root@kubnetcl6:~# date ; hostname ; kubectl -n nginx logs nginx-ingress-nginx-controller-79dfc84789-28r5n | tail -8
Thu 29 Apr 2021 11:03:20 AM CEST
kubnetcl6
10.10.181.26 - - [29/Apr/2021:08:48:12 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.002 [default-web-80] [] 192.168.198.8:80 612 0.004 200 49685d4f3303c4df492520c612d47ad2
10.10.181.26 - - [29/Apr/2021:08:48:12 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.002 [default-web-80] [] 192.168.198.8:80 612 0.000 200 0315735541ae02c003b9dbfbf60b7a59
10.10.181.26 - - [29/Apr/2021:08:48:13 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.004 200 d222137dc45b71f3608bf80ffc1172dd
10.10.181.26 - - [29/Apr/2021:08:48:14 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.000 [default-web-80] [] 192.168.198.8:80 612 0.004 200 b027ae16dd68fd1085169e2dbc3699a0
10.10.181.26 - - [29/Apr/2021:08:48:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.004 200 319a02edd451049de5cd937d0b707159
10.10.181.26 - - [29/Apr/2021:08:48:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 90ef3d0155b5dbfdd9d9a66575d112be
10.10.181.26 - - [29/Apr/2021:08:48:16 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.002 [default-web-80] [] 192.168.198.8:80 612 0.004 200 376674453e8a0b5993b76880a6d20126
10.10.181.26 - - [29/Apr/2021:08:48:17 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 154829f976f54df5b54ddd9abb672b9b
root@kubnetcl6:~# date ; hostname ; time curl 10.10.181.26:30087
Thu 29 Apr 2021 11:03:31 AM CEST
kubnetcl6
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
real 0m0.021s
user 0m0.011s
sys 0m0.005s
root@kubnetcl6:~# date ; hostname ; kubectl -n nginx logs nginx-ingress-nginx-controller-79dfc84789-28r5n | tail -8
Thu 29 Apr 2021 11:03:39 AM CEST
kubnetcl6
10.10.181.26 - - [29/Apr/2021:08:48:12 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.002 [default-web-80] [] 192.168.198.8:80 612 0.000 200 0315735541ae02c003b9dbfbf60b7a59
10.10.181.26 - - [29/Apr/2021:08:48:13 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.004 200 d222137dc45b71f3608bf80ffc1172dd
10.10.181.26 - - [29/Apr/2021:08:48:14 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.000 [default-web-80] [] 192.168.198.8:80 612 0.004 200 b027ae16dd68fd1085169e2dbc3699a0
10.10.181.26 - - [29/Apr/2021:08:48:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.004 200 319a02edd451049de5cd937d0b707159
10.10.181.26 - - [29/Apr/2021:08:48:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 90ef3d0155b5dbfdd9d9a66575d112be
10.10.181.26 - - [29/Apr/2021:08:48:16 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.002 [default-web-80] [] 192.168.198.8:80 612 0.004 200 376674453e8a0b5993b76880a6d20126
10.10.181.26 - - [29/Apr/2021:08:48:17 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 154829f976f54df5b54ddd9abb672b9b
10.10.181.26 - - [29/Apr/2021:09:03:32 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 7b877c4733b0d56950344e5eae165536
root@kubnetcl6:~# date ; hostname ; time curl 10.10.181.26:30087
Thu 29 Apr 2021 11:03:41 AM CEST
kubnetcl6
^C
real 3m3.440s
user 0m0.017s
sys 0m0.005s
root@kubnetcl6:~# date ; hostname ; kubectl -n nginx logs nginx-ingress-nginx-controller-79dfc84789-28r5n | tail -8
Thu 29 Apr 2021 11:06:49 AM CEST
kubnetcl6
10.10.181.26 - - [29/Apr/2021:08:48:12 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.002 [default-web-80] [] 192.168.198.8:80 612 0.000 200 0315735541ae02c003b9dbfbf60b7a59
10.10.181.26 - - [29/Apr/2021:08:48:13 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.004 200 d222137dc45b71f3608bf80ffc1172dd
10.10.181.26 - - [29/Apr/2021:08:48:14 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.000 [default-web-80] [] 192.168.198.8:80 612 0.004 200 b027ae16dd68fd1085169e2dbc3699a0
10.10.181.26 - - [29/Apr/2021:08:48:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.004 200 319a02edd451049de5cd937d0b707159
10.10.181.26 - - [29/Apr/2021:08:48:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 90ef3d0155b5dbfdd9d9a66575d112be
10.10.181.26 - - [29/Apr/2021:08:48:16 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.002 [default-web-80] [] 192.168.198.8:80 612 0.004 200 376674453e8a0b5993b76880a6d20126
10.10.181.26 - - [29/Apr/2021:08:48:17 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 154829f976f54df5b54ddd9abb672b9b
10.10.181.26 - - [29/Apr/2021:09:03:32 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 7b877c4733b0d56950344e5eae165536
root@kubnetcl6:~#
Using curl -v for more info:
root@kubnetcl6:~# date ; hostname ; kubectl -n nginx logs nginx-ingress-nginx-controller-79dfc84789-28r5n | tail -8
Thu 29 Apr 2021 11:22:14 AM CEST
kubnetcl6
10.10.181.26 - - [29/Apr/2021:08:48:13 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.004 200 d222137dc45b71f3608bf80ffc1172dd
10.10.181.26 - - [29/Apr/2021:08:48:14 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.000 [default-web-80] [] 192.168.198.8:80 612 0.004 200 b027ae16dd68fd1085169e2dbc3699a0
10.10.181.26 - - [29/Apr/2021:08:48:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.004 200 319a02edd451049de5cd937d0b707159
10.10.181.26 - - [29/Apr/2021:08:48:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 90ef3d0155b5dbfdd9d9a66575d112be
10.10.181.26 - - [29/Apr/2021:08:48:16 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.002 [default-web-80] [] 192.168.198.8:80 612 0.004 200 376674453e8a0b5993b76880a6d20126
10.10.181.26 - - [29/Apr/2021:08:48:17 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 154829f976f54df5b54ddd9abb672b9b
10.10.181.26 - - [29/Apr/2021:09:03:32 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 7b877c4733b0d56950344e5eae165536
10.10.181.26 - - [29/Apr/2021:09:22:08 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 d31533b31bf93588aced78892965fa0a
root@kubnetcl6:~# date ; hostname ; time curl -v 10.10.181.26:30087
Thu 29 Apr 2021 11:22:15 AM CEST
kubnetcl6
* Trying 10.10.181.26:30087...
* TCP_NODELAY set
* Connected to 10.10.181.26 (10.10.181.26) port 30087 (#0)
> GET / HTTP/1.1
> Host: 10.10.181.26:30087
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Thu, 29 Apr 2021 09:22:15 GMT
< Content-Type: text/html
< Content-Length: 612
< Connection: keep-alive
< Last-Modified: Tue, 13 Apr 2021 15:50:50 GMT
< ETag: "6075bdda-264"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host 10.10.181.26 left intact
real 0m0.020s
user 0m0.012s
sys 0m0.004s
root@kubnetcl6:~# date ; hostname ; kubectl -n nginx logs nginx-ingress-nginx-controller-79dfc84789-28r5n | tail -8
Thu 29 Apr 2021 11:22:16 AM CEST
kubnetcl6
10.10.181.26 - - [29/Apr/2021:08:48:14 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.000 [default-web-80] [] 192.168.198.8:80 612 0.004 200 b027ae16dd68fd1085169e2dbc3699a0
10.10.181.26 - - [29/Apr/2021:08:48:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.004 200 319a02edd451049de5cd937d0b707159
10.10.181.26 - - [29/Apr/2021:08:48:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 90ef3d0155b5dbfdd9d9a66575d112be
10.10.181.26 - - [29/Apr/2021:08:48:16 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.002 [default-web-80] [] 192.168.198.8:80 612 0.004 200 376674453e8a0b5993b76880a6d20126
10.10.181.26 - - [29/Apr/2021:08:48:17 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 154829f976f54df5b54ddd9abb672b9b
10.10.181.26 - - [29/Apr/2021:09:03:32 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 7b877c4733b0d56950344e5eae165536
10.10.181.26 - - [29/Apr/2021:09:22:08 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 d31533b31bf93588aced78892965fa0a
10.10.181.26 - - [29/Apr/2021:09:22:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0" 82 0.002 [default-web-80] [] 192.168.198.8:80 612 0.004 200 351a95bfef7173756c3aae3bf25e71d6
root@kubnetcl6:~# date ; hostname ; time curl -v 10.10.181.26:30087
Thu 29 Apr 2021 11:22:18 AM CEST
kubnetcl6
* Trying 10.10.181.26:30087...
* TCP_NODELAY set
* Connected to 10.10.181.26 (10.10.181.26) port 30087 (#0)
> GET / HTTP/1.1
> Host: 10.10.181.26:30087
> User-Agent: curl/7.68.0
> Accept: */*
>
^C
real 0m59.496s
user 0m0.013s
sys 0m0.005s
root@kubnetcl6:~# date ; hostname ; kubectl -n nginx logs nginx-ingress-nginx-controller-79dfc84789-28r5n | tail -8
Thu 29 Apr 2021 11:23:20 AM CEST
kubnetcl6
10.10.181.26 - - [29/Apr/2021:08:48:14 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.000 [default-web-80] [] 192.168.198.8:80 612 0.004 200 b027ae16dd68fd1085169e2dbc3699a0
10.10.181.26 - - [29/Apr/2021:08:48:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.004 200 319a02edd451049de5cd937d0b707159
10.10.181.26 - - [29/Apr/2021:08:48:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 90ef3d0155b5dbfdd9d9a66575d112be
10.10.181.26 - - [29/Apr/2021:08:48:16 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.002 [default-web-80] [] 192.168.198.8:80 612 0.004 200 376674453e8a0b5993b76880a6d20126
10.10.181.26 - - [29/Apr/2021:08:48:17 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.75.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 154829f976f54df5b54ddd9abb672b9b
10.10.181.26 - - [29/Apr/2021:09:03:32 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 7b877c4733b0d56950344e5eae165536
10.10.181.26 - - [29/Apr/2021:09:22:08 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0" 82 0.001 [default-web-80] [] 192.168.198.8:80 612 0.000 200 d31533b31bf93588aced78892965fa0a
10.10.181.26 - - [29/Apr/2021:09:22:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0" 82 0.002 [default-web-80] [] 192.168.198.8:80 612 0.004 200 351a95bfef7173756c3aae3bf25e71d6
root@kubnetcl6:~#
Requesting from my local machine to the node yields the same results.
Traceroute from the cluster node:
root@kubnetcl6:~# traceroute 10.10.181.26
traceroute to 10.10.181.26 (10.10.181.26), 30 hops max, 60 byte packets
1 kubnetcl6.kubeprod (10.10.181.26) 0.074 ms 0.034 ms 0.027 ms
From local machine:
traceroute to 10.10.181.26 (10.10.181.26), 30 hops max, 60 byte packets
1 192.168.124.1 (192.168.124.1) 10.881 ms 23.203 ms 35.082 ms
2 hal.ilabt.imec.be (193.191.148.193) 35.103 ms 35.135 ms 35.136 ms
3 10.10.181.26 (10.10.181.26) 35.133 ms 35.129 ms 35.125 ms
root@kubnetcl6:~# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:30087 0.0.0.0:* LISTEN 2971/kube-proxy
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 2728/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 2971/kube-proxy
tcp 0 0 127.0.0.1:9098 0.0.0.0:* LISTEN 3594/calico-typha
tcp 0 0 0.0.0.0:10250 0.0.0.0:* LISTEN 2728/kubelet
tcp 0 0 127.0.0.1:9099 0.0.0.0:* LISTEN 4817/calico-node
tcp 0 0 0.0.0.0:6443 0.0.0.0:* LISTEN 2557/kube-apiserver
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 2426/etcd
tcp 0 0 10.10.181.26:2379 0.0.0.0:* LISTEN 2426/etcd
tcp 0 0 0.0.0.0:32012 0.0.0.0:* LISTEN 2971/kube-proxy
tcp 0 0 10.10.181.26:2380 0.0.0.0:* LISTEN 2426/etcd
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 2426/etcd
tcp 0 0 127.0.0.1:32877 0.0.0.0:* LISTEN 1199/crio
tcp 0 0 0.0.0.0:31854 0.0.0.0:* LISTEN 2971/kube-proxy
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/init
tcp 0 0 0.0.0.0:10256 0.0.0.0:* LISTEN 2971/kube-proxy
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 2147/kube-controlle
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 2285/kube-scheduler
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1221/sshd: /usr/sbi
tcp 0 0 0.0.0.0:5473 0.0.0.0:* LISTEN 3594/calico-typha
root@kubnetcl6:~# cat /etc/resolv.conf
nameserver 8.8.8.8
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
@MisterTimn did you find any solution for your problem?
After multiple different setups (different CNIs, configurations) we switched to Traefik and haven't looked back.
@MisterTimn it could be that I ran into the same issue as you (which first showed up as random timeouts on configured Ingress objects). I found a workaround for the problem over here:
https://github.com/kubernetes/ingress-nginx/issues/6141
And my full story / more details are here:
https://github.com/cri-o/cri-o/issues/5779
@longwuyuan I think the issue might be related to CRI-O and not to ingress-nginx (because I cannot reproduce it with Docker as the container engine) - that's why I opened a ticket over at the CRI-O project, just FYI.
NGINX Ingress controller version:
Kubernetes version (use kubectl version): 1.21.0 (also tested 1.19 and 1.20)
Environment:
Kernel (e.g. uname -a): 5.4.0-72-generic
What happened: Over the last few days we have been troubleshooting issues with a new on-prem cluster. We were experiencing rather nondeterministic time-outs on our requests: 50-60% of requests returned directly, the rest timed out after 1 minute. After tcpdumping and wiresharking a lot of pods, we went down a rabbit hole of switching from Flannel to Calico because of possible kernel issues, using different or no encapsulation, going from iptables to eBPF with Calico replacing kube-proxy, trying different k8s cluster versions, and trying different nginx-ingress Helm chart versions all the way back to 2.18, all with the same results.
In a last-ditch effort I installed Traefik as the ingress controller instead of nginx-ingress and, lo and behold, no time-outs and rapid responses. Right now this is running on a 1.21 single-node cluster with Calico eBPF: a default install of nginx times out sporadically while the Traefik ingress works without a problem.
What you expected to happen: Timely and reproducible results when requesting ingress paths. We don't know if this is an issue with nginx or still the result of something weird in our cluster configuration, i.e. some setting that Traefik can work with but nginx clearly can't. We run nginx-ingress on 3 different clusters with varying Kubernetes versions, all without issue; the only differences are the Ubuntu versions (18.04 vs 20.04) and kernels (4.x vs 5.4). This is why we went through the trouble of trying different networking setups; we thought the packets were getting dropped at kube-proxy or by the CNI.
How to reproduce it: I think this is rather hard to reproduce, because it might depend on the environment or the underlying kernel.
We installed the ingress controller via Helm and set up a basic ingress rule,
then refreshed the browser or repeatedly ran curl requests:
curl -k https://rc.obelisk.ilabt.imec.be/hello
/kind bug