kubernetes / ingress-nginx

Ingress NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

Cannot access my service running on a Kubernetes cluster via Ingress #6506

Closed pangwawa closed 2 years ago

pangwawa commented 3 years ago

Describe the bug

I am trying to access my service running on a Kubernetes cluster through an Ingress, but it fails.

Please tell me why. Thanks.

To Reproduce

cat faskapp-deployment.yaml

```yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp-1
spec:
  selector:
    matchLabels:
      run: flaskapp-1
  replicas: 1
  template:
    metadata:
      labels:
        run: flaskapp-1
    spec:
      containers:
      - name: flaskapp-1
        image: jcdemo/flaskapp
        ports:
        - containerPort: 5000

```

cat faskapp-service.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flaskapp-1
  labels:
    run: flaskapp-1
spec:
  ports:
  - port: 5000
    name: web
    targetPort: 5000 
  selector:
    run: flaskapp-1
```
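As a quick sanity check before involving the Ingress, the Service can be tested directly with a port-forward (a minimal sketch; it assumes the manifests above were applied to the default namespace):

```sh
# Apply the manifests, then forward local port 5000 to the Service.
kubectl apply -f faskapp-deployment.yaml -f faskapp-service.yaml
kubectl port-forward svc/flaskapp-1 5000:5000
# In a second terminal:
curl http://127.0.0.1:5000/
```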

cat faskapp-ingress.yaml

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: faskapp-ingress
  #kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: mydemo.com     
    http:
      paths:
      - path: /
        backend:
          serviceName: flaskapp-1  
          servicePort: 5000
```
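For reference, on newer clusters the same Ingress would be written against the networking.k8s.io/v1 API, where spec.ingressClassName replaces the kubernetes.io/ingress.class annotation. A minimal sketch, assuming the controller's IngressClass is named nginx:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: faskapp-ingress
spec:
  ingressClassName: nginx   # assumption: the controller's IngressClass is called "nginx"
  rules:
  - host: mydemo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: flaskapp-1
            port:
              number: 5000
```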

kubectl describe ingress

Name:             faskapp-ingress
Namespace:        default
Address:          192.168.161.121
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  mydemo.com  
              /   flaskapp-1:5000 (10.244.97.193:5000)
Annotations:  <none>
Events:
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  Sync    2m14s (x2 over 2m25s)  nginx-ingress-controller  Scheduled for sync

My environment: Kubernetes version v1.19.0, ingress-nginx version v0.41.0.

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v0.41.0
  Build:         f3a6b809bd4bb6608266b35cf5b8423bf107d7bc
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.4
-------------------------------------------------------------------------------

Expected behavior: I want to access my service through the domain name 'mydemo.com'; domain name resolution is working.

Additional context: the environment is fine, and I have already tried to access my service through a NodePort-type Service, which works.
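One way to exercise the Ingress path without relying on DNS is to send a request straight to the ingress controller with an explicit Host header. A hedged sketch, where <node-ip> and <http-nodeport> are placeholders for the controller Service's node IP and HTTP NodePort:

```sh
# Send a request to the ingress controller while presenting the Host configured in the Ingress.
curl -v -H "Host: mydemo.com" http://<node-ip>:<http-nodeport>/
```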

FarhanSajid1 commented 2 years ago

Facing the same issue

FarhanSajid1 commented 2 years ago

I’ve only seen this when I’ve added 2 DNS entries: example.com and api.example.com.

Is there something that I’m missing?

angelosnm commented 2 years ago

It's extremely funny that so many people are facing this issue and no proper answer has been added.

goors commented 2 years ago

Same here. I am running on 3 physical nodes. Stuck at:

I0625 19:48:55.872416       6 status.go:299] "updating Ingress status" namespace="default" ingress="echo-ingress" currentValue=[] newValue=[{IP:192.168.253.112 Hostname: Ports:[]}]
I0625 19:48:55.878828       6 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"echo-ingress", UID:"7afc9711-34d6-4452-8c79-769902d0d843", APIVersion:"networking.k8s.io/v1", ResourceVersion:"27064", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
dinobagio commented 2 years ago

Do you have any other ingress ?

Can you reach this ingress from outside your cluster ?


goors commented 2 years ago

Do you have any other ingress ?

No, I do not.

Can you reach this ingress from outside your cluster ?

No, I cannot, since I am on a VPN where the NodePort (32211 in this case) of the ingress service is not accessible.

When I run curl 192.168.253.113:32211 from inside the server, I can see the nginx (controller) page.

The IP 192.168.253.113 is assigned to the ingress test-ingress. @dinobagio

Update:

If I use the host network on the deployment then of course everything works, but I don't want to do that. What would be the point of the ClusterIP then? :)

Update:

On 192.168.253.113 (which should be the ingress) I can see the nginx process, but I don't see port 80 or 443 open. The only time they are open is if I use a deployment with the host network.

I am using v3.23/manifests/calico.yaml.

This is the output of ps aux | grep nginx:

101      37592  0.0  0.0    204     4 ?        Ss   16:43   0:00 /usr/bin/dumb-init -- /nginx-ingress-controller --election-id=ingress-controller-leader --controller-class=k8s.io/ingress-nginx --ingress-class=nginx --configmap=ingress-nginx/ingress-nginx-controller --validating-webhook=:8443 --validating-webhook-certificate=/usr/local/certificates/cert --validating-webhook-key=/usr/local/certificates/key
101      37605  0.1  1.0 742976 41956 ?        Ssl  16:43   0:02 /nginx-ingress-controller --election-id=ingress-controller-leader --controller-class=k8s.io/ingress-nginx --ingress-class=nginx --configmap=ingress-nginx/ingress-nginx-controller --validating-webhook=:8443 --validating-webhook-certificate=/usr/local/certificates/cert --validating-webhook-key=/usr/local/certificates/key
101      37673  0.0  0.9 145588 36696 ?        S    16:43   0:00 nginx: master process /usr/bin/nginx -c /etc/nginx/nginx.conf
root     39499  0.0  0.1   8852  5184 ?        Ss   16:45   0:00 nginx: master process nginx -g daemon off;
root     39545  0.0  0.1   8852  5992 ?        Ss   16:45   0:00 nginx: master process nginx -g daemon off;
101      39591  0.0  0.0   9300  2416 ?        S    16:45   0:00 nginx: worker process
101      39592  0.0  0.0   9300  2416 ?        S    16:45   0:00 nginx: worker process
101      39593  0.0  0.0   9300  2416 ?        S    16:45   0:00 nginx: worker process
101      39594  0.0  0.0   9300  2416 ?        S    16:45   0:00 nginx: worker process
101      39597  0.0  0.0   9300  2540 ?        S    16:45   0:00 nginx: worker process
101      39598  0.0  0.0   9300  2540 ?        S    16:45   0:00 nginx: worker process
101      39599  0.0  0.0   9300  2540 ?        S    16:45   0:00 nginx: worker process
101      39600  0.0  0.0   9300  2540 ?        S    16:45   0:00 nginx: worker process
101      40735  0.0  1.0 157584 40988 ?        Sl   16:47   0:00 nginx: worker process
101      40736  0.0  1.0 157584 40980 ?        Sl   16:47   0:00 nginx: worker process
101      40737  0.0  1.0 157584 40972 ?        Sl   16:47   0:00 nginx: worker process
101      40738  0.0  1.0 157584 40976 ?        Sl   16:47   0:00 nginx: worker process
101      40739  0.0  0.7 143592 29480 ?        S    16:47   0:00 nginx: cache manager process
nderiko+ 48533  0.0  0.0 216072   820 pts/0    R+   17:04   0:00 grep --color=auto nginx

Versions

Client Version: v1.24.2
Kustomize Version: v4.5.4
Server Version: v1.24.2

I have no idea how to debug this further.

dinobagio commented 2 years ago

@goors maybe you can give us the output of kubectl describe for every item of your deployment, i.e. pod, service, ingress, ingress controller? Then the output of ip a / ip r and netstat -ltpn on your host, so that we can understand what your network setup is...
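For anyone following along, the requested diagnostics amount to roughly these commands (a sketch; the resource names are the ones that appear later in this thread):

```sh
# Workload side (default namespace)
kubectl describe deployment nginx-web
kubectl describe service example-service
kubectl describe ingress echo-ingress
# Controller side
kubectl describe service ingress-nginx-controller -n ingress-nginx
kubectl describe pod -n ingress-nginx -l app.kubernetes.io/component=controller
# Host networking
ip a
ip r
netstat -ltpn   # or: ss -ltpn
```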

goors commented 2 years ago

kubectl describe ingress echo-ingress

Name:             echo-ingress
Labels:           <none>
Namespace:        default
Address:          192.168.253.113
Ingress Class:    nginx
Default backend:  example-service:5000 (172.18.120.7:80,172.18.198.197:80,172.18.198.198:80)
Rules:
  Host                Path  Backends
  ----                ----  --------
  cryptotestkube.com  
                      /service0(/|$)(.*)   example-service:5000 (172.18.120.7:80,172.18.198.197:80,172.18.198.198:80)
Annotations:          nginx.ingress.kubernetes.io/rewrite-target: /$2
                      nginx.ingress.kubernetes.io/ssl-redirect: false
                      nginx.ingress.kubernetes.io/use-regex: true
Events:               <none>

kubectl describe service example-service

Name:              example-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-web
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.124.151
IPs:               10.96.124.151
Port:              <unset>  5000/TCP
TargetPort:        80/TCP
Endpoints:         172.18.120.7:80,172.18.198.197:80,172.18.198.198:80
Session Affinity:  None
Events:            <none>

kubectl describe deployment nginx-web

Name:                   nginx-web
Namespace:              default
CreationTimestamp:      Sat, 25 Jun 2022 16:45:13 -0400
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx-web
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx-web
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-web-7b8556bdd (3/3 replicas created)
Events:          <none>

kubectl describe pod nginx-web-7b8556bdd-ndvmx

Name:         nginx-web-7b8556bdd-ndvmx
Namespace:    default
Priority:     0
Node:         slchvdvcydtb001/192.168.253.113
Start Time:   Sat, 25 Jun 2022 16:45:14 -0400
Labels:       app=nginx-web
              pod-template-hash=7b8556bdd
Annotations:  cni.projectcalico.org/containerID: fc338d3d37c7e40b7c47fce6dc13cfbf218bb10dd336d3f5e5682cdfd4b1e247
              cni.projectcalico.org/podIP: 172.18.198.198/32
              cni.projectcalico.org/podIPs: 172.18.198.198/32
Status:       Running
IP:           172.18.198.198
IPs:
  IP:           172.18.198.198
Controlled By:  ReplicaSet/nginx-web-7b8556bdd
Containers:
  nginx:
    Container ID:   containerd://8bc3818e7597427a19d469d6a51b7bf4c89cc47de32619b49627eaa735b95072
    Image:          nginx
    Image ID:       docker.io/library/nginx@sha256:10f14ffa93f8dedf1057897b745e5ac72ac5655c299dade0aa434c71557697ea
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 25 Jun 2022 16:45:15 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cgtlq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-cgtlq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

kubectl describe service ingress-nginx-controller -n ingress-nginx

Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.2.1
Annotations:              <none>
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.121.54
IPs:                      10.108.121.54
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32211/TCP
Endpoints:                172.18.198.196:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30321/TCP
Endpoints:                172.18.198.196:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

kubectl describe pod ingress-nginx-controller-8fb79d7df-kkmqc -n ingress-nginx

Name:         ingress-nginx-controller-8fb79d7df-kkmqc
Namespace:    ingress-nginx
Priority:     0
Node:         slchvdvcydtb001/192.168.253.113
Start Time:   Sat, 25 Jun 2022 16:42:35 -0400
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              pod-template-hash=8fb79d7df
Annotations:  cni.projectcalico.org/containerID: d9eb37ec356843852d2f02695268bf7b24b326c4203d4b6a931044c730f0584e
              cni.projectcalico.org/podIP: 172.18.198.196/32
              cni.projectcalico.org/podIPs: 172.18.198.196/32
Status:       Running
IP:           172.18.198.196
IPs:
  IP:           172.18.198.196
Controlled By:  ReplicaSet/ingress-nginx-controller-8fb79d7df
Containers:
  controller:
    Container ID:  containerd://0e35ff6bc2698de6eb226f90e0003d316be82a4d1f4776e9b047eb06297441f8
    Image:         registry.k8s.io/ingress-nginx/controller:v1.2.1@sha256:5516d103a9c2ecc4f026efbd4b40662ce22dc1f824fb129ed121460aaa5c47f8
    Image ID:      registry.k8s.io/ingress-nginx/controller@sha256:5516d103a9c2ecc4f026efbd4b40662ce22dc1f824fb129ed121460aaa5c47f8
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
    State:          Running
      Started:      Sat, 25 Jun 2022 16:43:01 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-8fb79d7df-kkmqc (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r29nb (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-r29nb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:7e:a8:14 brd ff:ff:ff:ff:ff:ff
    inet 192.168.253.111/24 brd 192.168.253.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::24a8:cb1a:8559:16e8/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:60:15:4e:cc brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether d6:71:ce:73:cf:68 brd ff:ff:ff:ff:ff:ff
8: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP group default 
    link/ether 82:c0:a1:be:fa:7a brd ff:ff:ff:ff:ff:ff
9: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default 
    link/ether 16:93:62:c4:1e:9e brd ff:ff:ff:ff:ff:ff
10: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65535 qdisc noqueue master datapath state UNKNOWN group default qlen 1000
    link/ether 8e:6e:73:bc:8d:e5 brd ff:ff:ff:ff:ff:ff
11: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 172.18.4.64/32 scope global tunl0
       valid_lft forever preferred_lft forever
47: cali85ec3990d27@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-27d387eb-af59-2f21-bdaa-840ad73c496d
65: calia482be09767@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-4e066847-e7ad-5170-e79e-4807a44120a2
66: cali034076753d1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-04601792-93ba-0769-c09a-b51d7fb5ef19

ip r

default via 192.168.253.254 dev eth0 proto static metric 100 
blackhole 10.1.4.64/26 proto 80 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
blackhole 172.18.4.64/26 proto bird 
172.18.4.65 dev calia482be09767 scope link 
172.18.4.66 dev cali034076753d1 scope link 
192.168.253.0/24 dev eth0 proto kernel scope link src 192.168.253.111 metric 100 

kubectl get nodes

NAME              STATUS   ROLES           AGE    VERSION
slchvdvcybld001   Ready    control-plane   140m   v1.24.2
slchvdvcydtb001   Ready    <none>          136m   v1.24.2
slchvdvcytst001   Ready    <none>          137m   v1.24.2

I have no idea why those roles are <none> and why there is no master. Is that normal?

@dinobagio

goors commented 2 years ago

I can see

Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused

./calicoctl node status

Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+---------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |  INFO   |
+--------------+-------------------+-------+----------+---------+
| 10.47.0.0    | node-to-node mesh | start | 00:38:20 | Connect |
| 10.44.0.0    | node-to-node mesh | start | 00:38:20 | Connect |
+--------------+-------------------+-------+----------+---------+

IPv6 BGP status
No IPv6 peers found.

I have no clue how to fix that.

goors commented 2 years ago

That is resolved by

kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth.*

It says Established now and there are no more errors, but I still have that Scheduled for sync.
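For reference, the same setting can also be carried in the calico-node DaemonSet manifest instead of being patched with kubectl set env; a sketch of the relevant container env fragment, using exactly the variable from the command above:

```yaml
# Fragment of the calico-node DaemonSet container spec (kube-system namespace).
env:
  - name: IP_AUTODETECTION_METHOD
    value: "interface=eth.*"
```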

goors commented 2 years ago

This is the only error that is left in the ingress controller pod @dinobagio: MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found

kubectl get secrets --all-namespaces gives me

NAMESPACE              NAME                              TYPE                            DATA   AGE
ingress-nginx          ingress-nginx-admission           Opaque                          3      5m36s
kube-system            bootstrap-token-free5q            bootstrap.kubernetes.io/token   7      4h34m
kubernetes-dashboard   kubernetes-dashboard-certs        Opaque                          0      104m
kubernetes-dashboard   kubernetes-dashboard-csrf         Opaque                          1      104m
kubernetes-dashboard   kubernetes-dashboard-key-holder   Opaque                          2      104m
dinobagio commented 2 years ago

@goors maybe you should redeploy your ingress controller now that you have your 3 nodes ready, with, I hope, 3 roles (cp, etcd, w) among them?

goors commented 2 years ago

@dinobagio all I can do is kubeadm init, and I have no idea how to force a node to become a master or similar.

I did kubectl label node on each node with node-role.kubernetes.io/master=master and so on.

But I am not sure if that is the same thing.

dinobagio commented 2 years ago

@goors in this case how do you define what role you give to each of these 3 nodes? Why do you even want to use 3 nodes instead of one that would do all 3? Maybe that's why... I'm not an expert, unfortunately, and my issue was solved when I deleted my second ingress.

goors commented 2 years ago

This all looks to me like some firewall issue, but I am not sure what ports are needed, because I do not understand that part: when you spin up an ingress, what happens after that, and what that Scheduled for sync message means.

dinobagio commented 2 years ago

From my understanding, "waiting for sync" means that it's waiting for an ingress stream. But if you can't reach it, that explains why you see the message. What is weird in your case is that you don't have host ports in your describes; I've checked my cluster, and every describe shows that my nginx-controller has host ports (80, 443):

    Ports:       80/TCP, 443/TCP, 8443/TCP
    Host Ports:  80/TCP, 443/TCP, 0/TCP
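For comparison, host ports like those usually come from hostPort entries on the controller container; a hedged sketch of such a Deployment fragment (not taken from either cluster in this thread):

```yaml
# ingress-nginx controller container ports with hostPort bindings.
ports:
  - name: http
    containerPort: 80
    hostPort: 80
  - name: https
    containerPort: 443
    hostPort: 443
```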

If I were you I would redo the kubeadm init on one node first, then test the deployment, then add nodes.


goors commented 2 years ago

@dinobagio I tried; it's the same thing.

Ports:         80/TCP, 443/TCP, 8443/TCP
Host Ports:    0/TCP, 0/TCP, 0/TCP

sudo ss -tlpn gives me:

LISTEN                  0                       128                                            127.0.0.1:199                                          0.0.0.0:*                     users:(("snmpd",pid=853,fd=9))                            
LISTEN                  0                       2048                                           127.0.0.1:10248                                        0.0.0.0:*                     users:(("kubelet",pid=7926,fd=34))                        
LISTEN                  0                       2048                                           127.0.0.1:10249                                        0.0.0.0:*                     users:(("kube-proxy",pid=8046,fd=17))                     
LISTEN                  0                       2048                                           127.0.0.1:9099                                         0.0.0.0:*                     users:(("calico-node",pid=8505,fd=7))                     
LISTEN                  0                       2048                                     192.168.253.111:2379                                         0.0.0.0:*                     users:(("etcd",pid=7820,fd=9))                            
LISTEN                  0                       2048                                           127.0.0.1:2379                                         0.0.0.0:*                     users:(("etcd",pid=7820,fd=8))                            
LISTEN                  0                       2048                                     192.168.253.111:2380                                         0.0.0.0:*                     users:(("etcd",pid=7820,fd=7))                            
LISTEN                  0                       2048                                           127.0.0.1:2381                                         0.0.0.0:*                     users:(("etcd",pid=7820,fd=13))                           
LISTEN                  0                       2048                                           127.0.0.1:45805                                        0.0.0.0:*                     users:(("containerd",pid=22114,fd=9))                     
LISTEN                  0                       2048                                           127.0.0.1:10257                                        0.0.0.0:*                     users:(("kube-controller",pid=7815,fd=7))                 
LISTEN                  0                       2048                                           127.0.0.1:10259                                        0.0.0.0:*                     users:(("kube-scheduler",pid=7821,fd=7))                  
LISTEN                  0                       128                                              0.0.0.0:22                                           0.0.0.0:*                     users:(("sshd",pid=1142,fd=5))                            
LISTEN                  0                       2048                                                   *:10250                                              *:*                     users:(("kubelet",pid=7926,fd=12))                        
LISTEN                  0                       2048                                                   *:6443                                               *:*                     users:(("kube-apiserver",pid=7751,fd=7))                  
LISTEN                  0                       2048                                                   *:10256                                              *:*                     users:(("kube-proxy",pid=8046,fd=12))                     
LISTEN                  0                       128                                                 [::]:22                                              [::]:*                     users:(("sshd",pid=1142,fd=7))                            
LISTEN                  0                       128                                                    *:9090                                               *:*                     users:(("systemd",pid=1,fd=57)) 

I don't see 80 or 443...

Do you use bare metal as well? If yes, what version of Calico and of the ingress controller did you use?

I used v3.23/manifests/calico.yaml and this baremetal/1.23/deploy.yaml for the controller.

goors commented 2 years ago

I have changed the ingress controller config.

I have set hostNetwork: true in the Deployment part of the controller config, as shown in the sketch below.
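A minimal sketch of that change in the controller Deployment's pod template (only the added fields are shown; dnsPolicy is a common companion setting and an assumption here, not something confirmed in this thread):

```yaml
# ingress-nginx controller Deployment, pod template spec fragment.
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet  # assumption: usually set together with hostNetwork
```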

Here is a pastebin if anyone has this same problem: https://pastebin.com/uwwtUtnU.

So here is how it is:

  1. all my Services are of type ClusterIP
  2. my ingress is hitting the service as it should
  3. all requests/responses go through the ingress.

@dinobagio thank you.

goors commented 2 years ago

Also, there is a warning:

MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found

This cannot be correct; maybe the order of resources in the YAML file is wrong, since there is an ingress-nginx-admission secret in the ingress-nginx namespace.

dinobagio commented 2 years ago

@goors excellent news... glad I could help. By the way, I don't do kubeadm; I have only done RKE clusters up to now. It's all done at once... seems much easier than kubeadm from what I can see.

dinobagio commented 2 years ago

It might be a warning that appeared earlier in the process but is no longer current. A log of the error, somehow.


mimani68 commented 1 year ago

Facing the same issue, any help?!

kahirokunn commented 1 year ago

I have the same issue.

monaka commented 1 year ago

Just my case: I got a similar issue on AKS 1.24. I guess it is a side effect of https://github.com/kubernetes/ingress-nginx/issues/9601. The issue here was fixed after adding an annotation to svc/ingress-nginx-controller.

It may be caused by health probes between the load balancer and svc/ingress-nginx-controller.

kkorniszuk commented 1 year ago

Looks like I'm facing the same issue:

  Normal  Sync    10m (x7 over 59m)  nginx-ingress-controller  Scheduled for sync

This is the only ingress on my Kubernetes node; no other could interfere with it. Any ideas?

AshtarCodes commented 1 year ago

We are having a similar issue and it seems to be caused by different ingresses using the same host name. Maybe check whether you have a duplicate ingress resource defined in another namespace.

Thanks for this comment. This was exactly my issue. I had incorrectly deleted my previous deployment in a different namespace and didn't realize the ingress object was left over. This issue was bothering me for the past 4 days.
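One quick way to spot a leftover Ingress using the same host in another namespace is to list every Ingress cluster-wide (a sketch using plain kubectl):

```sh
# List every Ingress in every namespace together with its hosts.
kubectl get ingress --all-namespaces
# Or print namespace, name and hosts only, for easier grepping:
kubectl get ingress -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.rules[*].host}{"\n"}{end}'
```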

luatnd commented 1 year ago

I'm using minikube for local dev and today I got the same issue. Tried all the suggestions above with no luck!

Just a hello-world ingress: https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/#create-a-second-deployment

Trying to access it from the host machine, I got a timeout:

curl -v --resolve dev.mydomain.com:443:"$(minikube ip)" https://dev.mydomain.com
curl -v --resolve dev.mydomain.com:80:"$(minikube ip)" http://dev.mydomain.com

But running curl from inside ingress-nginx-controller-* worked:

# inside ingress-nginx-controller-*
curl -v --resolve dev.mydomain.com:80:127.0.0.1 http://dev.mydomain.com

I'm sure that the minikube tunnel is working properly, because I also set up another subdomain in the past and it is working perfectly.

Still looking for a solution; hopefully after a long sleep everything will be fine.

filipmilo commented 9 months ago

+1 on this

ArtursTrubacs commented 9 months ago

Also the same problem for me; also stuck:

```
k get ing
NAME                  CLASS   HOSTS                     ADDRESS                                                PORTS     AGE
vaultwarden-ingress   nginx   vaultwarden-dns-name.lv   10.0.191.131,10.0.191.132,10.0.191.133,10.0.191.134   80, 443   18m

k describe ing vaultwarden-ingress
Name:             vaultwarden-ingress
Labels:           <none>
Namespace:        vaultwarden
Address:          10.0.191.131,10.0.191.132,10.0.191.133,10.0.191.134
Ingress Class:    nginx
Default backend:  <default>
TLS:
  vaultwarden-tls-secret terminates vaultwarden-dns-name.lv
Rules:
  Host                           Path  Backends
  ----                           ----  --------
  vaultwarden-t.monta.energo.lv
                                 /   vaultwarden-service:80 (10.42.2.141:80)
Annotations:  field.cattle.io/publicEndpoints:
                [{"addresses":["10.0.191.131"],"port":443,"protocol":"HTTPS","serviceName":"vaultwarden:vaultwarden-service","ingressName":"vaultwarden:va...
              nginx.ingress.kubernetes.io/rewrite-target: /
              nginx.ingress.kubernetes.io/ssl-redirect: true
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    18m (x3 over 18m)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    18m (x3 over 18m)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    18m (x3 over 18m)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    18m (x3 over 18m)  nginx-ingress-controller  Scheduled for sync
```

ollyde commented 8 months ago

Did anyone figure this out?

We have 30+ ingresses for many services; this is the first time I've seen this error. Don't know how to resolve it. Cannot deploy APIs :-D

atljoseph commented 4 months ago

Same issue. I added an ingress that was just like all the rest... and this one is stuck in Scheduled for sync.

longwuyuan commented 4 months ago

There is no data posted here that can be analyzed by others. I created a new cluster on kind and deployed the controller and a sample app without problems. So a simple default config and a simple defaults-based ingress works.

Create a new issue and answer the questions asked in the new bug report template. That way others can analyze the data posted in the issue description.