kubernetes / ingress-nginx

Ingress-NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

deploy ingress-nginx on kind with error 'iptables Couldn't load match `multiport':No such file or directory' #7060

Closed: c0nstantien closed this issue 3 years ago

c0nstantien commented 3 years ago

I tried to deploy ingress-nginx on my kind cluster, following the kind-ingress guide, but the pod named ingress-nginx-controller fails to start. The error is as follows:

$ kubectl --namespace ingress-nginx get all
NAME                                            READY   STATUS              RESTARTS   AGE
pod/ingress-nginx-admission-create-7l6np        0/1     Completed           0          3m16s
pod/ingress-nginx-admission-patch-vwrsg         0/1     Completed           2          3m16s
pod/ingress-nginx-controller-77758b5777-gm2m8   0/1     ContainerCreating   0          3m16s

NAME                                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.96.20.221   <none>        80:30274/TCP,443:32680/TCP   3m17s
service/ingress-nginx-controller-admission   ClusterIP   10.96.48.206   <none>        443/TCP                      3m17s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   0/1     1            0           3m17s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-77758b5777   1         1         0       3m17s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           31s        3m16s
job.batch/ingress-nginx-admission-patch    1/1           46s        3m16s

$ kubectl --namespace ingress-nginx describe pod ingress-nginx-controller-77758b5777-gm2m8
Name:           ingress-nginx-controller-77758b5777-gm2m8
Namespace:      ingress-nginx
Priority:       0
Node:           kind-control-plane/172.18.0.2
Start Time:     Tue, 20 Apr 2021 11:03:31 +0800
Labels:         app.kubernetes.io/component=controller
                app.kubernetes.io/instance=ingress-nginx
                app.kubernetes.io/name=ingress-nginx
                pod-template-hash=77758b5777
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/ingress-nginx-controller-77758b5777
Containers:
  controller:
    Container ID:
    Image:         k8s.gcr.io/ingress-nginx/controller:v0.45.0@sha256:c4390c53f348c3bd4e60a5dd6a11c35799ae78c49388090140b9d72ccede1755
    Image ID:
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    80/TCP, 443/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --election-id=ingress-controller-leader
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --publish-status-address=localhost
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-77758b5777-gm2m8 (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-token-c2h5j (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  ingress-nginx-token-c2h5j:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-token-c2h5j
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  ingress-ready=true
                 kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                From               Message
  ----     ------                  ----               ----               -------
  Normal   Scheduled               75s                default-scheduler  Successfully assigned ingress-nginx/ingress-nginx-controller-77758b5777-gm2m8 to kind-control-plane
  Warning  FailedMount             60s (x6 over 75s)  kubelet            MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
  Warning  FailedCreatePodSandBox  43s                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "35e29c109aa02ba59fe1392111e8c36fe96721b086eb16ca24c95a81223c4163": unable to setup DNAT: running [/usr/sbin/iptables -t nat -C CNI-HOSTPORT-DNAT -m comment --comment dnat name: "kindnet" id: "35e29c109aa02ba59fe1392111e8c36fe96721b086eb16ca24c95a81223c4163" -m multiport -p tcp --destination-ports 80,443 -j CNI-DN-3ec3ca8e0d1528137d032 --wait]: exit status 2: iptables v1.8.5 (legacy): Couldn't load match `multiport':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.
  Warning  FailedCreatePodSandBox  28s  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0b4b6068041e2fc4668d74277ef29eb27427bf89fa32723228b3d16135e53694": unable to setup DNAT: running [/usr/sbin/iptables -t nat -C CNI-HOSTPORT-DNAT -m comment --comment dnat name: "kindnet" id: "0b4b6068041e2fc4668d74277ef29eb27427bf89fa32723228b3d16135e53694" -m multiport -p tcp --destination-ports 80,443 -j CNI-DN-84ff509f6d974a685046e --wait]: exit status 2: iptables v1.8.5 (legacy): Couldn't load match `multiport':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.
  Warning  FailedCreatePodSandBox  13s  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4106961fd03d1be2e1a9dee2aee46b267bed78fb709aee9044423cbb2c6f8a08": unable to setup DNAT: running [/usr/sbin/iptables -t nat -C CNI-HOSTPORT-DNAT -m comment --comment dnat name: "kindnet" id: "4106961fd03d1be2e1a9dee2aee46b267bed78fb709aee9044423cbb2c6f8a08" -m multiport -p tcp --destination-ports 80,443 -j CNI-DN-185ed44625bfb715a1bcf --wait]: exit status 2: iptables v1.8.5 (legacy): Couldn't load match `multiport':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.
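
For isolating this, the failing match can be probed directly on the kind node, outside the CNI code path (a quick check):

$ docker exec kind-control-plane iptables -m multiport --help
# prints the multiport match options if the extension loads,
# or the same "Couldn't load match `multiport'" error if it does not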

Has anyone encountered a similar situation? Thanks.

/triage support

k8s-ci-robot commented 3 years ago

@wen11497110: The label(s) triage/support cannot be applied, because the repository doesn't have them.

In response to [this](https://github.com/kubernetes/ingress-nginx/issues/7060):

> I tried to deploy ingress-nginx on my kind cluster, following the kind-ingress guide, but the pod named ingress-nginx-controller fails to start. […]

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
longwuyuan commented 3 years ago

Explain where you ran kind, with information like:

  • uname -a
  • cat /etc/os-release
  • ifconfig -a
  • ufw/firewalld/selinux/apparmor/etc. details
  • etc.

c0nstantien commented 3 years ago

> Explain where you ran kind, with information like:
>
>   • uname -a
>   • cat /etc/os-release
>   • ifconfig -a
>   • ufw/firewalld/selinux/apparmor/etc. details
>   • etc.

kind runs in Docker on Arch Linux, with firewalld/selinux disabled.

$ uname -a
Linux wen-arch 5.11.13-arch1-1 #1 SMP PREEMPT Sat, 10 Apr 2021 20:47:14 +0000 x86_64 GNU/Linux

$ docker version
Client:
 Version:           20.10.6
 API version:       1.41
 Go version:        go1.16.3
 Git commit:        370c28948e
 Built:             Mon Apr 12 14:10:41 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.5
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16
  Git commit:       363e9a88a1
  Built:            Wed Mar  3 16:51:28 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.4.4
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
longwuyuan commented 3 years ago

You did not show ifconfig -a. Please try to run a container using the nginx:alpine image that forwards hostPort 80 to containerPort 80. Please make sure you show all command outputs, logs, etc. Please attempt this after dmesg -c, then also show dmesg after attempting to run the docker container as suggested above.
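
Spelled out, that test could look like this (a sketch; the container name test0 is arbitrary):

$ sudo dmesg -c                                     # clear the kernel ring buffer first
$ docker run -d --name test0 -p 80:80 nginx:alpine
$ dmesg                                             # look for new netfilter/iptables errors
$ curl -I localhost                                 # nginx should answer on hostPort 80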

Thanks, Long


c0nstantien commented 3 years ago

> You did not show ifconfig -a. Please try to run a container using the nginx:alpine image that forwards hostPort 80 to containerPort 80. […]

Thank you for your reply. There are other containers running in my Docker and they all work normally, so I don't think there is a problem with my network.

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:9c:c2:00 brd ff:ff:ff:ff:ff:ff
    inet 172.26.107.213/20 brd 172.26.111.255 scope global dynamic noprefixroute eth0
       valid_lft 77254sec preferred_lft 77254sec
    inet6 fe80::f704:c9d2:2983:f281/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:9c:c2:02 brd ff:ff:ff:ff:ff:ff
    inet 172.30.0.101/16 brd 172.30.255.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::b826:fc2d:2724:b483/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:5d:99:c5:5c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:5dff:fe99:c55c/64 scope link
       valid_lft forever preferred_lft forever
6: veth1bb46ec@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether aa:da:22:3e:ba:6b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::a8da:22ff:fe3e:ba6b/64 scope link
       valid_lft forever preferred_lft forever
7: br-f404d84e8300: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:6c:4a:8d:25 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-f404d84e8300
       valid_lft forever preferred_lft forever
    inet6 fc00:f853:ccd:e793::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::42:6cff:fe4a:8d25/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::1/64 scope link
       valid_lft forever preferred_lft forever
41: vethd62f702@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f404d84e8300 state UP group default
    link/ether 56:ae:70:ee:14:96 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::54ae:70ff:feee:1496/64 scope link
       valid_lft forever preferred_lft forever

kind network:

$ docker network inspect kind
[
    {
        "Name": "kind",
        "Id": "f404d84e8300692028d610e584d671ca25164134404f8c560388bd32ba3c3ead",
        "Created": "2021-04-12T11:33:18.928978809+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                },
                {
                    "Subnet": "fc00:f853:ccd:e793::/64"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "53ff5c26a99dce31c51edfdaa7caa4b7b0e7c26ecc7dd50dd22fb066dd72dea3": {
                "Name": "kind-control-plane",
                "EndpointID": "11071d95d28c937d6a4580336a8e02dfe6c169e9ef53b38c7871a1c99dda6de7",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": "fc00:f853:ccd:e793::2/64"
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_ip_masquerade": "true"
        },
        "Labels": {}
    }
]

kind container network:

root@kind-control-plane:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: veth31e04254@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 2e:fb:fc:26:c6:71 brd ff:ff:ff:ff:ff:ff link-netns cni-2af05b3b-818d-fc48-ba05-e62fb3fa1605
    inet 10.244.0.1/32 scope global veth31e04254
       valid_lft forever preferred_lft forever
3: vethe905b52a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 4e:c8:99:16:61:79 brd ff:ff:ff:ff:ff:ff link-netns cni-6be11e62-2a85-1f5f-4f64-d9d966d0dcef
    inet 10.244.0.1/32 scope global vethe905b52a
       valid_lft forever preferred_lft forever
4: veth515adf48@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether ee:7d:50:ed:73:21 brd ff:ff:ff:ff:ff:ff link-netns cni-9a85ed22-467f-769a-956e-c511863ca9af
    inet 10.244.0.1/32 scope global veth515adf48
       valid_lft forever preferred_lft forever
40: eth0@if41: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fc00:f853:ccd:e793::2/64 scope global nodad
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe12:2/64 scope link
       valid_lft forever preferred_lft forever

dmesg and docker logs kind-control-plane show no errors.

longwuyuan commented 3 years ago

sudo netstat -lntp

Thanks, Long


longwuyuan commented 3 years ago

The error message includes "unable to setup network" and "unable to setup DNAT". There is no indication as to why, or what could be related.

That requires info gathering. While it's OK to assume there is no problem with the network since other containers are running, it does not harm you in any way to attempt the docker run I suggested, for the purpose of information gathering. Can you provide that info exactly as I suggested, plus one more:

kubectl get all,nodes -A -o wide
dmesg -c
docker ps
ifconfig -a
docker run -d --name test0 -p 80:80 -p 443:443 nginx:alpine
dmesg
curl localhost

Obviously, this is to rule out issues with hostPort 80 and 443 binding to a container outside kind.
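
If the host-port binding works, that last curl should return the nginx welcome page; with -I it would show headers along the lines of (version will vary):

$ curl -sI localhost
HTTP/1.1 200 OK
Server: nginx/1.19.10
Content-Type: text/html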

c0nstantien commented 3 years ago

> That requires info gathering. […] Obviously, this is to rule out issues with hostPort 80 and 443 binding to a container outside kind.

The kind-ingress tutorial guides me to bind additional ports 80 and 443 when creating the kind container:

extraPortMappings allow the local host to make requests to the Ingress controller over ports 80/443; node-labels only allow the ingress controller to run on specific node(s) matching the label selector.

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF
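
Whether those mappings took effect can be confirmed from the host with docker port; the output should look roughly like (API-server port mapping elided):

$ docker port kind-control-plane
80/tcp -> 0.0.0.0:80
443/tcp -> 0.0.0.0:443
...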

host network:

$ sudo ss -lntp
State     Recv-Q    Send-Q       Local Address:Port        Peer Address:Port   Process
LISTEN    0         4096             127.0.0.1:45935            0.0.0.0:*       users:(("docker-proxy",pid=2856,fd=4))
LISTEN    0         4096               0.0.0.0:80               0.0.0.0:*       users:(("docker-proxy",pid=2885,fd=4))
LISTEN    0         32                 0.0.0.0:53               0.0.0.0:*       users:(("dnsmasq",pid=286,fd=5))
LISTEN    0         128                0.0.0.0:22               0.0.0.0:*       users:(("sshd",pid=278,fd=3))
LISTEN    0         4096               0.0.0.0:443              0.0.0.0:*       users:(("docker-proxy",pid=2870,fd=4))
LISTEN    0         4096               0.0.0.0:1086             0.0.0.0:*       users:(("docker-proxy",pid=505,fd=4))
LISTEN    0         32                    [::]:53                  [::]:*       users:(("dnsmasq",pid=286,fd=7))
LISTEN    0         128                   [::]:22                  [::]:*       users:(("sshd",pid=278,fd=4))
LISTEN    0         4096                  [::]:1086                [::]:*       users:(("docker-proxy",pid=515,fd=4))

c0nstantien commented 3 years ago

@longwuyuan thank you very much for your help 😀 I replaced the image address of the ingress-nginx-controller in the deployment YAML file with one from a mirror registry, and after re-applying, the ingress-nginx-controller runs normally. imagePullPolicy: IfNotPresent is used in the source deployment file, and my environment cannot access k8s.gcr.io, so I think the problem may have been caused by the use of a local image.
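
For anyone else who cannot pull from k8s.gcr.io, one way to repoint the deployment at a reachable mirror is kubectl set image (a sketch; registry.example.com is a placeholder for your mirror, and the container in the stock manifest is named controller):

$ kubectl -n ingress-nginx set image deployment/ingress-nginx-controller \
    controller=registry.example.com/ingress-nginx/controller:v0.45.0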