squat / kilo

Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes (k8s + wg = kg)
https://kilo.squat.ai
Apache License 2.0

Network connectivity issue on RKE #128

Open 3rmack opened 3 years ago

3rmack commented 3 years ago

I'm facing a network connectivity issue between the pod subnets on the k8s nodes: cluster resources are reachable only from within a single node.

Initial setup: a 2-node RKE k8s cluster. The nodes are placed in different availability zones and have only dedicated external IP addresses (no private networks attached).

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.5 LTS
Release:    18.04
Codename:   bionic
$ docker version
Client: Docker Engine - Community
 Version:           19.03.15
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        99e3ed8919
 Built:             Sat Jan 30 03:16:51 2021
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.15
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       99e3ed8919
  Built:            Sat Jan 30 03:15:20 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

This is how the freshly installed k8s cluster looks (before any CNI plugin is applied):

$ kubectl get node -o wide
NAME             STATUS     ROLES                      AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
159.100.247.37   NotReady   controlplane,etcd,worker   11m   v1.19.6   159.100.247.37   <none>        Ubuntu 18.04.5 LTS   4.15.0-136-generic   docker://19.3.15
89.145.166.73    NotReady   controlplane,etcd,worker   11m   v1.19.6   89.145.166.73    <none>        Ubuntu 18.04.5 LTS   4.15.0-136-generic   docker://19.3.15

As there is no Kilo manifest for RKE, I used kilo-k3s.yaml (it was also mentioned in an open issue here). After applying it, the nodes become Ready, the coredns pods come up, the WireGuard interfaces are created on the nodes, and the routes are configured.
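For reference, the manifest was applied with something like the command below (the raw URL and branch are my assumption; only the file name kilo-k3s.yaml is taken from the Kilo repository):

$ kubectl apply -f https://raw.githubusercontent.com/squat/kilo/master/manifests/kilo-k3s.yaml

Here are some outputs from node1: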

# wg
interface: kilo0
  public key: +SlpE7J0sq61ZJCbArlpYRS6BGYzy6qD0x+jZ618W08=
  private key: (hidden)
  listening port: 51820

peer: xX3yRb7vnH9h2mJ6PqCFUNWnosUjIzeR8KawQlxgym4=
  endpoint: 159.100.247.37:51820
  allowed ips: 10.42.0.0/24, 10.4.0.1/32
  latest handshake: 14 seconds ago
  transfer: 124 B received, 5.02 KiB sent
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 06:ed:94:00:08:e8 brd ff:ff:ff:ff:ff:ff
    inet 89.145.166.73/23 brd 89.145.167.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::4ed:94ff:fe00:8e8/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:47:b4:4b:13 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:47ff:feb4:4b13/64 scope link
       valid_lft forever preferred_lft forever
36: kilo0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.4.0.2/16 brd 10.4.255.255 scope global kilo0
       valid_lft forever preferred_lft forever
37: kube-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 2a:ae:38:3f:fa:1c brd ff:ff:ff:ff:ff:ff
    inet 10.42.1.1/24 scope global kube-bridge
       valid_lft forever preferred_lft forever
    inet6 fe80::9cf6:93ff:feac:c751/64 scope link
       valid_lft forever preferred_lft forever
40: vethd4c82b9f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master kube-bridge state UP group default
    link/ether 32:73:51:9c:c8:13 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::3073:51ff:fe9c:c813/64 scope link
       valid_lft forever preferred_lft forever
41: vethb3e207e9@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master kube-bridge state UP group default
    link/ether 92:72:19:b0:ab:35 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::9072:19ff:feb0:ab35/64 scope link
       valid_lft forever preferred_lft forever
42: vethc0e85852@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master kube-bridge state UP group default
    link/ether 2a:ae:38:3f:fa:1c brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::28ae:38ff:fe3f:fa1c/64 scope link
       valid_lft forever preferred_lft forever
# ip r
default via 89.145.166.1 dev eth0 proto dhcp src 89.145.166.73 metric 100
10.4.0.0/16 dev kilo0 proto kernel scope link src 10.4.0.2
10.42.0.0/24 via 10.4.0.1 dev kilo0 proto static onlink
10.42.1.0/24 dev kube-bridge proto kernel scope link src 10.42.1.1
89.145.166.0/23 dev eth0 proto kernel scope link src 89.145.166.73
89.145.166.1 dev eth0 proto dhcp scope link src 89.145.166.73 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
# cat /etc/cni/net.d/10-kilo.conflist
{"cniVersion":"0.3.1","name":"kilo","plugins":[{"bridge":"kube-bridge","forceAddress":true,"ipam":{"ranges":[[{"subnet":"10.42.1.0/24"}]],"type":"host-local"},"isDefaultGateway":true,"mtu":1450,"name":"kubernetes","type":"bridge"},{"capabilities":{"portMappings":true},"snat":true,"type":"portmap"}]}

node2:

# wg
interface: kilo0
  public key: xX3yRb7vnH9h2mJ6PqCFUNWnosUjIzeR8KawQlxgym4=
  private key: (hidden)
  listening port: 51820

peer: +SlpE7J0sq61ZJCbArlpYRS6BGYzy6qD0x+jZ618W08=
  endpoint: 89.145.166.73:51820
  allowed ips: 10.42.1.0/24, 10.4.0.2/32
  latest handshake: 16 seconds ago
  transfer: 5.49 KiB received, 124 B sent
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 06:85:8c:00:08:76 brd ff:ff:ff:ff:ff:ff
    inet 159.100.247.37/23 brd 159.100.247.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::485:8cff:fe00:876/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ef:9a:47:ec brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:efff:fe9a:47ec/64 scope link
       valid_lft forever preferred_lft forever
36: kilo0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.4.0.1/16 brd 10.4.255.255 scope global kilo0
       valid_lft forever preferred_lft forever
37: kube-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 76:ce:99:2d:2c:04 brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.1/24 scope global kube-bridge
       valid_lft forever preferred_lft forever
    inet6 fe80::40e5:22ff:fe4d:355c/64 scope link
       valid_lft forever preferred_lft forever
41: veth1e6e4c52@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master kube-bridge state UP group default
    link/ether 76:ce:99:2d:2c:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::74ce:99ff:fe2d:2c04/64 scope link
       valid_lft forever preferred_lft forever
42: veth6674dd51@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master kube-bridge state UP group default
    link/ether ea:92:0a:4d:de:eb brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::e892:aff:fe4d:deeb/64 scope link
       valid_lft forever preferred_lft forever
# ip r
default via 159.100.246.1 dev eth0 proto dhcp src 159.100.247.37 metric 100
10.4.0.0/16 dev kilo0 proto kernel scope link src 10.4.0.1
10.42.0.0/24 dev kube-bridge proto kernel scope link src 10.42.0.1
10.42.1.0/24 via 10.4.0.2 dev kilo0 proto static onlink
159.100.246.0/23 dev eth0 proto kernel scope link src 159.100.247.37
159.100.246.1 dev eth0 proto dhcp scope link src 159.100.247.37 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
# cat /etc/cni/net.d/10-kilo.conflist
{"cniVersion":"0.3.1","name":"kilo","plugins":[{"bridge":"kube-bridge","forceAddress":true,"ipam":{"ranges":[[{"subnet":"10.42.0.0/24"}]],"type":"host-local"},"isDefaultGateway":true,"mtu":1450,"name":"kubernetes","type":"bridge"},{"capabilities":{"portMappings":true},"snat":true,"type":"portmap"}]}

Pods:

$ kubectl get po -A -o wide
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
ingress-nginx   default-http-backend-65dd5949d9-gmdfn     1/1     Running     0          43m   10.42.1.4        89.145.166.73    <none>           <none>
ingress-nginx   nginx-ingress-controller-89d5q            1/1     Running     0          43m   159.100.247.37   159.100.247.37   <none>           <none>
ingress-nginx   nginx-ingress-controller-zphwr            1/1     Running     0          43m   89.145.166.73    89.145.166.73    <none>           <none>
kube-system     coredns-6f85d5fb88-sr9nw                  1/1     Running     0          43m   10.42.0.4        159.100.247.37   <none>           <none>
kube-system     coredns-6f85d5fb88-zw2kg                  1/1     Running     0          20m   10.42.1.5        89.145.166.73    <none>           <none>
kube-system     coredns-autoscaler-79599b9dc6-7nhc7       1/1     Running     0          43m   10.42.1.3        89.145.166.73    <none>           <none>
kube-system     kilo-5wqr5                                1/1     Running     0          20m   89.145.166.73    89.145.166.73    <none>           <none>
kube-system     kilo-lp67n                                1/1     Running     0          20m   159.100.247.37   159.100.247.37   <none>           <none>
kube-system     metrics-server-8449844bf-c4knp            1/1     Running     0          43m   10.42.0.3        159.100.247.37   <none>           <none>
kube-system     rke-coredns-addon-deploy-job-vlt5l        0/1     Completed   0          43m   89.145.166.73    89.145.166.73    <none>           <none>
kube-system     rke-ingress-controller-deploy-job-vffln   0/1     Completed   0          43m   89.145.166.73    89.145.166.73    <none>           <none>
kube-system     rke-metrics-addon-deploy-job-xz57c        0/1     Completed   0          43m   89.145.166.73    89.145.166.73    <none>           <none>

kilo logs:

$ kubectl -n kube-system logs kilo-5wqr5
{"caller":"mesh.go:96","component":"kilo","level":"warn","msg":"no private key found on disk; generating one now","ts":"2021-03-03T08:29:06.92932142Z"}
{"caller":"main.go:221","msg":"Starting Kilo network mesh '2b959f7020a8dbb6b32860965ed4dbfd0dd11215'.","ts":"2021-03-03T08:29:06.941907053Z"}
{"caller":"cni.go:60","component":"kilo","err":"failed to read IPAM config from CNI config list file: invalid CIDR address: usePodCidr","level":"warn","msg":"failed to get CIDR from CNI file; overwriting it","ts":"2021-03-03T08:29:07.042344866Z"}
{"caller":"cni.go:68","component":"kilo","level":"info","msg":"CIDR in CNI file is empty","ts":"2021-03-03T08:29:07.042379524Z"}
{"CIDR":"10.42.1.0/24","caller":"cni.go:73","component":"kilo","level":"info","msg":"setting CIDR in CNI file","ts":"2021-03-03T08:29:07.042393076Z"}
E0303 08:29:07.103555       1 reflector.go:126] pkg/k8s/backend.go:407: Failed to list *v1alpha1.Peer: the server could not find the requested resource (get peers.kilo.squat.ai)
{"caller":"mesh.go:532","component":"kilo","level":"info","msg":"WireGuard configurations are different","ts":"2021-03-03T08:29:08.587825232Z"}
{"caller":"mesh.go:301","component":"kilo","event":"update","level":"info","node":{"Endpoint":{"DNS":"","IP":"159.100.247.37","Port":51820},"Key":"eFgzeVJiN3ZuSDloMm1KNlBxQ0ZVTldub3NVakl6ZVI4S2F3UWx4Z3ltND0=","InternalIP":null,"LastSeen":1614760147,"Leader":false,"Location":"","Name":"159.100.247.37","PersistentKeepalive":0,"Subnet":{"IP":"10.42.0.0","Mask":"////AA=="},"WireGuardIP":null},"ts":"2021-03-03T08:29:08.616413782Z"}
{"caller":"mesh.go:532","component":"kilo","level":"info","msg":"WireGuard configurations are different","ts":"2021-03-03T08:29:08.678697039Z"}
{"caller":"mesh.go:301","component":"kilo","event":"update","level":"info","node":{"Endpoint":{"DNS":"","IP":"159.100.247.37","Port":51820},"Key":"eFgzeVJiN3ZuSDloMm1KNlBxQ0ZVTldub3NVakl6ZVI4S2F3UWx4Z3ltND0=","InternalIP":null,"LastSeen":1614760147,"Leader":false,"Location":"","Name":"159.100.247.37","PersistentKeepalive":0,"Subnet":{"IP":"10.42.0.0","Mask":"////AA=="},"WireGuardIP":{"IP":"10.4.0.1","Mask":"//8AAA=="}},"ts":"2021-03-03T08:29:08.696034165Z"}
$ kubectl -n kube-system logs kilo-lp67n
{"caller":"mesh.go:96","component":"kilo","level":"warn","msg":"no private key found on disk; generating one now","ts":"2021-03-03T08:29:07.312678183Z"}
{"caller":"main.go:221","msg":"Starting Kilo network mesh '2b959f7020a8dbb6b32860965ed4dbfd0dd11215'.","ts":"2021-03-03T08:29:07.330771329Z"}
{"caller":"cni.go:60","component":"kilo","err":"failed to read IPAM config from CNI config list file: invalid CIDR address: usePodCidr","level":"warn","msg":"failed to get CIDR from CNI file; overwriting it","ts":"2021-03-03T08:29:07.431390019Z"}
{"caller":"cni.go:68","component":"kilo","level":"info","msg":"CIDR in CNI file is empty","ts":"2021-03-03T08:29:07.431433212Z"}
{"CIDR":"10.42.0.0/24","caller":"cni.go:73","component":"kilo","level":"info","msg":"setting CIDR in CNI file","ts":"2021-03-03T08:29:07.431446132Z"}
{"caller":"mesh.go:532","component":"kilo","level":"info","msg":"WireGuard configurations are different","ts":"2021-03-03T08:29:07.763066546Z"}
{"caller":"mesh.go:301","component":"kilo","event":"update","level":"info","node":{"Endpoint":{"DNS":"","IP":"89.145.166.73","Port":51820},"Key":"K1NscEU3SjBzcTYxWkpDYkFybHBZUlM2QkdZenk2cUQweCtqWjYxOFcwOD0=","InternalIP":null,"LastSeen":1614760148,"Leader":false,"Location":"","Name":"89.145.166.73","PersistentKeepalive":0,"Subnet":{"IP":"10.42.1.0","Mask":"////AA=="},"WireGuardIP":null},"ts":"2021-03-03T08:29:08.324360744Z"}
{"caller":"mesh.go:532","component":"kilo","level":"info","msg":"WireGuard configurations are different","ts":"2021-03-03T08:29:08.428973858Z"}
{"caller":"mesh.go:301","component":"kilo","event":"update","level":"info","node":{"Endpoint":{"DNS":"","IP":"89.145.166.73","Port":51820},"Key":"K1NscEU3SjBzcTYxWkpDYkFybHBZUlM2QkdZenk2cUQweCtqWjYxOFcwOD0=","InternalIP":null,"LastSeen":1614760148,"Leader":false,"Location":"","Name":"89.145.166.73","PersistentKeepalive":0,"Subnet":{"IP":"10.42.1.0","Mask":"////AA=="},"WireGuardIP":{"IP":"10.4.0.2","Mask":"//8AAA=="}},"ts":"2021-03-03T08:29:08.752187833Z"}

When I try to ping pods from another pod or from a node itself, only the pods hosted on the same node are reachable. For example, trying to ping the coredns pods from node2:

root@node2:# ping 10.42.1.5
PING 10.42.1.5 (10.42.1.5) 56(84) bytes of data.
^C
--- 10.42.1.5 ping statistics ---
59 packets transmitted, 0 received, 100% packet loss, time 59371ms
root@node2:# ping 10.42.0.4
PING 10.42.0.4 (10.42.0.4) 56(84) bytes of data.
64 bytes from 10.42.0.4: icmp_seq=1 ttl=64 time=0.041 ms
64 bytes from 10.42.0.4: icmp_seq=2 ttl=64 time=0.054 ms
64 bytes from 10.42.0.4: icmp_seq=3 ttl=64 time=0.039 ms
64 bytes from 10.42.0.4: icmp_seq=4 ttl=64 time=0.055 ms
64 bytes from 10.42.0.4: icmp_seq=5 ttl=64 time=0.051 ms
64 bytes from 10.42.0.4: icmp_seq=6 ttl=64 time=0.118 ms
64 bytes from 10.42.0.4: icmp_seq=7 ttl=64 time=0.086 ms
^C
--- 10.42.0.4 ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6123ms
rtt min/avg/max/mdev = 0.039/0.063/0.118/0.027 ms
squat commented 3 years ago

Thanks for the detailed report @3rmack. It occurs to me that there might be an issue with the default policy of the FORWARD chain in the filter table of iptables on your nodes. Can you please share the output of iptables-save?
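A quick way to gather exactly that on each node would be something like the following (standard iptables commands, run as root; the output file name is only an example):

# iptables -S FORWARD | head -n1                    # default policy, e.g. "-P FORWARD DROP" or "-P FORWARD ACCEPT"
# iptables -L FORWARD -n --line-numbers             # the rules evaluated before that default policy applies
# iptables-save > /tmp/iptables-$(hostname).txt     # full dump to attach here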

3rmack commented 3 years ago

node1:

# iptables-save
# Generated by iptables-save v1.6.1 on Thu Mar  4 07:24:07 2021
*mangle
:PREROUTING ACCEPT [295400:215490847]
:INPUT ACCEPT [293641:215410982]
:FORWARD ACCEPT [1759:79865]
:OUTPUT ACCEPT [292301:63812538]
:POSTROUTING ACCEPT [292296:63812238]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Thu Mar  4 07:24:07 2021
# Generated by iptables-save v1.6.1 on Thu Mar  4 07:24:07 2021
*filter
:INPUT ACCEPT [184600:35300734]
:FORWARD DROP [1752:79400]
:OUTPUT ACCEPT [188068:38373222]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Thu Mar  4 07:24:07 2021
# Generated by iptables-save v1.6.1 on Thu Mar  4 07:24:07 2021
*nat
:PREROUTING ACCEPT [2069:97392]
:INPUT ACCEPT [317:17992]
:OUTPUT ACCEPT [826:49560]
:POSTROUTING ACCEPT [826:49560]
:DOCKER - [0:0]
:KILO-NAT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-2AHCZLASALRZCLS7 - [0:0]
:KUBE-SEP-3YLRQZ6UJQXS6T5Q - [0:0]
:KUBE-SEP-6MIOBXLIHUQOTRKD - [0:0]
:KUBE-SEP-7AFK7QBGDHWZ2GEU - [0:0]
:KUBE-SEP-GQFJJFHKHZPIAODM - [0:0]
:KUBE-SEP-HOT2XMKPNFS7SQ2N - [0:0]
:KUBE-SEP-N53K6BQTZIOADP5D - [0:0]
:KUBE-SEP-PCJJFSWHCYBSYBAN - [0:0]
:KUBE-SEP-PJFTJTGBHOJ7TXPC - [0:0]
:KUBE-SEP-ZROOOGDTFBYPOKDJ - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-JTFAIQOSQRKTQWS3 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-QMWWTXBG7KFJQKLO - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 10.42.1.0/24 -m comment --comment "Kilo: jump to KILO-NAT chain" -j KILO-NAT
-A DOCKER -i docker0 -j RETURN
-A KILO-NAT -d 10.4.0.1/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
-A KILO-NAT -d 10.42.0.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.1/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.2/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
-A KILO-NAT -d 10.42.1.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.2/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -m comment --comment "Kilo: NAT remaining packets" -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-SEP-2AHCZLASALRZCLS7 -s 10.42.0.4/32 -m comment --comment "kube-system/metrics-server" -j KUBE-MARK-MASQ
-A KUBE-SEP-2AHCZLASALRZCLS7 -p tcp -m comment --comment "kube-system/metrics-server" -m tcp -j DNAT --to-destination 10.42.0.4:443
-A KUBE-SEP-3YLRQZ6UJQXS6T5Q -s 10.42.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-3YLRQZ6UJQXS6T5Q -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.42.0.3:9153
-A KUBE-SEP-6MIOBXLIHUQOTRKD -s 10.42.1.2/32 -m comment --comment "ingress-nginx/default-http-backend" -j KUBE-MARK-MASQ
-A KUBE-SEP-6MIOBXLIHUQOTRKD -p tcp -m comment --comment "ingress-nginx/default-http-backend" -m tcp -j DNAT --to-destination 10.42.1.2:8080
-A KUBE-SEP-7AFK7QBGDHWZ2GEU -s 10.42.1.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-7AFK7QBGDHWZ2GEU -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.42.1.3:9153
-A KUBE-SEP-GQFJJFHKHZPIAODM -s 10.42.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-GQFJJFHKHZPIAODM -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.42.0.3:53
-A KUBE-SEP-HOT2XMKPNFS7SQ2N -s 10.42.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-HOT2XMKPNFS7SQ2N -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.42.0.3:53
-A KUBE-SEP-N53K6BQTZIOADP5D -s 159.100.253.226/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-N53K6BQTZIOADP5D -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 159.100.253.226:6443
-A KUBE-SEP-PCJJFSWHCYBSYBAN -s 10.42.1.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-PCJJFSWHCYBSYBAN -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.42.1.3:53
-A KUBE-SEP-PJFTJTGBHOJ7TXPC -s 185.19.28.253/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-PJFTJTGBHOJ7TXPC -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 185.19.28.253:6443
-A KUBE-SEP-ZROOOGDTFBYPOKDJ -s 10.42.1.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZROOOGDTFBYPOKDJ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.42.1.3:53
-A KUBE-SERVICES ! -s 10.42.0.0/16 -d 10.43.185.28/32 -p tcp -m comment --comment "kube-system/metrics-server cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.43.185.28/32 -p tcp -m comment --comment "kube-system/metrics-server cluster IP" -m tcp --dport 443 -j KUBE-SVC-QMWWTXBG7KFJQKLO
-A KUBE-SERVICES ! -s 10.42.0.0/16 -d 10.43.250.85/32 -p tcp -m comment --comment "ingress-nginx/default-http-backend cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.43.250.85/32 -p tcp -m comment --comment "ingress-nginx/default-http-backend cluster IP" -m tcp --dport 80 -j KUBE-SVC-JTFAIQOSQRKTQWS3
-A KUBE-SERVICES ! -s 10.42.0.0/16 -d 10.43.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.43.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-GQFJJFHKHZPIAODM
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-PCJJFSWHCYBSYBAN
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-3YLRQZ6UJQXS6T5Q
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-SEP-7AFK7QBGDHWZ2GEU
-A KUBE-SVC-JTFAIQOSQRKTQWS3 -m comment --comment "ingress-nginx/default-http-backend" -j KUBE-SEP-6MIOBXLIHUQOTRKD
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-N53K6BQTZIOADP5D
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-PJFTJTGBHOJ7TXPC
-A KUBE-SVC-QMWWTXBG7KFJQKLO -m comment --comment "kube-system/metrics-server" -j KUBE-SEP-2AHCZLASALRZCLS7
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-HOT2XMKPNFS7SQ2N
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-ZROOOGDTFBYPOKDJ
COMMIT
# Completed on Thu Mar  4 07:24:07 2021

node2:

# iptables-save
# Generated by iptables-save v1.6.1 on Thu Mar  4 07:27:11 2021
*mangle
:PREROUTING ACCEPT [245963:224655027]
:INPUT ACCEPT [240261:224344732]
:FORWARD ACCEPT [5702:310295]
:OUTPUT ACCEPT [236572:42704625]
:POSTROUTING ACCEPT [236567:42704325]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Thu Mar  4 07:27:11 2021
# Generated by iptables-save v1.6.1 on Thu Mar  4 07:27:11 2021
*filter
:INPUT ACCEPT [162605:29726290]
:FORWARD DROP [5616:305215]
:OUTPUT ACCEPT [163527:28386986]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Thu Mar  4 07:27:11 2021
# Generated by iptables-save v1.6.1 on Thu Mar  4 07:27:11 2021
*nat
:PREROUTING ACCEPT [5969:326387]
:INPUT ACCEPT [353:21172]
:OUTPUT ACCEPT [924:55536]
:POSTROUTING ACCEPT [924:55536]
:DOCKER - [0:0]
:KILO-NAT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-2AHCZLASALRZCLS7 - [0:0]
:KUBE-SEP-3YLRQZ6UJQXS6T5Q - [0:0]
:KUBE-SEP-6MIOBXLIHUQOTRKD - [0:0]
:KUBE-SEP-7AFK7QBGDHWZ2GEU - [0:0]
:KUBE-SEP-GQFJJFHKHZPIAODM - [0:0]
:KUBE-SEP-HOT2XMKPNFS7SQ2N - [0:0]
:KUBE-SEP-N53K6BQTZIOADP5D - [0:0]
:KUBE-SEP-PCJJFSWHCYBSYBAN - [0:0]
:KUBE-SEP-PJFTJTGBHOJ7TXPC - [0:0]
:KUBE-SEP-ZROOOGDTFBYPOKDJ - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-JTFAIQOSQRKTQWS3 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-QMWWTXBG7KFJQKLO - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 10.42.0.0/24 -m comment --comment "Kilo: jump to KILO-NAT chain" -j KILO-NAT
-A DOCKER -i docker0 -j RETURN
-A KILO-NAT -d 10.4.0.1/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
-A KILO-NAT -d 10.42.0.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.1/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.2/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
-A KILO-NAT -d 10.42.1.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.2/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -m comment --comment "Kilo: NAT remaining packets" -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-SEP-2AHCZLASALRZCLS7 -s 10.42.0.4/32 -m comment --comment "kube-system/metrics-server" -j KUBE-MARK-MASQ
-A KUBE-SEP-2AHCZLASALRZCLS7 -p tcp -m comment --comment "kube-system/metrics-server" -m tcp -j DNAT --to-destination 10.42.0.4:443
-A KUBE-SEP-3YLRQZ6UJQXS6T5Q -s 10.42.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-3YLRQZ6UJQXS6T5Q -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.42.0.3:9153
-A KUBE-SEP-6MIOBXLIHUQOTRKD -s 10.42.1.2/32 -m comment --comment "ingress-nginx/default-http-backend" -j KUBE-MARK-MASQ
-A KUBE-SEP-6MIOBXLIHUQOTRKD -p tcp -m comment --comment "ingress-nginx/default-http-backend" -m tcp -j DNAT --to-destination 10.42.1.2:8080
-A KUBE-SEP-7AFK7QBGDHWZ2GEU -s 10.42.1.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-7AFK7QBGDHWZ2GEU -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.42.1.3:9153
-A KUBE-SEP-GQFJJFHKHZPIAODM -s 10.42.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-GQFJJFHKHZPIAODM -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.42.0.3:53
-A KUBE-SEP-HOT2XMKPNFS7SQ2N -s 10.42.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-HOT2XMKPNFS7SQ2N -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.42.0.3:53
-A KUBE-SEP-N53K6BQTZIOADP5D -s 159.100.253.226/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-N53K6BQTZIOADP5D -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 159.100.253.226:6443
-A KUBE-SEP-PCJJFSWHCYBSYBAN -s 10.42.1.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-PCJJFSWHCYBSYBAN -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.42.1.3:53
-A KUBE-SEP-PJFTJTGBHOJ7TXPC -s 185.19.28.253/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-PJFTJTGBHOJ7TXPC -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 185.19.28.253:6443
-A KUBE-SEP-ZROOOGDTFBYPOKDJ -s 10.42.1.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZROOOGDTFBYPOKDJ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.42.1.3:53
-A KUBE-SERVICES ! -s 10.42.0.0/16 -d 10.43.250.85/32 -p tcp -m comment --comment "ingress-nginx/default-http-backend cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.43.250.85/32 -p tcp -m comment --comment "ingress-nginx/default-http-backend cluster IP" -m tcp --dport 80 -j KUBE-SVC-JTFAIQOSQRKTQWS3
-A KUBE-SERVICES ! -s 10.42.0.0/16 -d 10.43.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.43.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.42.0.0/16 -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES ! -s 10.42.0.0/16 -d 10.43.185.28/32 -p tcp -m comment --comment "kube-system/metrics-server cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.43.185.28/32 -p tcp -m comment --comment "kube-system/metrics-server cluster IP" -m tcp --dport 443 -j KUBE-SVC-QMWWTXBG7KFJQKLO
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-GQFJJFHKHZPIAODM
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-PCJJFSWHCYBSYBAN
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-3YLRQZ6UJQXS6T5Q
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-SEP-7AFK7QBGDHWZ2GEU
-A KUBE-SVC-JTFAIQOSQRKTQWS3 -m comment --comment "ingress-nginx/default-http-backend" -j KUBE-SEP-6MIOBXLIHUQOTRKD
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-N53K6BQTZIOADP5D
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-PJFTJTGBHOJ7TXPC
-A KUBE-SVC-QMWWTXBG7KFJQKLO -m comment --comment "kube-system/metrics-server" -j KUBE-SEP-2AHCZLASALRZCLS7
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-HOT2XMKPNFS7SQ2N
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-ZROOOGDTFBYPOKDJ
COMMIT
# Completed on Thu Mar  4 07:27:11 2021
squat commented 3 years ago

@3rmack were you able to uncover the source of this issue? From what I can see, the iptables rules look totally fine. To begin testing, we could check the following (a rough command sketch follows the list):

  1. can you ping the Kilo interface of one node from the other? I.e. ping 10.4.0.1 and ping 10.4.0.2.
  2. if this works, then we would want to ping the coredns pod on node2 from node1 and tcpdump the kilo0 interface on node2; do we see any ICMP packets arriving on that interface?
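
In commands, that would look roughly like this (a sketch only; the addresses are taken from the outputs above and root shells are assumed on both nodes):

root@node1:# ping -c 3 10.4.0.1      # node2's kilo0 address
root@node2:# ping -c 3 10.4.0.2      # node1's kilo0 address

and, if that works:

root@node1:# ping -c 3 10.42.0.4     # coredns pod hosted on node2
root@node2:# tcpdump -ni kilo0 icmp  # watch for the echo requests arriving over WireGuard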