weaveworks / weave

Simple, resilient multi-host containers networking and more.
https://www.weave.works
Apache License 2.0

Kubernetes weave network not working #3906

Open naeem4github opened 3 years ago

naeem4github commented 3 years ago

What you expected to happen?

What happened?

I set up a cluster for my personal study, but I am unable to communicate with the Pod or the Service.

How to reproduce it?

root@ip-172-31-21-95:~# systemctl daemon-reload

root@ip-172-31-21-95:~# systemctl start kubelet

root@ip-172-31-21-95:~# systemctl enable kubelet.service

root@ip-172-31-21-95:~# sudo su -

root@ip-172-31-21-95:~# kubeadm init

root@ip-172-31-21-95:~# mkdir -p $HOME/.kube

root@ip-172-31-21-95:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

root@ip-172-31-21-95:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

root@ip-172-31-21-95:~# kubectl get pods -o wide -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
coredns-558bd4d5db-s482z                  0/1     Pending   0          53s   <none>         <none>            <none>           <none>
coredns-558bd4d5db-xbhwf                  0/1     Pending   0          53s   <none>         <none>            <none>           <none>
etcd-ip-172-31-21-95                      1/1     Running   0          57s   172.31.21.95   ip-172-31-21-95   <none>           <none>
kube-apiserver-ip-172-31-21-95            1/1     Running   0          57s   172.31.21.95   ip-172-31-21-95   <none>           <none>
kube-controller-manager-ip-172-31-21-95   1/1     Running   0          58s   172.31.21.95   ip-172-31-21-95   <none>           <none>
kube-proxy-zqc4x                          1/1     Running   0          53s   172.31.21.95   ip-172-31-21-95   <none>           <none>
kube-scheduler-ip-172-31-21-95            1/1     Running   0          57s   172.31.21.95   ip-172-31-21-95   <none>           <none>

root@ip-172-31-21-95:~# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
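The long URL above embeds the cluster's `kubectl version` output, base64-encoded with newlines stripped, as the `k8s-version` query parameter. A minimal sketch of just that encoding step, using a made-up stand-in for the real version output:

```shell
# Hypothetical stand-in for the real `kubectl version` output on this cluster.
version_output='Client Version: v1.21.2'

# Same pipeline as in the command above: base64-encode and strip newlines
# so the result is safe to splice into a single URL query parameter.
encoded=$(printf '%s' "$version_output" | base64 | tr -d '\n')
echo "$encoded"
```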

NAME              STATUS   ROLES                  AGE     VERSION
ip-172-31-21-95   Ready    control-plane,master   2m16s   v1.21.2

root@ip-172-31-21-95:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-s482z                  1/1     Running   0          2m27s
kube-system   coredns-558bd4d5db-xbhwf                  1/1     Running   0          2m27s
kube-system   etcd-ip-172-31-21-95                      1/1     Running   0          2m31s
kube-system   kube-apiserver-ip-172-31-21-95            1/1     Running   0          2m31s
kube-system   kube-controller-manager-ip-172-31-21-95   1/1     Running   0          2m32s
kube-system   kube-proxy-zqc4x                          1/1     Running   0          2m27s
kube-system   kube-scheduler-ip-172-31-21-95            1/1     Running   0          2m31s
kube-system   weave-net-smk7l                           2/2     Running   1          33s

root@ip-172-31-21-95:~# kubectl get nodes
NAME               STATUS   ROLES                  AGE    VERSION
ip-172-31-21-95    Ready    control-plane,master   4m5s   v1.21.2
ip-172-31-32-11    Ready    <none>                 27s    v1.21.2
ip-172-31-34-101   Ready    <none>                 49s    v1.21.2

root@ip-172-31-21-95:~# kubectl get nodes
NAME               STATUS   ROLES                  AGE     VERSION
ip-172-31-21-95    Ready    control-plane,master   6m7s    v1.21.2
ip-172-31-32-11    Ready    worker                 2m29s   v1.21.2
ip-172-31-34-101   Ready    worker                 2m51s   v1.21.2

root@ip-172-31-21-95:~# kubectl get all -n kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
pod/coredns-558bd4d5db-s482z                  1/1     Running   0          7m53s
pod/coredns-558bd4d5db-xbhwf                  1/1     Running   0          7m53s
pod/etcd-ip-172-31-21-95                      1/1     Running   0          7m57s
pod/kube-apiserver-ip-172-31-21-95            1/1     Running   0          7m57s
pod/kube-controller-manager-ip-172-31-21-95   1/1     Running   0          7m58s
pod/kube-proxy-qdb49                          1/1     Running   0          4m23s
pod/kube-proxy-zqc4x                          1/1     Running   0          7m53s
pod/kube-proxy-zrcmj                          1/1     Running   0          4m45s
pod/kube-scheduler-ip-172-31-21-95            1/1     Running   0          7m57s
pod/weave-net-b44zv                           2/2     Running   0          4m23s
pod/weave-net-smk7l                           2/2     Running   1          5m59s
pod/weave-net-w942h                           2/2     Running   0          4m45s

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   7m58s

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/kube-proxy   3         3         3       3            3           kubernetes.io/os=linux   7m58s
daemonset.apps/weave-net    3         3         3       3            3           <none>                   5m59s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           7m58s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-558bd4d5db   2         2         2       7m53s

root@ip-172-31-21-95:~# kubectl cluster-info
Kubernetes control plane is running at https://172.31.21.95:6443
CoreDNS is running at https://172.31.21.95:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

ubuntu@ip-172-31-21-95:~$ cat javawebapppod.yml
apiVersion: v1
kind: Pod
metadata:
  name: javawebapppod
  labels:
    app: javawebapp
spec:
  containers:
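The `cat` output above is cut off at `containers:`. For reference, a minimal complete Pod manifest of the same shape; the container name, image, and port below are hypothetical placeholders, not taken from this issue (8080 matches the Service target port reported further down):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: javawebapppod
  labels:
    app: javawebapp
spec:
  containers:
    - name: javawebapp              # hypothetical container name
      image: myrepo/javawebapp:1.0  # placeholder image, not from this issue
      ports:
        - containerPort: 8080       # port the app is assumed to listen on
```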

root@ip-172-31-21-95:~# kubectl apply -f javawebapppod.yml pod/javawebapppod created

root@ip-172-31-21-95:~# kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE     IP          NODE              NOMINATED NODE   READINESS GATES
javawebapppod   1/1     Running   0          2m30s   10.36.0.1   ip-172-31-32-11   <none>           <none>

root@ip-172-31-21-95:~# kubectl get pods --show-labels
NAME            READY   STATUS    RESTARTS   AGE     LABELS
javawebapppod   1/1     Running   0          4m12s   app=javawebapp

root@ip-172-31-21-95:~# cat > javawebappsvc.yml
apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
spec:
  type: ClusterIP
  selector:
    app: javawebapplication
  ports:
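Note that the selector in this file (`app: javawebapplication`) does not match the Pod label `app=javawebapp` shown above; a Service whose selector matches no Pod labels gets no endpoints and its ClusterIP is unreachable. A minimal Service manifest with a matching selector, using the 80 → 8080 port mapping reported later in this issue:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
spec:
  type: ClusterIP
  selector:
    app: javawebapp        # must exactly match the Pod's labels
  ports:
    - port: 80             # Service (ClusterIP) port
      targetPort: 8080     # port the container is assumed to listen on
```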

root@ip-172-31-21-95:~# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   15m   <none>

root@ip-172-31-21-95:~# kubectl get svc -o wide
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE     SELECTOR
javawebappsvc   ClusterIP   10.111.240.61   <none>        80/TCP    7m40s   app=javawebapp
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP   24m     <none>

root@ip-172-31-21-95:~# kubectl describe svc javawebappsvc
Name:              javawebappsvc
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=javawebapp
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.111.240.61
IPs:               10.111.240.61
Port:              80/TCP
TargetPort:        8080/TCP
Endpoints:         10.36.0.1:8080
Session Affinity:  None
Events:            <none>

root@ip-172-31-21-95:~# curl 10.111.240.61 curl: (7) Failed to connect to 10.111.240.61 port 80: No route to host
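One cause worth ruling out when a ClusterIP fails like this: if a Service's selector matches no Pods, kube-proxy installs a REJECT rule for its ClusterIP instead of DNAT rules. The saved `iptables-save` dump later in this report contains exactly such a rule ("default/javawebappsvc has no endpoints", though for a different ClusterIP, suggesting the Service was recreated at some point). A quick, self-contained sketch of grepping a saved dump for that telltale comment; the `rule` variable below is a line copied from this issue:

```shell
# Sample rule copied from the iptables-save output in this report.
rule='-A KUBE-SERVICES -d 10.110.110.157/32 -p tcp -m comment --comment "default/javawebappsvc has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable'

# On a live node you would pipe `sudo iptables-save` in here instead;
# any match means a Service selector is matching zero Pods.
printf '%s\n' "$rule" | grep -o 'has no endpoints'
```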

Anything else we need to know?

Versions:

$ weave version
ubuntu@ip-172-31-29-51:~$ weave version

Command 'weave' not found, but can be installed with:

sudo apt install texlive-binaries

$ docker version
ubuntu@ip-172-31-29-51:~$ docker version 
Client:
 Version:           20.10.2
 API version:       1.41
 Go version:        go1.13.8
 Git commit:        20.10.2-0ubuntu1~20.04.2
 Built:             Tue Mar 30 21:24:57 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.24/version: dial unix /var/run/docker.sock: connect: permission denied

$ uname -a
ubuntu@ip-172-31-29-51:~$ uname -a 
Linux ip-172-31-29-51 5.4.0-1045-aws #47-Ubuntu SMP Tue Apr 13 07:02:25 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
$ kubectl version

Logs:

$ docker logs weave

or, if using Kubernetes:

$ kubectl logs -n kube-system <weave-net-pod> weave

Network:


$ ip route
default via 172.31.16.1 dev eth0 proto dhcp src 172.31.29.51 metric 100 
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
172.31.16.0/20 dev eth0 proto kernel scope link src 172.31.29.51 
172.31.16.1 dev eth0 proto dhcp scope link src 172.31.29.51 metric 100 

$ ip -4 -o addr
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: eth0    inet 172.31.29.51/20 brd 172.31.31.255 scope global dynamic eth0\       valid_lft 1948sec preferred_lft 1948sec
3: docker0    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\       valid_lft forever preferred_lft forever
6: weave    inet 10.32.0.1/12 brd 10.47.255.255 scope global weave\       valid_lft forever preferred_lft forever

$ sudo iptables-save
# Generated by iptables-save v1.8.4 on Thu Jun 24 18:28:08 2021
*mangle
:PREROUTING ACCEPT [2334898:395479203]
:INPUT ACCEPT [2334893:395478629]
:FORWARD ACCEPT [4:534]
:OUTPUT ACCEPT [2419444:502145743]
:POSTROUTING ACCEPT [2419428:502144481]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:WEAVE-CANARY - [0:0]
COMMIT
# Completed on Thu Jun 24 18:28:08 2021
# Generated by iptables-save v1.8.4 on Thu Jun 24 18:28:08 2021
*filter
:INPUT ACCEPT [539242:100817118]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [564443:127293539]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-CANARY - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-EGRESS - [0:0]
:WEAVE-NPC-EGRESS-ACCEPT - [0:0]
:WEAVE-NPC-EGRESS-CUSTOM - [0:0]
:WEAVE-NPC-EGRESS-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 6784 -m addrtype ! --src-type LOCAL -m conntrack ! --ctstate RELATED,ESTABLISHED -m comment --comment "Block non-local access to Weave Net control port" -j DROP
-A INPUT -i weave -j WEAVE-NPC-EGRESS
-A FORWARD -i weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC-EGRESS
-A FORWARD -o weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.110.110.157/32 -p tcp -m comment --comment "default/javawebappsvc has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m physdev --physdev-out vethwe-bridge --physdev-is-bridged -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-]B*(W?)t*z5O17G044[gUo#$l dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-node-lease" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst -m comment --comment "DefaultAllow ingress isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-EGRESS -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC-EGRESS -m physdev --physdev-in vethwe-bridge --physdev-is-bridged -j RETURN
-A WEAVE-NPC-EGRESS -m addrtype --dst-type LOCAL -j RETURN
-A WEAVE-NPC-EGRESS -d 224.0.0.0/4 -j RETURN
-A WEAVE-NPC-EGRESS -m state --state NEW -j WEAVE-NPC-EGRESS-DEFAULT
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j WEAVE-NPC-EGRESS-CUSTOM
-A WEAVE-NPC-EGRESS-ACCEPT -j MARK --set-xmark 0x40000/0x40000
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j RETURN
COMMIT
# Completed on Thu Jun 24 18:28:08 2021
# Generated by iptables-save v1.8.4 on Thu Jun 24 18:28:08 2021
*nat
:PREROUTING ACCEPT [436:20492]
:INPUT ACCEPT [436:20492]
:OUTPUT ACCEPT [7706:492575]
:POSTROUTING ACCEPT [7706:492575]
:DOCKER - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-3DU66DE6VORVEQVD - [0:0]
:KUBE-SEP-CNOO4EWCPTS3SPU3 - [0:0]
:KUBE-SEP-IH4MYBT4OZHCB5RZ - [0:0]
:KUBE-SEP-S4MK5EVI7CLHCCS6 - [0:0]
:KUBE-SEP-SWLOBIBPXYBP7G2Z - [0:0]
:KUBE-SEP-SZZ7MOWKTWUFXIJT - [0:0]
:KUBE-SEP-UJJNLSZU6HL4F5UO - [0:0]
:KUBE-SEP-ZCHNBYOGFZRFKYMA - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-PXFROJKNMMELKHNS - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:WEAVE - [0:0]
:WEAVE-CANARY - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-3DU66DE6VORVEQVD -s 10.32.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-3DU66DE6VORVEQVD -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-CNOO4EWCPTS3SPU3 -s 172.31.29.51/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-CNOO4EWCPTS3SPU3 -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 172.31.29.51:6443
-A KUBE-SEP-IH4MYBT4OZHCB5RZ -s 10.36.0.1/32 -m comment --comment "default/javasvc" -j KUBE-MARK-MASQ
-A KUBE-SEP-IH4MYBT4OZHCB5RZ -p tcp -m comment --comment "default/javasvc" -m tcp -j DNAT --to-destination 10.36.0.1:8080
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -s 10.32.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-SWLOBIBPXYBP7G2Z -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-SWLOBIBPXYBP7G2Z -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.32.0.2:9153
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-UJJNLSZU6HL4F5UO -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-UJJNLSZU6HL4F5UO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-ZCHNBYOGFZRFKYMA -s 10.32.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZCHNBYOGFZRFKYMA -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.32.0.3:9153
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.98.236.100/32 -p tcp -m comment --comment "default/javasvc cluster IP" -m tcp --dport 80 -j KUBE-SVC-PXFROJKNMMELKHNS
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-UJJNLSZU6HL4F5UO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-S4MK5EVI7CLHCCS6
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SWLOBIBPXYBP7G2Z
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-SEP-ZCHNBYOGFZRFKYMA
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-CNOO4EWCPTS3SPU3
-A KUBE-SVC-PXFROJKNMMELKHNS -m comment --comment "default/javasvc" -j KUBE-SEP-IH4MYBT4OZHCB5RZ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SZZ7MOWKTWUFXIJT
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-3DU66DE6VORVEQVD
-A WEAVE -m set --match-set weaver-no-masq-local dst -m comment --comment "Prevent SNAT to locally running containers" -j RETURN
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Thu Jun 24 18:28:08 2021
cccsss01 commented 2 years ago

I know this works for CentOS 8, but: disable firewalld, flush iptables, and set /etc/docker/daemon.json to { "exec-opts": ["native.cgroupdriver=systemd"] }.
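For clarity, that snippet written out as a complete /etc/docker/daemon.json (Docker must be restarted afterwards, e.g. `sudo systemctl restart docker`):

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```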

You need to remove Weave Net prior to installing the Weave CNI add-on.