kubernetes / kubernetes

Production-Grade Container Scheduling and Management
https://kubernetes.io
Apache License 2.0

kube-proxy not creating iptable rules #61005

Closed: hoeghh closed this issue 4 years ago.

hoeghh commented 6 years ago

Is this a BUG REPORT or FEATURE REQUEST?: /kind bug

What happened: Installed Kubernetes and kube-proxy starts fine. It reports to journalctl that it is creating rules, but iptables -L doesn't show them, and services don't work. Pod-to-pod networking works fine. DNS is set up correctly in resolv.conf for the pods, but doesn't work either, since services don't work.

What you expected to happen: kube-proxy to create the iptables rules for services.

How to reproduce it (as minimally and precisely as possible): start the cluster

Anything else we need to know?:

iptables --version
iptables v1.6.1

Environment:

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --cluster-cidr=10.32.0.0/12 \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \
  --proxy-mode=iptables \
  --v=4
Restart=on-failure
RestartSec=5

[Install] WantedBy=multi-user.target

cat /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \
  --allow-privileged=true \
  --anonymous-auth=false \
  --authorization-mode=Webhook \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --cloud-provider= \
  --cluster-dns=10.32.0.10 \
  --cluster-domain=cluster.local \
  --image-pull-progress-deadline=2m \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --pod-cidr=10.200.1.0/24 \
  --register-node=true \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --runtime-request-timeout=15m \
  --tls-cert-file=/var/lib/kubelet/k8s-worker-1.pem \
  --tls-private-key-file=/var/lib/kubelet/k8s-worker-1-key.pem \
  --v=2
Restart=on-failure
RestartSec=5

[Install] WantedBy=multi-user.target
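
For reference, after changing units like these, a minimal way to re-apply them and watch kube-proxy's sync loop is with standard systemd commands (a sketch, assuming the unit file is installed as kube-proxy.service):

systemctl daemon-reload
systemctl restart kube-proxy
journalctl -u kube-proxy -f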

Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.665467 19094 config.go:141] Calling handler.OnEndpointsUpdate
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691625 19094 shared_informer.go:122] caches populated
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691647 19094 controller_utils.go:1026] Caches are synced for endpoints config controller
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691653 19094 config.go:110] Calling handler.OnEndpointsSynced()
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691724 19094 proxier.go:984] Not syncing iptables until Services and Endpoints have been received from master
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691734 19094 proxier.go:980] syncProxyRules took 25.208µs
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691775 19094 shared_informer.go:122] caches populated
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691781 19094 controller_utils.go:1026] Caches are synced for service config controller
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691786 19094 config.go:210] Calling handler.OnServiceSynced()
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691821 19094 proxier.go:329] Adding new service port "default/kubernetes:https" at 10.32.0.1:443/TCP
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691843 19094 proxier.go:329] Adding new service port "kube-system/kube-dns:dns" at 10.32.0.10:53/UDP
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691851 19094 proxier.go:329] Adding new service port "kube-system/kube-dns:dns-tcp" at 10.32.0.10:53/TCP
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691859 19094 proxier.go:329] Adding new service port "default/nw-service:" at 10.32.0.43:80/TCP
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691885 19094 proxier.go:1000] Stale udp service kube-system/kube-dns:dns -> 10.32.0.10
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691891 19094 proxier.go:1005] Syncing iptables rules
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.691899 19094 iptables.go:419] running iptables -N [KUBE-SERVICES -t filter]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.693239 19094 iptables.go:419] running iptables -N [KUBE-SERVICES -t nat]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.694308 19094 iptables.go:419] running iptables -C [INPUT -t filter -m comment --comment kubernetes service port
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.695496 19094 iptables.go:419] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service por
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.696788 19094 iptables.go:419] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portal
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.698428 19094 iptables.go:419] running iptables -I [OUTPUT -t nat -m comment --comment kubernetes service portal
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.700269 19094 iptables.go:419] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service po
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.703112 19094 iptables.go:419] running iptables -I [PREROUTING -t nat -m comment --comment kubernetes service po
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.705565 19094 iptables.go:419] running iptables -N [KUBE-POSTROUTING -t nat]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.707251 19094 iptables.go:419] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouti
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.708994 19094 iptables.go:419] running iptables -I [POSTROUTING -t nat -m comment --comment kubernetes postrouti
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.711493 19094 iptables.go:419] running iptables -N [KUBE-FORWARD -t filter]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.714077 19094 iptables.go:419] running iptables -C [FORWARD -t filter -m comment --comment kubernetes forward ru
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.717317 19094 iptables.go:321] running iptables-save [-t filter]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.719184 19094 iptables.go:321] running iptables-save [-t nat]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.721459 19094 proxier.go:1664] Restoring iptables rules: filter
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-SERVICES - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-FORWARD - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x00004000/0x00004000 -j ACCEPT
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-FORWARD -s 10.32.0.0/12 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RE
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -d 10.32.0.0/12 -m conntrack --ctsta
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: COMMIT
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: nat
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-SERVICES - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-NODEPORTS - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-POSTROUTING - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-MARK-MASQ - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-SEP-7VWSNUIAZBJTUYEY - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-SEP-F2K4YTDEILQGH7AU - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-SEP-SZZ7MOWKTWUFXIJT - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-SEP-UJJNLSZU6HL4F5UO - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-SVC-CAFVZHNSQBQYBWG7 - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-SEP-I4JOGY4Z6NDUV3WO - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: :KUBE-SEP-ORXPT4GAK7NUVVBU - [0:0]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MAS
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 10.32.0.1/32 --dport 443 ! -s 10.32
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 10.32.0.1/32 --dport 443 -j KUBE-SV
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-7VWSNUIAZBJTUYEY --rcheck -
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-F2K4YTDEILQGH7AU --rcheck -
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m statistic --mode random --probability 0.50000 -j K
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SEP-7VWSNUIAZBJTUYEY -m comment --comment default/kubernetes:https -s 192.168.50.21/32 -j KUBE-MARK-MASQ
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SEP-7VWSNUIAZBJTUYEY -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-7VWSNUIAZBJTUYEY --set -m t
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-F2K4YTDEILQGH7AU
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SEP-F2K4YTDEILQGH7AU -m comment --comment default/kubernetes:https -s 192.168.50.22/32 -j KUBE-MARK-MASQ
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SEP-F2K4YTDEILQGH7AU -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-F2K4YTDEILQGH7AU --set -m t
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SERVICES -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp -p udp -d 10.32.0.10/32 --dport 53 ! -s 10.32
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SERVICES -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp -p udp -d 10.32.0.10/32 --dport 53 -j KUBE-SV
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment kube-system/kube-dns:dns -j KUBE-SEP-SZZ7MOWKTWUFXIJT
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SEP-SZZ7MOWKTWUFXIJT -m comment --comment kube-system/kube-dns:dns -s 10.32.0.2/32 -j KUBE-MARK-MASQ
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SEP-SZZ7MOWKTWUFXIJT -m comment --comment kube-system/kube-dns:dns -m udp -p udp -j DNAT --to-destination 10.32.0.2:53
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SERVICES -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp -p tcp -d 10.32.0.10/32 --dport 53 ! -s 1
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SERVICES -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp -p tcp -d 10.32.0.10/32 --dport 53 -j KUB
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment kube-system/kube-dns:dns-tcp -j KUBE-SEP-UJJNLSZU6HL4F5UO
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SEP-UJJNLSZU6HL4F5UO -m comment --comment kube-system/kube-dns:dns-tcp -s 10.32.0.2/32 -j KUBE-MARK-MASQ
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SEP-UJJNLSZU6HL4F5UO -m comment --comment kube-system/kube-dns:dns-tcp -m tcp -p tcp -j DNAT --to-destination 10.32.0.2:
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SERVICES -m comment --comment "default/nw-service: cluster IP" -m tcp -p tcp -d 10.32.0.43/32 --dport 80 ! -s 10.32.0.0/
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SERVICES -m comment --comment "default/nw-service: cluster IP" -m tcp -p tcp -d 10.32.0.43/32 --dport 80 -j KUBE-SVC-CAF
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SVC-CAFVZHNSQBQYBWG7 -m comment --comment default/nw-service: -m statistic --mode random --probability 0.50000 -j KUBE-S
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SEP-I4JOGY4Z6NDUV3WO -m comment --comment default/nw-service: -s 10.32.0.3/32 -j KUBE-MARK-MASQ
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SEP-I4JOGY4Z6NDUV3WO -m comment --comment default/nw-service: -m tcp -p tcp -j DNAT --to-destination 10.32.0.3:80
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SVC-CAFVZHNSQBQYBWG7 -m comment --comment default/nw-service: -j KUBE-SEP-ORXPT4GAK7NUVVBU
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SEP-ORXPT4GAK7NUVVBU -m comment --comment default/nw-service: -s 10.32.0.4/32 -j KUBE-MARK-MASQ
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SEP-ORXPT4GAK7NUVVBU -m comment --comment default/nw-service: -m tcp -p tcp -j DNAT --to-destination 10.32.0.4:80
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: -A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: COMMIT
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.721537 19094 iptables.go:381] running iptables-restore [--noflush --counters]
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.724895 19094 healthcheck.go:235] Not saving endpoints for unknown healthcheck "kube-system/kube-dns"
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.724908 19094 healthcheck.go:235] Not saving endpoints for unknown healthcheck "default/nw-service"
Mar 10 16:55:09 k8s-worker-1 kube-proxy[19094]: I0310 16:55:09.726454 19094 proxier.go:980] syncProxyRules took 34.639587ms
Mar 10 16:55:10 k8s-worker-1 kube-proxy[19094]: I0310 16:55:10.545475 19094 config.go:141] Calling handler.OnEndpointsUpdate
Mar 10 16:55:11 k8s-worker-1 kube-proxy[19094]: I0310 16:55:11.691187 19094 config.go:141] Calling handler.OnEndpointsUpdate

iptables -l
iptables v1.6.1: unknown option "-l"
Try `iptables -h' or 'iptables --help' for more information.
[root@k8s-worker-1 vagrant]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain FORWARD (policy DROP)
target     prot opt source               destination
WEAVE-NPC  all  --  anywhere             anywhere             /* NOTE: this must go before '-j KUBE-FORWARD' */
NFLOG      all  --  anywhere             anywhere             state NEW nflog-group 86
DROP       all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forward rules */
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  10.32.0.0/12         anywhere             /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             10.32.0.0/12         /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination

Chain WEAVE-NPC (1 references)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             base-address.mcast.net/4
WEAVE-NPC-DEFAULT  all  --  anywhere             anywhere             state NEW
WEAVE-NPC-INGRESS  all  --  anywhere             anywhere             state NEW
ACCEPT     all  --  anywhere             anywhere             ! match-set weave-local-pods dst

Chain WEAVE-NPC-DEFAULT (1 references)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             match-set weave-E.1.0W^NGSp]0_t5WwH/]gX@L dst /* DefaultAllow isolation for namespace: default */
ACCEPT     all  --  anywhere             anywhere             match-set weave-0EHD/vdN#O4]V?o4Tx7kS;APH dst /* DefaultAllow isolation for namespace: kube-public */
ACCEPT     all  --  anywhere             anywhere             match-set weave-?b%zl9GIe0AET1(QI^7NWefO dst /* DefaultAllow isolation for namespace: kube-system */
ACCEPT     all  --  anywhere             anywhere             match-set weave-lF)6Q3}|pmm8Nd:QNQ6Xr(0 dst /* DefaultAllow isolation for namespace: traefik */

Chain WEAVE-NPC-INGRESS (1 references)
target     prot opt source               destination


The strange thing is that iptables-save does show my nw-service:

iptables-save

# Generated by iptables-save v1.6.1 on Sat Mar 10 17:49:37 2018

*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [3:180]
:POSTROUTING ACCEPT [3:180]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-7VWSNUIAZBJTUYEY - [0:0]
:KUBE-SEP-F2K4YTDEILQGH7AU - [0:0]
:KUBE-SEP-I4JOGY4Z6NDUV3WO - [0:0]
:KUBE-SEP-ORXPT4GAK7NUVVBU - [0:0]
:KUBE-SEP-SZZ7MOWKTWUFXIJT - [0:0]
:KUBE-SEP-UJJNLSZU6HL4F5UO - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-CAFVZHNSQBQYBWG7 - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-7VWSNUIAZBJTUYEY -s 192.168.50.21/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-7VWSNUIAZBJTUYEY -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-7VWSNUIAZBJTUYEY --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 192.168.50.21:6443
-A KUBE-SEP-F2K4YTDEILQGH7AU -s 192.168.50.22/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-F2K4YTDEILQGH7AU -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-F2K4YTDEILQGH7AU --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 192.168.50.22:6443
-A KUBE-SEP-I4JOGY4Z6NDUV3WO -s 10.32.0.3/32 -m comment --comment "default/nw-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-I4JOGY4Z6NDUV3WO -p tcp -m comment --comment "default/nw-service:" -m tcp -j DNAT --to-destination 10.32.0.3:80
-A KUBE-SEP-ORXPT4GAK7NUVVBU -s 10.32.0.4/32 -m comment --comment "default/nw-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-ORXPT4GAK7NUVVBU -p tcp -m comment --comment "default/nw-service:" -m tcp -j DNAT --to-destination 10.32.0.4:80
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-UJJNLSZU6HL4F5UO -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-UJJNLSZU6HL4F5UO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SERVICES -d 10.32.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.32.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.32.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.32.0.43/32 -p tcp -m comment --comment "default/nw-service: cluster IP" -m tcp --dport 80 -j KUBE-SVC-CAFVZHNSQBQYBWG7
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-CAFVZHNSQBQYBWG7 -m comment --comment "default/nw-service:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-I4JOGY4Z6NDUV3WO
-A KUBE-SVC-CAFVZHNSQBQYBWG7 -m comment --comment "default/nw-service:" -j KUBE-SEP-ORXPT4GAK7NUVVBU
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-UJJNLSZU6HL4F5UO
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-7VWSNUIAZBJTUYEY --mask 255.255.255.255 --rsource -j KUBE-SEP-7VWSNUIAZBJTUYEY
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-F2K4YTDEILQGH7AU --mask 255.255.255.255 --rsource -j KUBE-SEP-F2K4YTDEILQGH7AU
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-7VWSNUIAZBJTUYEY
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-F2K4YTDEILQGH7AU
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-SZZ7MOWKTWUFXIJT
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT

# Completed on Sat Mar 10 17:49:37 2018

# Generated by iptables-save v1.6.1 on Sat Mar 10 17:49:37 2018

*filter
:INPUT ACCEPT [74:15236]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [60:5896]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -o weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -m comment --comment "kubernetes forward rules" -j KUBE-FORWARD
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC -m set ! --match-set weave-local-pods dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-E.1.0W^NGSp]0_t5WwH/]gX@L dst -m comment --comment "DefaultAllow isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-0EHD/vdN#O4]V?o4Tx7kS;APH dst -m comment --comment "DefaultAllow isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-?b%zl9GIe0AET1(QI^7NWefO dst -m comment --comment "DefaultAllow isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-lF)6Q3}|pmm8Nd:QNQ6Xr(0 dst -m comment --comment "DefaultAllow isolation for namespace: traefik" -j ACCEPT
COMMIT

# Completed on Sat Mar 10 17:49:37 2018

hoeghh commented 6 years ago

/sig network

hoeghh commented 6 years ago

Here is the result of my problem. From one pod, I can curl another pod, but not the service:

kubectl get pods -o wide
NAME                      READY     STATUS    RESTARTS   AGE       IP          NODE
nwtool-6cd6f6795d-ss2x6   1/1       Running   0          3h        10.32.0.3   k8s-worker-1
nwtool-6cd6f6795d-t7jw7   1/1       Running   0          3h        10.32.0.4   k8s-worker-1

kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.32.0.1    <none>        443/TCP   3h
nw-service   ClusterIP   10.32.0.43   <none>        80/TCP    2h

kubectl exec -it nwtool-6cd6f6795d-t7jw7 bash
[root@nwtool-6cd6f6795d-t7jw7 /]# curl 10.32.0.4
<H1>Praqma Network MultiTool - nginx - It Works!</H1>
<p>Container Name: nwtool-6cd6f6795d-t7jw7
Container IP: 10.32.0.4 <BR></p>

[root@nwtool-6cd6f6795d-t7jw7 /]# curl 10.32.0.3
<H1>Praqma Network MultiTool - nginx - It Works!</H1>
<p>Container Name: nwtool-6cd6f6795d-ss2x6
Container IP: 10.32.0.3 <BR></p>

[root@nwtool-6cd6f6795d-t7jw7 /]# curl 10.32.0.43
curl: (7) Failed connect to 10.32.0.43:80; No route to host
[root@nwtool-6cd6f6795d-t7jw7 /]# 
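
To see whether the ClusterIP is wired up on the node at all, the service endpoints and its DNAT chain can be checked directly (a sketch; the KUBE-SVC-CAFVZHNSQBQYBWG7 chain name is taken from the kube-proxy log above):

kubectl get endpoints nw-service
iptables -t nat -S KUBE-SERVICES | grep 10.32.0.43
iptables -t nat -S KUBE-SVC-CAFVZHNSQBQYBWG7
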
hoeghh commented 6 years ago

Found this in journalctl for kube-proxy

Mar 10 17:45:10 k8s-worker-1 kube-proxy[22884]: E0310 17:45:10.260208   22884 proxier.go:792] Failed to execute iptables-restore for nat: exit status 1 (iptables-restore: line 
Mar 10 17:45:10 k8s-worker-1 kube-proxy[22884]: )
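
One way to reproduce such a restore failure outside of kube-proxy is to round-trip the nat table through iptables-restore's test mode (a sketch; the temp path is arbitrary):

iptables-save -t nat > /tmp/kube-nat.rules
iptables-restore --test < /tmp/kube-nat.rules    # reports the failing line without committing anything
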
Lion-Wei commented 6 years ago

You may need to use iptables -t nat -nL to check the rules kube-proxy created.
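
For example (chain names as they appear in the kube-proxy log above):

iptables -t nat -nL KUBE-SERVICES
iptables -t nat -nL | grep KUBE-SVC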

kumudt commented 6 years ago

The fix #60978 hasn't fixed the issue for iptables 1.6.0 (the Kops Debian Stretch image). I still see the same failures even after upgrading the cluster to 1.9.7, which has this fix backported.

opsnull commented 6 years ago

Same issue here.

curl pod_ip:port is OK, but curl service_ip:port is not.

Failed to execute iptables-restore for nat: exit status 1 (iptables-restore: line 7

chen-joe1015 commented 6 years ago

the same problem +1

chen-joe1015 commented 6 years ago

Fixes #58956

Release note:

Fixed kube-proxy to work correctly with iptables 1.6.2 and later.

0verc1ocker commented 5 years ago

@kumudt

The fix #60978 hasn't fixed the issue for iptables 1.6.0 (the Kops Debian Stretch image). I still see the same failures even after upgrading the cluster to 1.9.7, which has this fix backported.

Seeing the same problem with kops debian stretch image for 1.10 k8s. Public tag of image in AWS: kope.io/k8s-1.10-debian-stretch-amd64-hvm-ebs-2018-08-17

@chen-joe1015

Fixes #58956

Release note:

Fixed kube-proxy to work correctly with iptables 1.6.2 and later.

This seems to still be broken on kops Debian Stretch images, where the default iptables version is 1.6.0. Should iptables be updated on the default Debian Stretch images?

diwakar-s-maurya commented 5 years ago

the same problem +1

majinghe commented 5 years ago

I met the same issue. kubectl version: v1.10.4; Docker version: 17.03.2-ce. Issue details: two pods on one node, each with its own svc. From one pod, telnet to the other pod's IP works, but telnet to the other pod's svc fails.

How to debug or fix this issue?

0verc1ocker commented 5 years ago

In my experience, this might have been an issue with a misconfigured CNI. At the time I was migrating a k8s cluster from the Calico CNI to the Weave Net CNI, and there may still have been Calico IP-allocation and policy pods running in the cluster.

shelmingsong commented 5 years ago

same issue

Aisuko commented 5 years ago

Same issue here. I used kube-router and deleted the kube-proxy pods; the cluster could no longer communicate internally, so I had to redeploy kube-proxy to Kubernetes. The kube-proxy pods were in Running status, but they could not create the iptables rules.

po/kube-proxy-7fs6c                        1/1       Running            1          1h        10.116.18.75    node6     controller-revision-hash=2039678971,k8s-app=kube-proxy,pod-template-generation=1
po/kube-proxy-82kpk                        1/1       Running            1          1h        10.116.18.145   master1   controller-revision-hash=2039678971,k8s-app=kube-proxy,pod-template-generation=1
po/kube-proxy-c4sd4                        1/1       Running            1          1h        10.116.18.72    node4     controller-revision-hash=2039678971,k8s-app=kube-proxy,pod-template-generation=1
po/kube-proxy-f4ld4                        1/1       Running            1          1h        10.116.18.146   master2   controller-revision-hash=2039678971,k8s-app=kube-proxy,pod-template-generation=1
po/kube-proxy-fczkr                        1/1       Running            1          1h        10.116.18.147   master3   controller-revision-hash=2039678971,k8s-app=kube-proxy,pod-template-generation=1
po/kube-proxy-hcwpm                        1/1       Running            1          1h        10.116.18.74    node5     controller-revision-hash=2039678971,k8s-app=kube-proxy,pod-template-generation=1
po/kube-proxy-lbrbp                        1/1       Running            1          1h        10.116.18.148   node1     controller-revision-hash=2039678971,k8s-app=kube-proxy,pod-template-generation=1
po/kube-proxy-nht24                        1/1       Running            1          1h        10.116.18.71    node3     controller-revision-hash=2039678971,k8s-app=kube-proxy,pod-template-generation=1
[root@master1 ~]# iptables -t nat -nL
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
KUBE-POSTROUTING  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0           

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

Chain KUBE-MARK-DROP (0 references)
target     prot opt source               destination         
MARK       all  --  0.0.0.0/0            0.0.0.0/0            MARK or 0x8000

Chain KUBE-MARK-MASQ (0 references)
target     prot opt source               destination         
MARK       all  --  0.0.0.0/0            0.0.0.0/0            MARK or 0x4000

Chain KUBE-POSTROUTING (1 references)
target     prot opt source               destination         
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
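
Note that in this dump KUBE-MARK-MASQ and KUBE-MARK-DROP have 0 references and there is no KUBE-SERVICES jump in PREROUTING or OUTPUT at all, so the sync never got that far. The kube-proxy pod logs should say why (a sketch; pod name from the listing above, assuming the usual kube-system namespace):

kubectl -n kube-system logs kube-proxy-82kpk | grep -iE 'iptables|error'
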
dcbw commented 5 years ago

If you are still seeing this, please grab:

1) pod logs for the kube-proxy pod or the kube-proxy process ("docker logs" for the container, or "journalctl -b _PID=" if not)
2) the output of "iptables-save"
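
A minimal sketch of collecting both (the pod name is a placeholder for your environment):

kubectl -n kube-system logs kube-proxy-xxxxx > kube-proxy.log
# or, for a kube-proxy process running outside a pod:
journalctl -b _PID=$(pgrep -o kube-proxy) > kube-proxy.log
iptables-save > iptables-save.txt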

zacekjakub commented 5 years ago

The same issue here...

kube-proxy logs:

I0510 15:47:02.718568       1 server_others.go:140] Using iptables Proxier.
I0510 15:47:02.761602       1 server_others.go:174] Tearing down inactive rules.
E0510 15:47:02.823194       1 proxier.go:540] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
I0510 15:47:03.108512       1 server.go:448] Version: v1.11.3
I0510 15:47:03.124095       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0510 15:47:03.124248       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0510 15:47:03.124306       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0510 15:47:03.124400       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0510 15:47:03.124561       1 config.go:102] Starting endpoints config controller
I0510 15:47:03.124576       1 controller_utils.go:1025] Waiting for caches to sync for endpoints config controller
I0510 15:47:03.124724       1 config.go:202] Starting service config controller
I0510 15:47:03.124737       1 controller_utils.go:1025] Waiting for caches to sync for service config controller
I0510 15:47:03.224789       1 controller_utils.go:1032] Caches are synced for endpoints config controller
I0510 15:47:03.224914       1 controller_utils.go:1032] Caches are synced for service config controller
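
The "Too many links" error when deleting KUBE-MARK-MASQ usually means the chain is still referenced by other rules, which is why tearing down the inactive ipvs proxier's rules fails; a quick way to count the remaining references (a sketch):

iptables-save | grep -c 'KUBE-MARK-MASQ'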

# Generated by iptables-save v1.6.1 on Mon May 20 14:52:16 2019

*raw
:PREROUTING ACCEPT [177651250:34430857309]
:OUTPUT ACCEPT [113471019:27074616924]
:cali-OUTPUT - [0:0]
:cali-PREROUTING - [0:0]
:cali-failsafe-in - [0:0]
:cali-failsafe-out - [0:0]
:cali-from-host-endpoint - [0:0]
:cali-to-host-endpoint - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A cali-OUTPUT -m comment --comment "cali:njdnLwYeGqBJyMxW" -j MARK --set-xmark 0x0/0xf0000
-A cali-OUTPUT -m comment --comment "cali:rz86uTUcEZAfFsh7" -j cali-to-host-endpoint
-A cali-OUTPUT -m comment --comment "cali:pN0F5zD0b8yf9W1Z" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:XFX5xbM8B9qR10JG" -j MARK --set-xmark 0x0/0xf0000
-A cali-PREROUTING -i cali+ -m comment --comment "cali:EWMPb0zVROM-woQp" -j MARK --set-xmark 0x40000/0x40000
-A cali-PREROUTING -m comment --comment "cali:Ek_rsNpunyDlK3sH" -m mark --mark 0x0/0x40000 -j cali-from-host-endpoint
-A cali-PREROUTING -m comment --comment "cali:nM-DzTFPwQbQvtRj" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:QOO5NUOqOSS1_Iw0" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:cwZWoBSwVeIAZmVN" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:7FbNXT91kugE_upR" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:ywE9WYUBEpve70WT" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:l-WQSVBf_lygPR0J" -m multiport --dports 6667 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:k9jPBsnz833bYNtN" -m multiport --sports 53 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:h6bDkHXiHjFdQFvi" -m multiport --sports 67 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:ZxyjJQRmKuKXDHob" -m multiport --sports 179 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:simwjHaxrPmaHOEO" -m multiport --sports 2379 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:hvk-Re2iN6cMDIO-" -m multiport --sports 2380 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:czejYL2nB2RLhrhj" -m multiport --sports 6666 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:Poam7ro8PATnz_3V" -m multiport --sports 6667 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:82hjfji-wChFhAqL" -m multiport --dports 53 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:TNM3RfEjbNr72hgH" -m multiport --dports 67 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:ycxKitIl4u3dK0HR" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:hxjEWyxdkXXkdvut" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:cA_GLtruuvG88KiO" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Sb1hkLYFMrKS6r01" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:UwLSebGONJUG4yG-" -m multiport --dports 6667 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:r23CvAiW0ROtMTyk" -m multiport --sports 22 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:D9jU-Lf4ZjKkTtdD" -m multiport --sports 68 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:5zDpOHUwMrjzLzZl" -m multiport --sports 179 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Jq44rynzFYoWGr4q" -m multiport --sports 2379 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:OiGBCpR5GP0HW_y6" -m multiport --sports 2380 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:iwXWeITN771fTZ2N" -m multiport --sports 6666 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Ot9A94gzys2kTtDj" -m multiport --sports 6667 -j ACCEPT
COMMIT

# Completed on Mon May 20 14:52:16 2019

# Generated by iptables-save v1.6.1 on Mon May 20 14:52:16 2019

*mangle
:PREROUTING ACCEPT [790014:54689560]
:INPUT ACCEPT [113243052:27409576019]
:FORWARD ACCEPT [64408031:7021267471]
:OUTPUT ACCEPT [113471038:27074623520]
:POSTROUTING ACCEPT [177879060:34095890523]
:cali-PREROUTING - [0:0]
:cali-failsafe-in - [0:0]
:cali-from-host-endpoint - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A cali-PREROUTING -m comment --comment "cali:6BJqBjBC7crtA-7-" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:KX7AGNd6rMcDUai6" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:wNH7KsA3ILKJBsY9" -j cali-from-host-endpoint
-A cali-PREROUTING -m comment --comment "cali:Cg96MgVuoPm7UMRo" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:QOO5NUOqOSS1_Iw0" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:cwZWoBSwVeIAZmVN" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:7FbNXT91kugE_upR" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:ywE9WYUBEpve70WT" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:l-WQSVBf_lygPR0J" -m multiport --dports 6667 -j ACCEPT
COMMIT

# Completed on Mon May 20 14:52:16 2019

# Generated by iptables-save v1.6.1 on Mon May 20 14:52:16 2019

*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [4:240]
:POSTROUTING ACCEPT [4:240]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-63SZPG4CB4FJFT57 - [0:0]
:KUBE-SEP-AA5KTTUGCZE2ODCP - [0:0]
:KUBE-SEP-ELNRRCZ4DGAHBKIH - [0:0]
:KUBE-SEP-GFK47UEBQ4KIFTUO - [0:0]
:KUBE-SEP-KVE6UNUIZJZTRM6R - [0:0]
:KUBE-SEP-NF7D7FVI4HVHFRDD - [0:0]
:KUBE-SEP-P2MRGZHSR76DRP5G - [0:0]
:KUBE-SEP-WRGDRKEF33SRKGKD - [0:0]
:KUBE-SEP-XWE3UGWRWEFLEMNO - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-O2NILAKD36YRUY3I - [0:0]
:KUBE-SVC-SYQ6P3J57XR6MMCQ - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-TPZGODMZWK2K55MJ - [0:0]
:KUBE-SVC-XGLOHA7QRQ3V22RZ - [0:0]
:KUBE-SVC-ZJSZPVE7SAWNCJAV - [0:0]
:cali-OUTPUT - [0:0]
:cali-POSTROUTING - [0:0]
:cali-PREROUTING - [0:0]
:cali-fip-dnat - [0:0]
:cali-fip-snat - [0:0]
:cali-nat-outgoing - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "cali:O3lYWMrLQYEMJtB5" -j cali-POSTROUTING
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p udp -m comment --comment "office-test/uhura-service:uhura-service" -m udp --dport 32196 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p udp -m comment --comment "office-test/uhura-service:uhura-service" -m udp --dport 32196 -j KUBE-SVC-ZJSZPVE7SAWNCJAV
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-63SZPG4CB4FJFT57 -s 10.233.89.131/32 -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-MARK-MASQ
-A KUBE-SEP-63SZPG4CB4FJFT57 -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp -j DNAT --to-destination 10.233.89.131:8443
-A KUBE-SEP-AA5KTTUGCZE2ODCP -s 10.233.89.166/32 -m comment --comment "office-test/kafka:kafka" -j KUBE-MARK-MASQ
-A KUBE-SEP-AA5KTTUGCZE2ODCP -p tcp -m comment --comment "office-test/kafka:kafka" -m tcp -j DNAT --to-destination 10.233.89.166:9092
-A KUBE-SEP-ELNRRCZ4DGAHBKIH -s 192.168.12.52/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-ELNRRCZ4DGAHBKIH -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.12.52:6443
-A KUBE-SEP-GFK47UEBQ4KIFTUO -s 10.233.89.129/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-GFK47UEBQ4KIFTUO -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.233.89.129:10055
-A KUBE-SEP-KVE6UNUIZJZTRM6R -s 10.233.89.129/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-KVE6UNUIZJZTRM6R -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.233.89.129:53
-A KUBE-SEP-NF7D7FVI4HVHFRDD -s 10.233.89.183/32 -m comment --comment "office-test/ssm:" -j KUBE-MARK-MASQ
-A KUBE-SEP-NF7D7FVI4HVHFRDD -p tcp -m comment --comment "office-test/ssm:" -m tcp -j DNAT --to-destination 10.233.89.183:6000
-A KUBE-SEP-P2MRGZHSR76DRP5G -s 10.233.89.129/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-P2MRGZHSR76DRP5G -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.233.89.129:53
-A KUBE-SEP-WRGDRKEF33SRKGKD -s 10.233.89.166/32 -m comment --comment "office-test/kafka:zookeeper" -j KUBE-MARK-MASQ
-A KUBE-SEP-WRGDRKEF33SRKGKD -p tcp -m comment --comment "office-test/kafka:zookeeper" -m tcp -j DNAT --to-destination 10.233.89.166:2181
-A KUBE-SEP-XWE3UGWRWEFLEMNO -s 10.233.89.184/32 -m comment --comment "office-test/uhura-service:uhura-service" -j KUBE-MARK-MASQ
-A KUBE-SEP-XWE3UGWRWEFLEMNO -p udp -m comment --comment "office-test/uhura-service:uhura-service" -m udp -j DNAT --to-destination 10.233.89.184:31001
-A KUBE-SERVICES ! -s 10.233.64.0/18 -d 10.233.18.25/32 -p udp -m comment --comment "office-test/uhura-service:uhura-service cluster IP" -m udp --dport 31001 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.233.18.25/32 -p udp -m comment --comment "office-test/uhura-service:uhura-service cluster IP" -m udp --dport 31001 -j KUBE-SVC-ZJSZPVE7SAWNCJAV
-A KUBE-SERVICES ! -s 10.233.64.0/18 -d 10.233.52.123/32 -p tcp -m comment --comment "office-test/kafka:kafka cluster IP" -m tcp --dport 9092 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.233.52.123/32 -p tcp -m comment --comment "office-test/kafka:kafka cluster IP" -m tcp --dport 9092 -j KUBE-SVC-SYQ6P3J57XR6MMCQ
-A KUBE-SERVICES ! -s 10.233.64.0/18 -d 10.233.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.233.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.233.64.0/18 -d 10.233.0.3/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.233.0.3/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.233.64.0/18 -d 10.233.62.27/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.233.62.27/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 443 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-SERVICES ! -s 10.233.64.0/18 -d 10.233.42.123/32 -p tcp -m comment --comment "office-test/ssm: cluster IP" -m tcp --dport 6000 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.233.42.123/32 -p tcp -m comment --comment "office-test/ssm: cluster IP" -m tcp --dport 6000 -j KUBE-SVC-TPZGODMZWK2K55MJ
-A KUBE-SERVICES ! -s 10.233.64.0/18 -d 10.233.0.3/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.233.0.3/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.233.64.0/18 -d 10.233.0.3/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 10055 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.233.0.3/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 10055 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES ! -s 10.233.64.0/18 -d 10.233.52.123/32 -p tcp -m comment --comment "office-test/kafka:zookeeper cluster IP" -m tcp --dport 2181 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.233.52.123/32 -p tcp -m comment --comment "office-test/kafka:zookeeper cluster IP" -m tcp --dport 2181 -j KUBE-SVC-O2NILAKD36YRUY3I
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-P2MRGZHSR76DRP5G
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-SEP-GFK47UEBQ4KIFTUO
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-ELNRRCZ4DGAHBKIH
-A KUBE-SVC-O2NILAKD36YRUY3I -m comment --comment "office-test/kafka:zookeeper" -j KUBE-SEP-WRGDRKEF33SRKGKD
-A KUBE-SVC-SYQ6P3J57XR6MMCQ -m comment --comment "office-test/kafka:kafka" -j KUBE-SEP-AA5KTTUGCZE2ODCP
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-KVE6UNUIZJZTRM6R
-A KUBE-SVC-TPZGODMZWK2K55MJ -m comment --comment "office-test/ssm:" -j KUBE-SEP-NF7D7FVI4HVHFRDD
-A KUBE-SVC-XGLOHA7QRQ3V22RZ -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-SEP-63SZPG4CB4FJFT57
-A KUBE-SVC-ZJSZPVE7SAWNCJAV -m comment --comment "office-test/uhura-service:uhura-service" -j KUBE-SEP-XWE3UGWRWEFLEMNO
-A cali-OUTPUT -m comment --comment "cali:GBTAv2p5CwevEyJm" -j cali-fip-dnat
-A cali-POSTROUTING -m comment --comment "cali:Z-c7XtVd2Bq7s_hA" -j cali-fip-snat
-A cali-POSTROUTING -m comment --comment "cali:nYKhEzDlr11Jccal" -j cali-nat-outgoing
-A cali-POSTROUTING -o tunl0 -m comment --comment "cali:SXWvdsbh4Mw7wOln" -m addrtype ! --src-type LOCAL --limit-iface-out -m addrtype --src-type LOCAL -j MASQUERADE
-A cali-PREROUTING -m comment --comment "cali:r6XmIziWUJsdOK6Z" -j cali-fip-dnat
-A cali-nat-outgoing -m comment --comment "cali:flqWnvo8yq4ULQLa" -m set --match-set cali40masq-ipam-pools src -m set ! --match-set cali40all-ipam-pools dst -j MASQUERADE
COMMIT

# Completed on Mon May 20 14:52:16 2019

# Generated by iptables-save v1.6.1 on Mon May 20 14:52:16 2019

*filter
:INPUT ACCEPT [2169:529022]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2269:571858]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
:cali-FORWARD - [0:0]
:cali-INPUT - [0:0]
:cali-OUTPUT - [0:0]
:cali-failsafe-in - [0:0]
:cali-failsafe-out - [0:0]
:cali-from-hep-forward - [0:0]
:cali-from-host-endpoint - [0:0]
:cali-from-wl-dispatch - [0:0]
:cali-from-wl-dispatch-4 - [0:0]
:cali-fw-cali40230d8b50d - [0:0]
:cali-fw-cali4298ededa0f - [0:0]
:cali-fw-cali5f87d3be0cd - [0:0]
:cali-fw-cali61be66a3ac1 - [0:0]
:cali-fw-cali8c512c7c978 - [0:0]
:cali-fw-cali9755b38a74d - [0:0]
:cali-fw-calia4d38832afe - [0:0]
:cali-fw-calid87ccb441dd - [0:0]
:cali-fw-calie970167072b - [0:0]
:cali-pri-_DG_51VMe74aoc83nre - [0:0]
:cali-pri-_LWv94PMvYKzzY3dhTl - [0:0]
:cali-pri-_be-1GnaHI4zA9ZiNqb - [0:0]
:cali-pri-_cHTcjx4Xi7rghi4C9T - [0:0]
:cali-pri-kns.kube-system - [0:0]
:cali-pri-kns.office-test - [0:0]
:cali-pro-_DG_51VMe74aoc83nre - [0:0]
:cali-pro-_LWv94PMvYKzzY3dhTl - [0:0]
:cali-pro-_be-1GnaHI4zA9ZiNqb - [0:0]
:cali-pro-_cHTcjx4Xi7rghi4C9T - [0:0]
:cali-pro-kns.kube-system - [0:0]
:cali-pro-kns.office-test - [0:0]
:cali-to-hep-forward - [0:0]
:cali-to-host-endpoint - [0:0]
:cali-to-wl-dispatch - [0:0]
:cali-to-wl-dispatch-4 - [0:0]
:cali-tw-cali40230d8b50d - [0:0]
:cali-tw-cali4298ededa0f - [0:0]
:cali-tw-cali5f87d3be0cd - [0:0]
:cali-tw-cali61be66a3ac1 - [0:0]
:cali-tw-cali8c512c7c978 - [0:0]
:cali-tw-cali9755b38a74d - [0:0]
:cali-tw-calia4d38832afe - [0:0]
:cali-tw-calid87ccb441dd - [0:0]
:cali-tw-calie970167072b - [0:0]
:cali-wl-to-host - [0:0]
-A INPUT -m comment --comment "cali:Cz_u1IQiXIMmKD4c" -j cali-INPUT
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "cali:wUHhoiAYhphO9Mso" -j cali-FORWARD
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.233.64.0/18 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.233.64.0/18 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-FORWARD -m comment --comment "cali:vjrMJCRpqwy5oRoX" -j MARK --set-xmark 0x0/0xe0000
-A cali-FORWARD -m comment --comment "cali:A_sPAO0mcxbT9mOV" -m mark --mark 0x0/0x10000 -j cali-from-hep-forward
-A cali-FORWARD -i cali+ -m comment --comment "cali:8ZoYfO5HKXWbB3pk" -j cali-from-wl-dispatch
-A cali-FORWARD -o cali+ -m comment --comment "cali:jdEuaPBe14V2hutn" -j cali-to-wl-dispatch
-A cali-FORWARD -m comment --comment "cali:12bc6HljsMKsmfr-" -j cali-to-hep-forward
-A cali-FORWARD -m comment --comment "cali:MH9kMp5aNICL-Olv" -m comment --comment "Policy explicitly accepted packet." -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-INPUT -p ipencap -m comment --comment "cali:PajejrV4aFdkZojI" -m comment --comment "Allow IPIP packets from Calico hosts" -m set --match-set cali40all-hosts-net src -m addrtype --dst-type LOCAL -j ACCEPT
-A cali-INPUT -p ipencap -m comment --comment "cali:_wjq-Yrma8Ly1Svo" -m comment --comment "Drop IPIP packets from non-Calico hosts" -j DROP
-A cali-INPUT -i cali+ -m comment --comment "cali:8TZGxLWhEiz66wc" -g cali-wl-to-host
-A cali-INPUT -m comment --comment "cali:6McIeIDvPdL6PE1T" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-INPUT -m comment --comment "cali:YGPbrUms7NId8xVa" -j MARK --set-xmark 0x0/0xf0000
-A cali-INPUT -m comment --comment "cali:2gmY7Bg2i0i84Wk" -j cali-from-host-endpoint
-A cali-INPUT -m comment --comment "cali:q-Vz2ZT9iGE331LL" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-OUTPUT -m comment --comment "cali:Mq1_rAdXXH3YkrzW" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-OUTPUT -o cali+ -m comment --comment "cali:69FkRTJDvD5Vu6Vl" -j RETURN
-A cali-OUTPUT -p ipencap -m comment --comment "cali:AnEsmO6bDZbQntWW" -m comment --comment "Allow IPIP packets to other Calico hosts" -m set --match-set cali40all-hosts-net dst -m addrtype --src-type LOCAL -j ACCEPT
-A cali-OUTPUT -m comment --comment "cali:9e9Uf3GU5tX--Lxy" -j MARK --set-xmark 0x0/0xf0000
-A cali-OUTPUT -m comment --comment "cali:OB2pzPrvQn6PC89t" -j cali-to-host-endpoint
-A cali-OUTPUT -m comment --comment "cali:tvSSMDBWrme3CUqM" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:QOO5NUOqOSS1_Iw0" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:cwZWoBSwVeIAZmVN" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:7FbNXT91kugE_upR" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:ywE9WYUBEpve70WT" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:l-WQSVBf_lygPR0J" -m multiport --dports 6667 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:82hjfji-wChFhAqL" -m multiport --dports 53 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:TNM3RfEjbNr72hgH" -m multiport --dports 67 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:ycxKitIl4u3dK0HR" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:hxjEWyxdkXXkdvut" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:cA_GLtruuvG88KiO" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Sb1hkLYFMrKS6r01" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:UwLSebGONJUG4yG-" -m multiport --dports 6667 -j ACCEPT
-A cali-from-wl-dispatch -i cali4+ -m comment --comment "cali:2b7OEVWxujqPNMC9" -g cali-from-wl-dispatch-4
-A cali-from-wl-dispatch -i cali5f87d3be0cd -m comment --comment "cali:5qEjQczd8nR5zawe" -g cali-fw-cali5f87d3be0cd
-A cali-from-wl-dispatch -i cali61be66a3ac1 -m comment --comment "cali:JDqywCzBn43DTXm7" -g cali-fw-cali61be66a3ac1
-A cali-from-wl-dispatch -i cali8c512c7c978 -m comment --comment "cali:7L3GH4X-SwpFShRz" -g cali-fw-cali8c512c7c978
-A cali-from-wl-dispatch -i cali9755b38a74d -m comment --comment "cali:yOcqr3lgN71-eAMK" -g cali-fw-cali9755b38a74d
-A cali-from-wl-dispatch -i calia4d38832afe -m comment --comment "cali:3NWrD5qYeRFy9GCI" -g cali-fw-calia4d38832afe
-A cali-from-wl-dispatch -i calid87ccb441dd -m comment --comment "cali:Nz5QWnNGjA3F0dFv" -g cali-fw-calid87ccb441dd
-A cali-from-wl-dispatch -i calie970167072b -m comment --comment "cali:wuxlAnppHgmGAcRc" -g cali-fw-calie970167072b
-A cali-from-wl-dispatch -m comment --comment "cali:OaKdnQ2aoWybnXo1" -m comment --comment "Unknown interface" -j DROP
-A cali-from-wl-dispatch-4 -i cali40230d8b50d -m comment --comment "cali:tIqUkL3j9nssH04I" -g cali-fw-cali40230d8b50d
-A cali-from-wl-dispatch-4 -i cali4298ededa0f -m comment --comment "cali:rGj1vymyoxgLPDtc" -g cali-fw-cali4298ededa0f
-A cali-from-wl-dispatch-4 -m comment --comment "cali:y6fkdk1y_LznfsVP" -m comment --comment "Unknown interface" -j DROP
-A cali-fw-cali40230d8b50d -m comment --comment "cali:haQlf5nEt88Wt0DT" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-cali40230d8b50d -m comment --comment "cali:ixccm-vTGjn_b9yE" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-cali40230d8b50d -m comment --comment "cali:P_fheGSIaXYtiEkM" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-cali40230d8b50d -m comment --comment "cali:qfpkS-xh5dBrEuQ3" -j cali-pro-kns.office-test
-A cali-fw-cali40230d8b50d -m comment --comment "cali:y83dgA2Xcx92-mD4" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali40230d8b50d -m comment --comment "cali:uQqC_xLS1UahOoIe" -j cali-pro-_DG_51VMe74aoc83nre
-A cali-fw-cali40230d8b50d -m comment --comment "cali:7t1QK9Z7fLZbeDg7" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali40230d8b50d -m comment --comment "cali:kuKsLi3LRkmdFU52" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-fw-cali4298ededa0f -m comment --comment "cali:sZ_7mJmKTKSunuIi" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-cali4298ededa0f -m comment --comment "cali:KYZGl1ZC5X9BFUPb" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-cali4298ededa0f -m comment --comment "cali:qzD4MKf2yHQ2Dv-a" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-cali4298ededa0f -m comment --comment "cali:IiSrkc1m6OBOB9WG" -j cali-pro-kns.kube-system
-A cali-fw-cali4298ededa0f -m comment --comment "cali:l7rrSLvI_vFXQRAc" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali4298ededa0f -m comment --comment "cali:9FEz4skoJTVb6Goj" -j cali-pro-_LWv94PMvYKzzY3dhTl
-A cali-fw-cali4298ededa0f -m comment --comment "cali:nOzkKYfK3kCL0q8m" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali4298ededa0f -m comment --comment "cali:KPDsd-Df58x1L5b5" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-fw-cali5f87d3be0cd -m comment --comment "cali:_rZOAP3N9PG3Rwyg" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-cali5f87d3be0cd -m comment --comment "cali:_0ULoWjJorqidW4Y" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-cali5f87d3be0cd -m comment --comment "cali:TrcZyNLpeStlKxZe" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-cali5f87d3be0cd -m comment --comment "cali:xnngkZji1apvjPxF" -j cali-pro-kns.kube-system
-A cali-fw-cali5f87d3be0cd -m comment --comment "cali:RZGnjZewr44TT3iy" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali5f87d3be0cd -m comment --comment "cali:lmyAicvgbAPgWExI" -j cali-pro-_be-1GnaHI4zA9ZiNqb
-A cali-fw-cali5f87d3be0cd -m comment --comment "cali:vZ1ZoBkANoHkap8" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali5f87d3be0cd -m comment --comment "cali:RCcZibu867DuYv7" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-fw-cali61be66a3ac1 -m comment --comment "cali:83jv6igm7v1sA--O" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-cali61be66a3ac1 -m comment --comment "cali:SSTAiUAF6wlTYIDm" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-cali61be66a3ac1 -m comment --comment "cali:OZJ6CT_Yo4eP9sJ7" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-cali61be66a3ac1 -m comment --comment "cali:yUpS607_WI2YYB_T" -j cali-pro-kns.office-test
-A cali-fw-cali61be66a3ac1 -m comment --comment "cali:4juXzYuZwyrnZGZA" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali61be66a3ac1 -m comment --comment "cali:2N8GyyvC6ygz60-O" -j cali-pro-_DG_51VMe74aoc83nre
-A cali-fw-cali61be66a3ac1 -m comment --comment "cali:bcbafDaaeGUl0d48" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali61be66a3ac1 -m comment --comment "cali:iaSqjBVHY3ZnAIEg" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-fw-cali8c512c7c978 -m comment --comment "cali:cojzgOrHlDwV044b" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-cali8c512c7c978 -m comment --comment "cali:_DnVO2Na57_TXiHh" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-cali8c512c7c978 -m comment --comment "cali:fL7Tct1UbbJOKKaL" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-cali8c512c7c978 -m comment --comment "cali:6krXbbFz0b_2AzMn" -j cali-pro-kns.kube-system
-A cali-fw-cali8c512c7c978 -m comment --comment "cali:msVvdtMUiHulE-rf" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali8c512c7c978 -m comment --comment "cali:44kukaQr9IvMMj6J" -j cali-pro-_cHTcjx4Xi7rghi4C9T
-A cali-fw-cali8c512c7c978 -m comment --comment "cali:KNjjpCJDEldkCFR3" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali8c512c7c978 -m comment --comment "cali:ovs1qldlRJbxnnxU" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-fw-cali9755b38a74d -m comment --comment "cali:1_qqmSHUxm6jrFEj" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-cali9755b38a74d -m comment --comment "cali:g21aU6J3rfnCgFd5" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-cali9755b38a74d -m comment --comment "cali:I8MkT6FHRF0j2T4n" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-cali9755b38a74d -m comment --comment "cali:ZXP8ryZkbBD7TLaU" -j cali-pro-kns.office-test
-A cali-fw-cali9755b38a74d -m comment --comment "cali:N5b-sG5mG_1kFyDj" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali9755b38a74d -m comment --comment "cali:zqJdcv8kFOiTeDl0" -j cali-pro-_DG_51VMe74aoc83nre
-A cali-fw-cali9755b38a74d -m comment --comment "cali:WakNiZf71Ri2gOMq" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali9755b38a74d -m comment --comment "cali:U6yS9Z2G6iTKi-S7" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-fw-calia4d38832afe -m comment --comment "cali:e8OTYpnCYdkNf0gh" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-calia4d38832afe -m comment --comment "cali:wYsgRmokKLfAZNdb" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-calia4d38832afe -m comment --comment "cali:DbXg3hZivMoC9BnQ" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-calia4d38832afe -m comment --comment "cali:a54gzT-ALLHApvx9" -j cali-pro-kns.office-test
-A cali-fw-calia4d38832afe -m comment --comment "cali:Cgsd6d8UH7cKGkGs" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-calia4d38832afe -m comment --comment "cali:bCwz1Xp-wfY6DHwu" -j cali-pro-_DG_51VMe74aoc83nre
-A cali-fw-calia4d38832afe -m comment --comment "cali:IMfNIoZaHL2Emv8h" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-calia4d38832afe -m comment --comment "cali:rY9IucFTYU84ho49" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-fw-calid87ccb441dd -m comment --comment "cali:isDvgWlAm7AftmUF" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-calid87ccb441dd -m comment --comment "cali:hW8bf2Jex3tICp4R" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-calid87ccb441dd -m comment --comment "cali:Q8uW5gg9SFiGmde7" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-calid87ccb441dd -m comment --comment "cali:IQbzBubNeTliZ1sj" -j cali-pro-kns.office-test
-A cali-fw-calid87ccb441dd -m comment --comment "cali:qOzXnUaiXKolTXtb" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-calid87ccb441dd -m comment --comment "cali:iWvLsIoUDtFQS0X" -j
cali-pro-_DG_51VMe74aoc83nre -A cali-fw-calid87ccb441dd -m comment --comment "cali:9fkJrNI4upLMGIFW" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-fw-calid87ccb441dd -m comment --comment "cali:fGf7_C-IjHCwI7ak" -m comment --comment "Drop if no profiles matched" -j DROP -A cali-fw-calie970167072b -m comment --comment "cali:XtozC30MBH96drcb" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A cali-fw-calie970167072b -m comment --comment "cali:6ACj5WQYixKeSItK" -m conntrack --ctstate INVALID -j DROP -A cali-fw-calie970167072b -m comment --comment "cali:xBd0l7ggX11bBuMS" -j MARK --set-xmark 0x0/0x10000 -A cali-fw-calie970167072b -m comment --comment "cali:JWJ9TXvA0zDndZQw" -j cali-pro-kns.office-test -A cali-fw-calie970167072b -m comment --comment "cali:MDqqN-fRVqEhAKVq" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-fw-calie970167072b -m comment --comment "cali:DZMJxAydqM2e208D" -j cali-pro-_DG_51VMe74aoc83nre -A cali-fw-calie970167072b -m comment --comment "cali:PrpF-mt2B3fpwOe-" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-fw-calie970167072b -m comment --comment "cali:H6ZoAh4fBc6B6Ygh" -m comment --comment "Drop if no profiles matched" -j DROP -A cali-pri-kns.kube-system -m comment --comment "cali:zoH5gU6U55FKZxEo" -j MARK --set-xmark 0x10000/0x10000 -A cali-pri-kns.kube-system -m comment --comment "cali:bcGRIJcyOS9dgBiB" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-pri-kns.office-test -m comment --comment "cali:rAmH434l42GkXYbA" -j MARK --set-xmark 0x10000/0x10000 -A cali-pri-kns.office-test -m comment --comment "cali:_XMe7_NBGIzGxGBn" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-pro-kns.kube-system -m comment --comment "cali:-50oJuMfLVO3LkBk" -j MARK --set-xmark 0x10000/0x10000 -A cali-pro-kns.kube-system -m comment --comment "cali:ztVPKv1UYejNzm1g" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-pro-kns.office-test -m comment --comment "cali:GIrkgWZCx0oThJwX" -j MARK --set-xmark 0x10000/0x10000 -A cali-pro-kns.office-test -m comment --comment "cali:fEjLirQARksfa1Wn" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-to-wl-dispatch -o cali4+ -m comment --comment "cali:OvZFoUJ0shat6y2L" -g cali-to-wl-dispatch-4 -A cali-to-wl-dispatch -o cali5f87d3be0cd -m comment --comment "cali:N8jAFjK5YGNrw3EQ" -g cali-tw-cali5f87d3be0cd -A cali-to-wl-dispatch -o cali61be66a3ac1 -m comment --comment "cali:847gKlauoPfVkIzy" -g cali-tw-cali61be66a3ac1 -A cali-to-wl-dispatch -o cali8c512c7c978 -m comment --comment "cali:EabqigwtWoLhns4O" -g cali-tw-cali8c512c7c978 -A cali-to-wl-dispatch -o cali9755b38a74d -m comment --comment "cali:tlvSCWyrlodKl0iJ" -g cali-tw-cali9755b38a74d -A cali-to-wl-dispatch -o calia4d38832afe -m comment --comment "cali:bNb8iYjF7g3xqbvb" -g cali-tw-calia4d38832afe -A cali-to-wl-dispatch -o calid87ccb441dd -m comment --comment "cali:q6loTZpETxOBzM0e" -g cali-tw-calid87ccb441dd -A cali-to-wl-dispatch -o calie970167072b -m comment --comment "cali:oi_U6jHUs8_o4jVH" -g cali-tw-calie970167072b -A cali-to-wl-dispatch -m comment --comment "cali:tYDn1KDw-f6o7e0V" -m comment --comment "Unknown interface" -j DROP -A cali-to-wl-dispatch-4 -o cali40230d8b50d -m comment --comment "cali:BS4kw4JhIeX7upE9" -g cali-tw-cali40230d8b50d -A cali-to-wl-dispatch-4 -o cali4298ededa0f -m comment --comment "cali:GvlVA4TN8Rj4AC3a" -g cali-tw-cali4298ededa0f -A cali-to-wl-dispatch-4 -m comment --comment "cali:r3OuDX_UMdaH_c3T" -m comment 
--comment "Unknown interface" -j DROP -A cali-tw-cali40230d8b50d -m comment --comment "cali:PVW4qQHzmpc-3mmT" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A cali-tw-cali40230d8b50d -m comment --comment "cali:uONHeRTE-mXdKIbl" -m conntrack --ctstate INVALID -j DROP -A cali-tw-cali40230d8b50d -m comment --comment "cali:3uUjnjEZsLutraf7" -j MARK --set-xmark 0x0/0x10000 -A cali-tw-cali40230d8b50d -m comment --comment "cali:--teSj52j0ayMnps" -j cali-pri-kns.office-test -A cali-tw-cali40230d8b50d -m comment --comment "cali:EFKDtGmI3f_ntFTI" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-cali40230d8b50d -m comment --comment "cali:SOSY2VB7JdItHi3w" -j cali-pri-_DG_51VMe74aoc83nre -A cali-tw-cali40230d8b50d -m comment --comment "cali:RnSrK0r7B70_iAYt" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-cali40230d8b50d -m comment --comment "cali:ab7bqEQE5Hrg7A70" -m comment --comment "Drop if no profiles matched" -j DROP -A cali-tw-cali4298ededa0f -m comment --comment "cali:XwT2GF7UPyPlFTOK" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A cali-tw-cali4298ededa0f -m comment --comment "cali:FBEFyHT-M8IE82ln" -m conntrack --ctstate INVALID -j DROP -A cali-tw-cali4298ededa0f -m comment --comment "cali:FW2Ozy4H-kKmG5Rl" -j MARK --set-xmark 0x0/0x10000 -A cali-tw-cali4298ededa0f -m comment --comment "cali:gLjYqegntBS7fDvg" -j cali-pri-kns.kube-system -A cali-tw-cali4298ededa0f -m comment --comment "cali:YbbcebzK_gNtxxS-" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-cali4298ededa0f -m comment --comment "cali:-3DFm0J-tug0Q9xI" -j cali-pri-_LWv94PMvYKzzY3dhTl -A cali-tw-cali4298ededa0f -m comment --comment "cali:UgOvKIwDSH0g-Wgt" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-cali4298ededa0f -m comment --comment "cali:gGwBRtYzZDfEoq4o" -m comment --comment "Drop if no profiles matched" -j DROP -A cali-tw-cali5f87d3be0cd -m comment --comment "cali:OvSjD1w5jG-rWIdL" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A cali-tw-cali5f87d3be0cd -m comment --comment "cali:wlEGC3PsTb2IlJTv" -m conntrack --ctstate INVALID -j DROP -A cali-tw-cali5f87d3be0cd -m comment --comment "cali:vSRw7PA0biIutwnn" -j MARK --set-xmark 0x0/0x10000 -A cali-tw-cali5f87d3be0cd -m comment --comment "cali:xQ575CTiwJZqxqWh" -j cali-pri-kns.kube-system -A cali-tw-cali5f87d3be0cd -m comment --comment "cali:YJF9t3PSAZK_J9ys" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-cali5f87d3be0cd -m comment --comment "cali:8CUSwcDGN_lO-Fwv" -j cali-pri-_be-1GnaHI4zA9ZiNqb -A cali-tw-cali5f87d3be0cd -m comment --comment "cali:KTIx4qEmL54S9ZxG" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-cali5f87d3be0cd -m comment --comment "cali:1TjJH1uiSCIyb_Hp" -m comment --comment "Drop if no profiles matched" -j DROP -A cali-tw-cali61be66a3ac1 -m comment --comment "cali:2HGFvjeeRsrcUV2h" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A cali-tw-cali61be66a3ac1 -m comment --comment "cali:SzKz_hbcfH80rIhd" -m conntrack --ctstate INVALID -j DROP -A cali-tw-cali61be66a3ac1 -m comment --comment "cali:B99fFB17AqhPUhhb" -j MARK --set-xmark 0x0/0x10000 -A cali-tw-cali61be66a3ac1 -m comment --comment "cali:gkzpJptTW2d4EGZH" -j cali-pri-kns.office-test -A cali-tw-cali61be66a3ac1 -m comment --comment "cali:H5xDj7qCBhkvUS3V" 
-m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-cali61be66a3ac1 -m comment --comment "cali:Yc5gdRQDwkm8tA16" -j cali-pri-_DG_51VMe74aoc83nre -A cali-tw-cali61be66a3ac1 -m comment --comment "cali:l7v9HigeKp_Nq7d5" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-cali61be66a3ac1 -m comment --comment "cali:GU72yuQaaPXWZ8Vj" -m comment --comment "Drop if no profiles matched" -j DROP -A cali-tw-cali8c512c7c978 -m comment --comment "cali:wqjWGFPIsECIyDYe" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A cali-tw-cali8c512c7c978 -m comment --comment "cali:bqXXxmKQQGGiQ6uM" -m conntrack --ctstate INVALID -j DROP -A cali-tw-cali8c512c7c978 -m comment --comment "cali:ObbaNA1O9E5FTeb2" -j MARK --set-xmark 0x0/0x10000 -A cali-tw-cali8c512c7c978 -m comment --comment "cali:SQ19zHndjdJglSZe" -j cali-pri-kns.kube-system -A cali-tw-cali8c512c7c978 -m comment --comment "cali:v7gqeinCtv9p1QDh" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-cali8c512c7c978 -m comment --comment "cali:BJl12g1DtbnNOkkv" -j cali-pri-_cHTcjx4Xi7rghi4C9T -A cali-tw-cali8c512c7c978 -m comment --comment "cali:0aIcQKipA6lin7wu" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-cali8c512c7c978 -m comment --comment "cali:QEV1G3x_6x8cnDge" -m comment --comment "Drop if no profiles matched" -j DROP -A cali-tw-cali9755b38a74d -m comment --comment "cali:1PbEWoa_yaonEY8D" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A cali-tw-cali9755b38a74d -m comment --comment "cali:5ETtasPpRxU1Fsv4" -m conntrack --ctstate INVALID -j DROP -A cali-tw-cali9755b38a74d -m comment --comment "cali:W_haYFY3581PlVuv" -j MARK --set-xmark 0x0/0x10000 -A cali-tw-cali9755b38a74d -m comment --comment "cali:1EsEe93LfCqvJ9sL" -j cali-pri-kns.office-test -A cali-tw-cali9755b38a74d -m comment --comment "cali:6uYxFDhFol1kl8lb" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-cali9755b38a74d -m comment --comment "cali:x-eYlaf9fi5DaCYh" -j cali-pri-_DG_51VMe74aoc83nre -A cali-tw-cali9755b38a74d -m comment --comment "cali:bBEBbUsjDyP8kfjb" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-cali9755b38a74d -m comment --comment "cali:zjhNmG1LmO0Yrx68" -m comment --comment "Drop if no profiles matched" -j DROP -A cali-tw-calia4d38832afe -m comment --comment "cali:SSqfUL1nctOo7IXU" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A cali-tw-calia4d38832afe -m comment --comment "cali:dGE5JLfVgiRGWwEA" -m conntrack --ctstate INVALID -j DROP -A cali-tw-calia4d38832afe -m comment --comment "cali:uiEcOPnzIv_hnS6U" -j MARK --set-xmark 0x0/0x10000 -A cali-tw-calia4d38832afe -m comment --comment "cali:vkSSAArOONwnbAR4" -j cali-pri-kns.office-test -A cali-tw-calia4d38832afe -m comment --comment "cali:6nsUONIL2zjJZLkV" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-calia4d38832afe -m comment --comment "cali:FzgVi3L1cPJwrg38" -j cali-pri-_DG_51VMe74aoc83nre -A cali-tw-calia4d38832afe -m comment --comment "cali:zUCM4mzgkiKDTrvd" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-calia4d38832afe -m comment --comment "cali:fagzpBDf1kePl1xv" -m comment --comment "Drop if no profiles matched" -j DROP -A cali-tw-calid87ccb441dd -m comment --comment 
"cali:CHXQkV1fTjG2r70J" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A cali-tw-calid87ccb441dd -m comment --comment "cali:u1g1zxxeTQQUSoN7" -m conntrack --ctstate INVALID -j DROP -A cali-tw-calid87ccb441dd -m comment --comment "cali:fNvLdpDf5_txOAD-" -j MARK --set-xmark 0x0/0x10000 -A cali-tw-calid87ccb441dd -m comment --comment "cali:_RFiM91BFQfPd7S3" -j cali-pri-kns.office-test -A cali-tw-calid87ccb441dd -m comment --comment "cali:eHMMJCNpIhcib53N" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-calid87ccb441dd -m comment --comment "cali:Yu_YBcpp_IP8Xd4L" -j cali-pri-_DG_51VMe74aoc83nre -A cali-tw-calid87ccb441dd -m comment --comment "cali:mc2tKPZ3PwV2ZaTl" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-calid87ccb441dd -m comment --comment "cali:fz1RdKZeIh0kDOG5" -m comment --comment "Drop if no profiles matched" -j DROP -A cali-tw-calie970167072b -m comment --comment "cali:mvaPZG7R4Wemn3bO" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A cali-tw-calie970167072b -m comment --comment "cali:q8DQqWUnsiMhF3C1" -m conntrack --ctstate INVALID -j DROP -A cali-tw-calie970167072b -m comment --comment "cali:ylIvKPqLYorCGYyt" -j MARK --set-xmark 0x0/0x10000 -A cali-tw-calie970167072b -m comment --comment "cali:ulCovM6woozZM9dH" -j cali-pri-kns.office-test -A cali-tw-calie970167072b -m comment --comment "cali:ihmtw1pBVcFGXgxp" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-calie970167072b -m comment --comment "cali:Xx0cFK5UNDMJsTgN" -j cali-pri-_DG_51VMe74aoc83nre -A cali-tw-calie970167072b -m comment --comment "cali:3tmE3hrcaXnaIP07" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN -A cali-tw-calie970167072b -m comment --comment "cali:pdbg_6YV0ZCunwQP" -m comment --comment "Drop if no profiles matched" -j DROP -A cali-wl-to-host -m comment --comment "cali:Ee9Sbo10IpVujdIY" -j cali-from-wl-dispatch -A cali-wl-to-host -m comment --comment "cali:sO1YJiY1b553biDi" -m comment --comment "Configured DefaultEndpointToHostAction" -j RETURN COMMIT

# Completed on Mon May 20 14:52:16 2019

zacekjakub commented 5 years ago

Updating to iptables v1.6.2 did not help us either.

athenabot commented 5 years ago

@dcbw If this issue has been triaged, please comment /remove-triage unresolved.

If you aren't able to handle this issue, consider unassigning yourself and/or adding the help-wanted label.

🤖 I am a bot run by vllry. 👩‍🔬

vinayitp commented 5 years ago

I am having a similar issue; did anyone find the root cause and a solution? Thanks.

0verc1ocker commented 5 years ago

@vinayitp, for me it was two CNIs being deployed side-by-side on the same cluster and having IP allocation issues. It was pretty specific to my environment.
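
If you suspect the same thing, a quick check for leftover or duplicate CNI configurations on a node (the paths below are the common defaults and may differ in your setup):

# more than one config file here can mean two CNIs are fighting over IPAM
ls -l /etc/cni/net.d/
# plugins actually installed on the node
ls /opt/cni/bin/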

gretel commented 5 years ago

Happens to me with recent k3s, too.

vinayitp commented 5 years ago

Some more details:

Working cluster:

I0614 19:02:33.319303 1 endpoints.go:234] Setting endpoints for "kube-system/kube-dns:dns" to [10.244.0.54:53 10.244.0.55:53]
I0614 19:02:33.319313 1 endpoints.go:234] Setting endpoints for "kube-system/kube-dns:dns-tcp" to [10.244.0.54:53 10.244.0.55:53]
I0614 19:02:33.319326 1 config.go:124] Calling handler.OnEndpointsAdd
I0614 19:02:33.319331 1 config.go:124] Calling handler.OnEndpointsAdd
I0614 19:02:33.319337 1 endpoints.go:234] Setting endpoints for "kube-system/metrics-server:" to [10.244.2.146:443]
. . .
I0614 19:02:33.404450 1 shared_informer.go:123] caches populated
I0614 19:02:33.404476 1 controller_utils.go:1034] Caches are synced for service config controller
I0614 19:02:33.404484 1 config.go:210] Calling handler.OnServiceSynced()
I0614 19:02:33.404560 1 proxier.go:642] Not syncing iptables until Services and Endpoints have been received from master
I0614 19:02:33.404572 1 proxier.go:638] syncProxyRules took 33.571µs
I0614 19:02:33.413812 1 shared_informer.go:123] caches populated
I0614 19:02:33.413837 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0614 19:02:33.413844 1 config.go:110] Calling handler.OnEndpointsSynced()
I0614 19:02:33.413915 1 service.go:309] Adding new service port "default/kubernetes:https" at 10.96.0.1:443/TCP
I0614 19:02:33.413934 1 service.go:309] Adding new service port "kube-system/kube-dns:dns-tcp" at 10.96.0.10:53/TCP
I0614 19:02:33.413943 1 service.go:309] Adding new service port "kube-system/kube-dns:dns" at 10.96.0.10:53/UDP
I0614 19:02:33.413952 1 service.go:309] Adding new service port "kube-system/metrics-server:" at 10.110.245.28:443/TCP
I0614 19:02:33.413999 1 proxier.go:656] Stale udp service kube-system/kube-dns:dns -> 10.96.0.10
I0614 19:02:33.414007 1 proxier.go:661] Syncing iptables rules
. . .
I0614 19:02:33.442445 1 iptables.go:391] running iptables-restore [--noflush --counters]
I0614 19:02:33.450207 1 healthcheck.go:235] Not saving endpoints for unknown healthcheck "kube-system/metrics-server"
I0614 19:02:33.450230 1 healthcheck.go:235] Not saving endpoints for unknown healthcheck "default/kubia"
I0614 19:02:33.451959 1 proxier.go:638] syncProxyRules took 38.062055ms
I0614 19:02:33.583831 1 config.go:141] Calling handler.OnEndpointsUpdate

Non-working cluster:

I0614 13:35:33.755047 1 endpoints.go:273] Setting endpoints for "kube-system/kube-dns:dns" to [10.244.0.12:53 10.244.0.13:53]
I0614 13:35:33.755056 1 endpoints.go:273] Setting endpoints for "kube-system/kube-dns:dns-tcp" to [10.244.0.12:53 10.244.0.13:53]
I0614 13:35:33.755066 1 endpoints.go:273] Setting endpoints for "kube-system/kube-dns:metrics" to [10.244.0.12:9153 10.244.0.13:9153]
I0614 13:35:33.755076 1 config.go:124] Calling handler.OnEndpointsAdd

. . .
I0614 13:35:33.851867 1 shared_informer.go:123] caches populated
I0614 13:35:33.851903 1 controller_utils.go:1034] Caches are synced for service config controller
I0614 13:35:33.851912 1 config.go:210] Calling handler.OnServiceSynced()
I0614 13:35:33.852090 1 proxier.go:661] Not syncing iptables until Services and Endpoints have been received from master
I0614 13:35:33.852104 1 proxier.go:657] syncProxyRules took 114.96µs
I0614 13:35:33.851872 1 shared_informer.go:123] caches populated
I0614 13:35:33.852119 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0614 13:35:33.852124 1 config.go:110] Calling handler.OnEndpointsSynced()
I0614 13:35:33.852200 1 service.go:328] Adding new service port "kube-system/kube-dns:dns" at 10.244.240.10:53/UDP
I0614 13:35:33.852218 1 service.go:328] Adding new service port "kube-system/kube-dns:dns-tcp" at 10.244.240.10:53/TCP
I0614 13:35:33.852227 1 service.go:328] Adding new service port "kube-system/kube-dns:metrics" at 10.244.240.10:9153/TCP
I0614 13:35:33.852238 1 service.go:328] Adding new service port "default/kubernetes:https" at 10.244.240.1:443/TCP
I0614 13:35:33.852322 1 proxier.go:675] Stale udp service kube-system/kube-dns:dns -> 10.244.240.10
I0614 13:35:33.852349 1 proxier.go:683] Syncing iptables rules
. . .
I0614 13:35:33.878307 1 iptables.go:391] running iptables-restore [--noflush --counters]
I0614 13:35:33.882819 1 proxier.go:1375] Network programming took 162590.882750 seconds
I0614 13:35:33.882848 1 healthcheck.go:235] Not saving endpoints for unknown healthcheck "kube-system/kube-dns"
I0614 13:35:33.885346 1 proxier.go:657] syncProxyRules took 33.191807ms

The relevant function from healthcheck.go:

func (hcs *server) SyncEndpoints(newEndpoints map[types.NamespacedName]int) error {
    hcs.lock.Lock()
    defer hcs.lock.Unlock()

    for nsn, count := range newEndpoints {
        if hcs.services[nsn] == nil {
            glog.V(3).Infof("Not saving endpoints for unknown healthcheck %q", nsn.String())
            continue
        }
vinayitp commented 5 years ago

Here are some more details:

for nsn, count := range newEndpoints {
    if hcs.services[nsn] == nil {

nsn is kube-system/kube-dns

It seems there is no corresponding service in hcs.services, so the lookup above returns nil and the endpoints are skipped.

Please advise. Thanks.
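
To illustrate the lookup behavior: in Go, reading a missing key from a map whose values are pointers yields nil. A minimal, self-contained sketch (the map contents and names below are hypothetical stand-ins for kube-proxy's internal state):

package main

import "fmt"

// hypothetical stand-in for kube-proxy's healthcheck service record
type healthcheckService struct{ port int }

func main() {
	// only services that were explicitly registered appear here
	services := map[string]*healthcheckService{
		"default/kubia": {port: 31280},
	}
	newEndpoints := map[string]int{
		"kube-system/kube-dns": 2, // never registered above -> lookup yields nil
		"default/kubia":        1,
	}
	for nsn, count := range newEndpoints {
		if services[nsn] == nil {
			// this is the branch that logs "Not saving endpoints for unknown healthcheck"
			fmt.Printf("Not saving endpoints for unknown healthcheck %q\n", nsn)
			continue
		}
		fmt.Printf("saving %d endpoints for %s\n", count, nsn)
	}
}

For what it's worth, as far as I can tell kube-proxy only registers services that carry a health-check node port (e.g. LoadBalancer services with externalTrafficPolicy: Local), so this log line is expected for a plain ClusterIP service like kube-dns and is not by itself an error.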

athenabot commented 5 years ago

@dcbw If this issue has been triaged, please comment /remove-triage unresolved.

If you aren't able to handle this issue, consider unassigning yourself and/or adding the help-wanted label.

🤖 I am a bot run by vllry. 👩‍🔬

dcbw commented 5 years ago

@vinayitp this looks like the problem:

I0614 13:35:33.882819 1 proxier.go:1375] Network programming took 162590.882750 seconds

That is a really long time for iptables-restore to run and not at all expected. What iptables and kernel versions do you have, and how many services/pods are in the cluster?
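
For anyone gathering that information, a quick sketch (assumes shell access to the node and kubectl access to the cluster):

iptables --version
uname -r
kubectl get services --all-namespaces --no-headers | wc -l
kubectl get pods --all-namespaces --no-headers | wc -l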

dcbw commented 5 years ago

/remove-triage unresolved

Seljuke commented 5 years ago

I deployed two Kubernetes clusters with Kubespray, one with iptables and one with ipvs. The ipvs one worked as expected, but the iptables one didn't.
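
If you are unsure which mode a cluster actually came up with, kube-proxy reports it on its metrics port on reasonably recent versions (10249 is the default metrics-bind-address; run this on a node):

curl -s http://127.0.0.1:10249/proxyMode
# prints "iptables" or "ipvs"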

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

Aisuko commented 4 years ago

/remove-lifecycle stale

onprema commented 4 years ago

kube-proxy logs:

I1204 23:13:00.748108       1 flags.go:33] FLAG: --alsologtostderr="false"
I1204 23:13:00.748151       1 flags.go:33] FLAG: --bind-address="0.0.0.0"
I1204 23:13:00.748158       1 flags.go:33] FLAG: --cleanup="false"
I1204 23:13:00.748169       1 flags.go:33] FLAG: --cleanup-iptables="false"
I1204 23:13:00.748173       1 flags.go:33] FLAG: --cleanup-ipvs="true"
I1204 23:13:00.748177       1 flags.go:33] FLAG: --cluster-cidr=""
I1204 23:13:00.748183       1 flags.go:33] FLAG: --config="/var/lib/kube-proxy-config/config"
I1204 23:13:00.748188       1 flags.go:33] FLAG: --config-sync-period="15m0s"
I1204 23:13:00.748194       1 flags.go:33] FLAG: --conntrack-max="0"
I1204 23:13:00.748200       1 flags.go:33] FLAG: --conntrack-max-per-core="32768"
I1204 23:13:00.748204       1 flags.go:33] FLAG: --conntrack-min="131072"
I1204 23:13:00.748209       1 flags.go:33] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I1204 23:13:00.748214       1 flags.go:33] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I1204 23:13:00.748218       1 flags.go:33] FLAG: --feature-gates=""
I1204 23:13:00.748230       1 flags.go:33] FLAG: --healthz-bind-address="0.0.0.0:10256"
I1204 23:13:00.748235       1 flags.go:33] FLAG: --healthz-port="10256"
I1204 23:13:00.748239       1 flags.go:33] FLAG: --help="false"
I1204 23:13:00.748244       1 flags.go:33] FLAG: --hostname-override=""
I1204 23:13:00.748248       1 flags.go:33] FLAG: --iptables-masquerade-bit="14"
I1204 23:13:00.748252       1 flags.go:33] FLAG: --iptables-min-sync-period="0s"
I1204 23:13:00.748256       1 flags.go:33] FLAG: --iptables-sync-period="30s"
I1204 23:13:00.748261       1 flags.go:33] FLAG: --ipvs-exclude-cidrs="[]"
I1204 23:13:00.748269       1 flags.go:33] FLAG: --ipvs-min-sync-period="0s"
I1204 23:13:00.748274       1 flags.go:33] FLAG: --ipvs-scheduler=""
I1204 23:13:00.748278       1 flags.go:33] FLAG: --ipvs-strict-arp="false"
I1204 23:13:00.748282       1 flags.go:33] FLAG: --ipvs-sync-period="30s"
I1204 23:13:00.748286       1 flags.go:33] FLAG: --kube-api-burst="10"
I1204 23:13:00.748291       1 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I1204 23:13:00.748296       1 flags.go:33] FLAG: --kube-api-qps="5"
I1204 23:13:00.748302       1 flags.go:33] FLAG: --kubeconfig=""
I1204 23:13:00.748306       1 flags.go:33] FLAG: --log-backtrace-at=":0"
I1204 23:13:00.748317       1 flags.go:33] FLAG: --log-dir=""
I1204 23:13:00.748321       1 flags.go:33] FLAG: --log-file=""
I1204 23:13:00.748325       1 flags.go:33] FLAG: --log-flush-frequency="5s"
I1204 23:13:00.748330       1 flags.go:33] FLAG: --logtostderr="true"
I1204 23:13:00.748334       1 flags.go:33] FLAG: --masquerade-all="false"
I1204 23:13:00.748338       1 flags.go:33] FLAG: --master=""
I1204 23:13:00.748343       1 flags.go:33] FLAG: --metrics-bind-address="127.0.0.1:10249"
I1204 23:13:00.748348       1 flags.go:33] FLAG: --metrics-port="10249"
I1204 23:13:00.748352       1 flags.go:33] FLAG: --nodeport-addresses="[]"
I1204 23:13:00.748362       1 flags.go:33] FLAG: --oom-score-adj="-999"
I1204 23:13:00.748366       1 flags.go:33] FLAG: --profiling="false"
I1204 23:13:00.748371       1 flags.go:33] FLAG: --proxy-mode=""
I1204 23:13:00.748376       1 flags.go:33] FLAG: --proxy-port-range=""
I1204 23:13:00.748381       1 flags.go:33] FLAG: --resource-container="/kube-proxy"
I1204 23:13:00.748386       1 flags.go:33] FLAG: --skip-headers="false"
I1204 23:13:00.748390       1 flags.go:33] FLAG: --stderrthreshold="2"
I1204 23:13:00.748394       1 flags.go:33] FLAG: --udp-timeout="250ms"
I1204 23:13:00.748398       1 flags.go:33] FLAG: --v="2"
I1204 23:13:00.748403       1 flags.go:33] FLAG: --version="false"
I1204 23:13:00.748412       1 flags.go:33] FLAG: --vmodule=""
I1204 23:13:00.748416       1 flags.go:33] FLAG: --write-config-to=""
I1204 23:13:00.749318       1 feature_gate.go:226] feature gates: &{map[]}
I1204 23:13:01.108907       1 server_others.go:146] Using iptables Proxier.
W1204 23:13:01.109008       1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1204 23:13:01.109129       1 iptables.go:200] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
I1204 23:13:01.109172       1 server.go:562] Version: v1.14.6
I1204 23:13:01.113552       1 server.go:578] Running in resource-only container "/kube-proxy"
I1204 23:13:01.113954       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1204 23:13:01.113981       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1204 23:13:01.114046       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1204 23:13:01.114088       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1204 23:13:01.114228       1 config.go:102] Starting endpoints config controller
I1204 23:13:01.114252       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1204 23:13:01.114261       1 config.go:202] Starting service config controller
I1204 23:13:01.114291       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1204 23:13:01.214405       1 controller_utils.go:1034] Caches are synced for service config controller
I1204 23:13:01.214582       1 proxier.go:661] Not syncing iptables until Services and Endpoints have been received from master
I1204 23:13:01.214604       1 controller_utils.go:1034] Caches are synced for endpoints config controller
I1204 23:13:01.214662       1 service.go:332] Adding new service port "default/kubernetes:https" at 10.100.0.1:443/TCP
I1204 23:13:01.214678       1 service.go:332] Adding new service port "kube-system/kube-dns:dns" at 10.100.0.10:53/UDP
I1204 23:13:01.214685       1 service.go:332] Adding new service port "kube-system/kube-dns:dns-tcp" at 10.100.0.10:53/TCP
I1204 23:13:01.214744       1 proxier.go:675] Stale udp service kube-system/kube-dns:dns -> 10.100.0.10

iptables-save:

# Generated by iptables-save v1.4.21 on Wed Dec  4 23:56:18 2019
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [11:676]
:POSTROUTING ACCEPT [11:676]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-43JXX3WIN2SOR6EG - [0:0]
:KUBE-SEP-BSHRGJUUKHTTGYIF - [0:0]
:KUBE-SEP-JP2M66LUY5YAQ4M2 - [0:0]
:KUBE-SEP-K7LSCPAICZ3AW7AD - [0:0]
:KUBE-SEP-L2F7FPCFKWKVX3RC - [0:0]
:KUBE-SEP-YZ73XLPKS2NWC2Q3 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-43JXX3WIN2SOR6EG -s 192.168.78.125/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-43JXX3WIN2SOR6EG -p udp -m udp -j DNAT --to-destination 192.168.78.125:53
-A KUBE-SEP-BSHRGJUUKHTTGYIF -s 192.168.171.181/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-BSHRGJUUKHTTGYIF -p tcp -m tcp -j DNAT --to-destination 192.168.171.181:443
-A KUBE-SEP-JP2M66LUY5YAQ4M2 -s 192.168.158.157/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-JP2M66LUY5YAQ4M2 -p tcp -m tcp -j DNAT --to-destination 192.168.158.157:443
-A KUBE-SEP-K7LSCPAICZ3AW7AD -s 192.168.64.42/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-K7LSCPAICZ3AW7AD -p udp -m udp -j DNAT --to-destination 192.168.64.42:53
-A KUBE-SEP-L2F7FPCFKWKVX3RC -s 192.168.78.125/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-L2F7FPCFKWKVX3RC -p tcp -m tcp -j DNAT --to-destination 192.168.78.125:53
-A KUBE-SEP-YZ73XLPKS2NWC2Q3 -s 192.168.64.42/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-YZ73XLPKS2NWC2Q3 -p tcp -m tcp -j DNAT --to-destination 192.168.64.42:53
-A KUBE-SERVICES -d 10.100.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.100.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.100.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YZ73XLPKS2NWC2Q3
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-L2F7FPCFKWKVX3RC
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-JP2M66LUY5YAQ4M2
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-BSHRGJUUKHTTGYIF
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-K7LSCPAICZ3AW7AD
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-43JXX3WIN2SOR6EG
COMMIT
# Completed on Wed Dec  4 23:56:18 2019
# Generated by iptables-save v1.4.21 on Wed Dec  4 23:56:18 2019
*filter
:INPUT ACCEPT [115:21426]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [106:9841]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
COMMIT
# Completed on Wed Dec  4 23:56:18 2019
k3a commented 4 years ago

I observed a similar behaviour on a test cluster created with Kops 1.15.0 (git-9992b4055), Kubernetes v1.15.6. Lots of services failed due to the broken kube-dns.

From a newly created Alpine container, I was able to resolve DNS using the PodIPs of the running kube-dns pods. Resolving via the ClusterIP of the kube-dns service (100.64.0.10) did not work, though.

After logging in to the node where the kube-dns pods were running, I could see reject rules in iptables with the following comments:

kube-system/kube-dns:dns-tcp has no endpoints
kube-system/kube-dns:dns has no endpoints
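
Those reject rules can be confirmed, and cross-checked against the actual endpoints object, with something like this (a sketch, assuming kubectl access):

# on the node: the REJECT rules kube-proxy installs for endpoint-less services
iptables-save | grep 'has no endpoints'
# does the endpoints object actually have addresses?
kubectl -n kube-system get endpoints kube-dns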

Sending HUP to kube-proxy restarted it with no change in iptables.

Restarting kubelet fixed the problem somehow. I don't expect this to be a permanent fix, though.
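
A minimal sketch of that recovery path, assuming a systemd-managed kubelet as kops sets up:

# sending SIGHUP to kube-proxy did not change the iptables state
pkill -HUP kube-proxy
# restarting kubelet is what cleared the stale rules in this case
systemctl restart kubelet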

Kops runs kubelet with these options:

/usr/local/bin/kubelet --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-root=/ --client-ca-file=/srv/kubernetes/ca.crt --cloud-provider=aws --cluster-dns=100.64.0.10 --cluster-domain=cluster.local --enable-debugging-handlers=true --eviction-hard=memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%,imagefs.available<10%,imagefs.inodesFree<5% --feature-gates=ExperimentalCriticalPodAnnotation=true --hostname-override=ip-172-20-38-102.eu-central-1.compute.internal --kubeconfig=/var/lib/kubelet/kubeconfig --network-plugin-mtu=9001 --network-plugin=kubenet --node-labels=kops.k8s.io/instancegroup=nodes,kubernetes.io/role=node,node-role.kubernetes.io/node= --non-masquerade-cidr=100.64.0.0/10 --pod-infra-container-image=k8s.gcr.io/pause-amd64:3.0 --pod-manifest-path=/etc/kubernetes/manifests --register-schedulable=true --v=2 --volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/ --cni-bin-dir=/opt/cni/bin/ --cni-conf-dir=/etc/cni/net.d/ --cni-bin-dir=/opt/cni/bin/

Maybe it is a Kops configuration problem, but the root cause is unknown to me, and I would be worried to run such a cluster in production. :(

ipt-save-fix.log ipt-save.log kube-proxy.log

MortezaBashsiz commented 4 years ago

I hit the same networking issue with Kubernetes. After investigating, I found out that the problem is related to the iptables rules.

k8s version: v1.17.4
iptables version: v1.8.4
OS: CentOS 7

(screenshot attached: Screenshot from 2020-03-23 09-49-47)

Finally, I was forced to run iptables commands manually, and that solved my problem:

iptables -F FORWARD
iptables -F POSTROUTING -t nat
iptables -A FORWARD -j ACCEPT
iptables -t nat -A POSTROUTING -j MASQUERADE

NOTE: I do not recommend using these iptables commands to solve your problem, because they will cause a lot of security issues.
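
A somewhat narrower alternative, assuming a typical 10.244.0.0/16 pod CIDR (adjust to your cluster), is to masquerade only pod traffic leaving the pod network and to let kube-proxy rebuild its own chains:

# masquerade only traffic from the pod CIDR that leaves the pod network
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE
# recreate the kube-proxy pods so they resync their KUBE-* chains
kubectl -n kube-system delete pod -l k8s-app=kube-proxy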

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

dcbw commented 4 years ago

If you encounter this issue again, we need two things for debugging:

  1. iptables-save output around the time the problem occurs, on the node that it occurs on
  2. kube-proxy log output (one way to capture both is sketched just below)
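
For example, a capture sketch (file names are arbitrary; adjust the pod label to your deployment):

# 1. snapshot the full iptables state on the affected node
iptables-save > /tmp/iptables-save.$(date +%Y%m%d-%H%M%S).txt
# 2. grab the kube-proxy logs
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=-1 > /tmp/kube-proxy.log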
dcbw commented 4 years ago

RE @k3a's comment about v1.15, there are a lot of:

I1230 01:05:53.224716 1 proxier.go:693] Stale udp service kube-system/kube-dns:dns -> 100.64.0.10

in the kube-proxy logs, which means that kube-dns pods are likely dying or otherwise restarting. That message is printed when the number of available pods goes from 0 -> 1+, so clearly at points in time there are no kube-dns pods running (or ready and visible to kube-proxy on each node).
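
A quick way to check whether the DNS pods are in fact flapping (assuming the usual k8s-app=kube-dns label, which kube-dns and CoreDNS deployments commonly carry):

# a non-zero RESTARTS count or not-Ready pods here would explain the log churn
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
kubectl -n kube-system describe pods -l k8s-app=kube-dns | grep -A 5 Events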

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/kubernetes/issues/61005#issuecomment-688524160):

>Rotten issues close after 30d of inactivity.
>Reopen the issue with `/reopen`.
>Mark the issue as fresh with `/remove-lifecycle rotten`.
>
>Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
>/close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.