weaveworks / weave


On a network with multiple interfaces weave uses different ones on different hosts #3588

Closed: yakneens closed this issue 5 years ago

yakneens commented 5 years ago

What you expected to happen?

Created a Kubernetes cluster of 1 master and 2 nodes on OpenStack using kubeadm, with Weave Net as the pod network. All VMs have multiple network interfaces, for example on the master:

[root@tracker net.d]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:9a:84:ea brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.15/24 brd 192.168.0.255 scope global dynamic eth0
       valid_lft 76209sec preferred_lft 76209sec
    inet6 fe80::f816:3eff:fe9a:84ea/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:1a:52:f8 brd ff:ff:ff:ff:ff:ff
    inet 10.35.104.5/24 brd 10.35.104.255 scope global dynamic eth1
       valid_lft 64260sec preferred_lft 64260sec
    inet6 fe80::f816:3eff:fe1a:52f8/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:2c:bd:88 brd ff:ff:ff:ff:ff:ff
    inet 10.35.105.10/24 brd 10.35.105.255 scope global dynamic eth2
       valid_lft 65664sec preferred_lft 65664sec
    inet6 fe80::f816:3eff:fe2c:bd88/64 scope link 
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:ff:6a:ee brd ff:ff:ff:ff:ff:ff
    inet 10.35.110.13/24 brd 10.35.110.255 scope global dynamic eth3
       valid_lft 60575sec preferred_lft 60575sec
    inet6 fe80::f816:3eff:feff:6aee/64 scope link 
       valid_lft forever preferred_lft forever
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:f8:50:0c:69 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f8ff:fe50:c69/64 scope link 
       valid_lft forever preferred_lft forever
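
For reference, on a multi-homed host like this, ip route get shows which interface and source address the kernel would use for a given destination (the worker addresses below are the ones that appear further down in this report):

# from the master, which path is used to reach worker-3's two addresses?
ip route get 192.168.0.61    # should go out dev eth0, src 192.168.0.15
ip route get 10.35.104.49    # should go out dev eth1, src 10.35.104.5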

Expected all nodes to be joined in a single Weave network.

What happened?

Weave does not seem to select the same interface on all nodes, which results in a failure to peer. On two of the nodes Weave appears to have selected eth0 (the desired interface), and those two nodes ended up peered:

/home/weave # ./weave --local status 

        Version: 2.5.1 (up to date; next check at 2019/01/27 19:55:34)

        Service: router
       Protocol: weave 1..2
           Name: e2:fd:43:23:90:d2(worker-3)
     Encryption: disabled
  PeerDiscovery: enabled
        Targets: 3
    Connections: 3 (1 established, 2 failed)
          Peers: 2 (with 2 established connections)
 TrustedSubnets: none

        Service: ipam
         Status: ready
          Range: 10.32.0.0/16
  DefaultSubnet: 10.32.0.0/16

/home/weave # ./weave --local status connections
-> 192.168.0.72:6783     established fastdp 6a:6c:c9:b5:18:8a(getter-test) mtu=1376
-> 10.35.104.5:6783      failed      IP allocation was seeded by different peers (received: [4a:ba:4c:fa:5c:b3(tracker)], ours: [6a:6c:c9:b5:18:8a(getter-test)]), retry: 2019-01-27 17:00:44.356836268 +0000 UTC m=+6152.062472741 
-> 192.168.0.61:6783     failed      cannot connect to ourself, retry: never 
/home/weave # ./weave --local status connections
-> 192.168.0.72:6783     established fastdp 6a:6c:c9:b5:18:8a(getter-test) mtu=1376
-> 10.35.104.5:6783      failed      IP allocation was seeded by different peers (received: [4a:ba:4c:fa:5c:b3(tracker)], ours: [6a:6c:c9:b5:18:8a(getter-test)]), retry: 2019-01-27 17:06:26.840285977 +0000 UTC m=+6494.545922469 
-> 192.168.0.61:6783     failed      cannot connect to ourself, retry: never 

But on the master node a different interface seems to have been selected: judging by the "cannot connect to ourself" entries below, the master registered 10.35.104.5, which is eth1 rather than eth0. As a result the master does not peer with the other nodes:

/home/weave # ./weave --local status peers
4a:ba:4c:fa:5c:b3(tracker)
/home/weave # ./weave --local status connections
-> 10.35.104.5:6783      failed      cannot connect to ourself, retry: never 
-> 192.168.0.72:6783     failed      IP allocation was seeded by different peers (received: [6a:6c:c9:b5:18:8a(getter-test)], ours: [4a:ba:4c:fa:5c:b3(tracker)]), retry: 2019-01-27 17:05:51.672464682 +0000 UTC m=+11913.849305376 
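
This looks consistent with the nodes registering addresses from different subnets. Assuming weave-kube builds its peer list from the node addresses in the Kubernetes API, a quick way to check what each node advertises:

# which InternalIP did each node register?
kubectl get nodes -o wide
# or just name + InternalIP:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'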

As a result, cross-node communication is broken and pods end up being assigned conflicting IPs (a quick duplicate-IP check is sketched right after the table below).

NAMESPACE     NAME                                    READY   STATUS             RESTARTS   AGE     IP             NODE          NOMINATED NODE   READINESS GATES                                                                                                            
default       redis-client                            1/1     Running            0          70m     10.32.192.2    worker-3      <none>           <none>                                                                                                                     
default       redis-master-0                          1/1     Running            0          61m     10.32.0.3      tracker       <none>           <none>                                                                                                                     
default       redis-slave-56b9d766d5-qkw4n            0/1     CrashLoopBackOff   20         61m     10.32.192.1    worker-3      <none>           <none>                                                                                                                     
kube-system   coredns-86c58d9df4-dnk5s                1/1     Running            0          147m    10.32.0.4      getter-test   <none>           <none>                                                                                                                     
kube-system   coredns-86c58d9df4-wc79m                1/1     Running            0          147m    10.32.0.3      getter-test   <none>           <none>                                                                                                                     
kube-system   etcd-tracker                            1/1     Running            0          3h30m   192.168.0.15   tracker       <none>           <none>                                                                                                                     
kube-system   kube-apiserver-tracker                  1/1     Running            0          3h30m   192.168.0.15   tracker       <none>           <none>                                                                                                                     
kube-system   kube-controller-manager-tracker         1/1     Running            0          3h30m   192.168.0.15   tracker       <none>           <none>                                                                                                                     
kube-system   kube-proxy-jgvwl                        1/1     Running            0          126m    192.168.0.61   worker-3      <none>           <none>                                                                                                                     
kube-system   kube-proxy-s5qxv                        1/1     Running            0          3h30m   192.168.0.15   tracker       <none>           <none>                                                                                                                     
kube-system   kube-proxy-vvv29                        1/1     Running            0          158m    192.168.0.72   getter-test   <none>           <none>                                                                                                                     
kube-system   kube-scheduler-tracker                  1/1     Running            0          3h30m   192.168.0.15   tracker       <none>           <none>                                                                                                                     
kube-system   kubernetes-dashboard-78744cfc45-btqzr   1/1     Running            0          81m     10.32.0.2      tracker       <none>           <none>                                                                                                                     
kube-system   tiller-deploy-7fd664c764-w7zhv          1/1     Running            0          90m     10.32.0.2      getter-test   <none>           <none>                                                                                                                     
kube-system   weave-net-fk586                         2/2     Running            0          115m    192.168.0.61   worker-3      <none>           <none>                                                                                                                     
kube-system   weave-net-mcvpd                         2/2     Running            0          158m    192.168.0.72   getter-test   <none>           <none>                                                                                                                     
kube-system   weave-net-ppdxh                         2/2     Running            0          3h26m   192.168.0.15   tracker       <none>           <none>  
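
A quick way to spot the conflicting pod IPs in the table above (note that host-network pods such as kube-proxy legitimately share their node's address, so those show up as duplicates too):

# print any pod IP that appears more than once across all namespaces
kubectl get pods --all-namespaces -o wide --no-headers | awk '{print $7}' | sort | uniq -d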

How to reproduce it?

Anything else we need to know?

Kubeadm config

[root@tracker initial]# cat kubeadm_config_map.yaml 
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
apiServer:
  extraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
controllerManager:
  extraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
networking:
  dnsDomain: cluster.local
  podSubnet: 10.32.0.0/16 
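
One mitigation I am considering (untested; it assumes the Weave peer list follows the node InternalIPs) is to pin the address each kubelet registers, so that every node advertises its 192.168.0.x (eth0) address. On these CentOS hosts that would look roughly like this, with the address adjusted per host:

# on the master (tracker); merge with any existing KUBELET_EXTRA_ARGS
echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.0.15' > /etc/sysconfig/kubelet
systemctl restart kubelet
# equivalently, node-ip: "192.168.0.15" could go under nodeRegistration.kubeletExtraArgs
# in the InitConfiguration above (and in the JoinConfiguration used on the workers)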

Versions:

$ weave version
weave script 2.5.1
weave 2.5.1

$ docker version
[root@tracker initial]# docker version
Client:
 Version:           18.09.1
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        4c52b90
 Built:             Wed Jan  9 19:35:01 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       4c52b90
  Built:            Wed Jan  9 19:06:30 2019
  OS/Arch:          linux/amd64
  Experimental:     false

$ uname -a
[root@tracker initial]# uname -a
Linux tracker 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

$ kubectl version
[root@tracker initial]# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Logs:

weave log on master node

[root@tracker initial]# kubectl  logs weave-net-ppdxh weave -n kube-system                                                                                                                                                                                                   
INFO: 2019/01/27 13:47:17.940957 Command line options: map[name:4a:ba:4c:fa:5c:b3 nickname:tracker no-dns:true http-addr:127.0.0.1:6784 ipalloc-init:consensus=1 metrics-addr:0.0.0.0:6782 conn-limit:100 datapath:datapath db-prefix:/weavedb/weave-net host-root:/host docker-api: expect-npc:true ipalloc-range:10.32.0.0/16 port:6783]
INFO: 2019/01/27 13:47:17.941037 weave  2.5.1
INFO: 2019/01/27 13:47:17.942417 failed to create weave-test-commentc9f1fd67; disabling comment support
INFO: 2019/01/27 13:47:19.132355 Bridge type is bridged_fastdp
INFO: 2019/01/27 13:47:19.132388 Communication between peers is unencrypted.
INFO: 2019/01/27 13:47:19.143191 Our name is 4a:ba:4c:fa:5c:b3(tracker)
INFO: 2019/01/27 13:47:19.143236 Launch detected - using supplied peer list: [10.35.104.5]
INFO: 2019/01/27 13:47:19.151678 Unable to fetch ConfigMap kube-system/weave-net to infer unique cluster ID
INFO: 2019/01/27 13:47:19.151709 Checking for pre-existing addresses on weave bridge
INFO: 2019/01/27 13:47:19.178504 [allocator 4a:ba:4c:fa:5c:b3] No valid persisted data
INFO: 2019/01/27 13:47:19.185080 [allocator 4a:ba:4c:fa:5c:b3] Initialising via deferred consensus
INFO: 2019/01/27 13:47:19.185189 Sniffing traffic on datapath (via ODP)
INFO: 2019/01/27 13:47:19.185705 ->[10.35.104.5:6783] attempting connection
INFO: 2019/01/27 13:47:19.186051 ->[10.35.104.5:55442] connection accepted
INFO: 2019/01/27 13:47:19.186673 ->[10.35.104.5:6783|4a:ba:4c:fa:5c:b3(tracker)]: connection shutting down due to error: cannot connect to ourself
INFO: 2019/01/27 13:47:19.186758 ->[10.35.104.5:55442|4a:ba:4c:fa:5c:b3(tracker)]: connection shutting down due to error: cannot connect to ourself
INFO: 2019/01/27 13:47:19.189519 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2019/01/27 13:47:19.189763 Listening for metrics requests on 0.0.0.0:6782
INFO: 2019/01/27 13:47:19.900839 [kube-peers] Added myself to peer list &{[{4a:ba:4c:fa:5c:b3 tracker}]}
DEBU: 2019/01/27 13:47:19.906795 [kube-peers] Nodes that have disappeared: map[]
10.32.0.1
10.35.104.5
DEBU: 2019/01/27 13:47:19.992321 registering for updates for node delete events
WARN: 2019/01/27 13:55:04.414700 [allocator]: Delete: no addresses for e2b5d304bb50060abcd032d6e7bd114205cb0253efa0ad421646cb47f8d326a2
WARN: 2019/01/27 13:55:08.420546 [allocator]: Delete: no addresses for e3d60a90ada57e3b083511ef14f1f4ad79f419b2e0cc67aee670315ef907c7ed
DEBU: 2019/01/27 14:28:24.900900 [kube-peers] Nodes that have disappeared: map[0a:7d:ad:97:35:a6:{0a:7d:ad:97:35:a6 job-queue}]
DEBU: 2019/01/27 14:28:24.900923 [kube-peers] Preparing to remove disappeared peer 0a:7d:ad:97:35:a6
DEBU: 2019/01/27 14:28:24.900933 [kube-peers] Noting I plan to remove  0a:7d:ad:97:35:a6
DEBU: 2019/01/27 14:28:24.902815 weave DELETE to http://127.0.0.1:6784/peer/0a:7d:ad:97:35:a6 with map[]
INFO: 2019/01/27 14:28:24.903761 [kube-peers] rmpeer of 0a:7d:ad:97:35:a6: 0 IPs taken over from 0a:7d:ad:97:35:a6

DEBU: 2019/01/27 14:28:24.912719 [kube-peers] Nodes that have disappeared: map[]
DEBU: 2019/01/27 14:28:24.914313 weave POST to http://127.0.0.1:6784/connect with map[replace:[true] peer:[10.35.104.5]]
INFO: 2019/01/27 14:28:24.914774 ->[10.35.104.5:6783] attempting connection
INFO: 2019/01/27 14:28:24.915052 ->[10.35.104.5:54735] connection accepted
INFO: 2019/01/27 14:28:24.915400 ->[10.35.104.5:54735|4a:ba:4c:fa:5c:b3(tracker)]: connection shutting down due to error: cannot connect to ourself
INFO: 2019/01/27 14:28:24.915541 ->[10.35.104.5:6783|4a:ba:4c:fa:5c:b3(tracker)]: connection shutting down due to error: cannot connect to ourself
WARN: 2019/01/27 14:28:53.528016 [allocator]: Delete: no addresses for a85583ba82ad6619c0fd0d3f8c1542d0149d15c7e4c8e6579f3e813320f7f77f
DEBU: 2019/01/27 14:32:15.994531 [kube-peers] Nodes that have disappeared: map[]
DEBU: 2019/01/27 14:32:15.996153 weave POST to http://127.0.0.1:6784/connect with map[replace:[true] peer:[10.35.104.5]]
INFO: 2019/01/27 14:32:15.997108 ->[10.35.104.5:6783] attempting connection
INFO: 2019/01/27 14:32:15.997412 ->[10.35.104.5:58673] connection accepted
INFO: 2019/01/27 14:32:15.997809 ->[10.35.104.5:58673|4a:ba:4c:fa:5c:b3(tracker)]: connection shutting down due to error: cannot connect to ourself
INFO: 2019/01/27 14:32:15.997999 ->[10.35.104.5:6783|4a:ba:4c:fa:5c:b3(tracker)]: connection shutting down due to error: cannot connect to ourself
WARN: 2019/01/27 14:33:01.467923 [allocator]: Delete: no addresses for f789776d56b98967fb96262f2a13644b8a02b4857ca6439ca7556e59ecf6e80b
DEBU: 2019/01/27 14:52:27.374508 [kube-peers] Nodes that have disappeared: map[]
DEBU: 2019/01/27 14:52:27.376546 weave POST to http://127.0.0.1:6784/connect with map[replace:[true] peer:[192.168.0.72 10.35.104.5]]
INFO: 2019/01/27 14:52:27.377385 ->[10.35.104.5:6783] attempting connection
INFO: 2019/01/27 14:52:27.377508 ->[192.168.0.72:6783] attempting connection
INFO: 2019/01/27 14:52:27.377769 ->[10.35.104.5:49492] connection accepted
INFO: 2019/01/27 14:52:27.378116 ->[10.35.104.5:49492|4a:ba:4c:fa:5c:b3(tracker)]: connection shutting down due to error: cannot connect to ourself
INFO: 2019/01/27 14:52:27.378277 ->[10.35.104.5:6783|4a:ba:4c:fa:5c:b3(tracker)]: connection shutting down due to error: cannot connect to ourself
INFO: 2019/01/27 15:04:41.196570 ->[192.168.0.72:6783] attempting connection
INFO: 2019/01/27 15:04:41.198302 ->[192.168.0.72:6783|6a:6c:c9:b5:18:8a(getter-test)]: connection ready; using protocol version 2
INFO: 2019/01/27 15:04:41.198400 overlay_switch ->[6a:6c:c9:b5:18:8a(getter-test)] using fastdp
INFO: 2019/01/27 15:04:41.198420 ->[192.168.0.72:6783|6a:6c:c9:b5:18:8a(getter-test)]: connection added (new peer)
INFO: 2019/01/27 15:04:41.199246 ->[192.168.0.72:6783|6a:6c:c9:b5:18:8a(getter-test)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [6a:6c:c9:b5:18:8a(getter-test)], ours: [4a:ba:4c:fa:5c:b3(tracker)])
INFO: 2019/01/27 15:04:41.199298 ->[192.168.0.72:6783|6a:6c:c9:b5:18:8a(getter-test)]: connection deleted
INFO: 2019/01/27 15:04:41.199311 Removed unreachable peer 6a:6c:c9:b5:18:8a(getter-test)
INFO: 2019/01/27 15:07:22.233026 ->[10.35.104.49:43017] connection accepted
INFO: 2019/01/27 15:07:22.233718 ->[10.35.104.49:43017|e2:fd:43:23:90:d2(worker-3)]: connection ready; using protocol version 2
INFO: 2019/01/27 15:07:22.233788 overlay_switch ->[e2:fd:43:23:90:d2(worker-3)] using fastdp
INFO: 2019/01/27 15:07:22.233809 ->[10.35.104.49:43017|e2:fd:43:23:90:d2(worker-3)]: connection added (new peer)
INFO: 2019/01/27 15:07:22.238742 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2019/01/27 15:07:22.238833 overlay_switch ->[e2:fd:43:23:90:d2(worker-3)] using sleeve
INFO: 2019/01/27 15:07:22.238856 ->[10.35.104.49:43017|e2:fd:43:23:90:d2(worker-3)]: connection fully established
INFO: 2019/01/27 15:07:22.239103 sleeve ->[10.35.104.49:6783|e2:fd:43:23:90:d2(worker-3)]: Effective MTU verified at 1438
INFO: 2019/01/27 15:07:22.239974 ->[10.35.104.49:43017|e2:fd:43:23:90:d2(worker-3)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [6a:6c:c9:b5:18:8a(getter-test)], ours: [4a:ba:4c:fa:5c:b3(tracker)])
INFO: 2019/01/27 15:07:22.240011 ->[10.35.104.49:43017|e2:fd:43:23:90:d2(worker-3)]: connection deleted
INFO: 2019/01/27 15:07:22.240023 Removed unreachable peer e2:fd:43:23:90:d2(worker-3)
INFO: 2019/01/27 15:07:22.240030 Removed unreachable peer 6a:6c:c9:b5:18:8a(getter-test)

weave log on worker node

[root@tracker initial]# kubectl  logs weave-net-fk586  weave -n kube-system                                                                                                                                                                                                  
DEBU: 2019/01/27 15:18:12.269751 [kube-peers] Checking peer "e2:fd:43:23:90:d2" against list &{[{4a:ba:4c:fa:5c:b3 tracker} {6a:6c:c9:b5:18:8a getter-test} {e2:fd:43:23:90:d2 worker-3}]}
INFO: 2019/01/27 15:18:12.434521 Command line options: map[conn-limit:100 datapath:datapath docker-api: host-root:/host metrics-addr:0.0.0.0:6782 db-prefix:/weavedb/weave-net ipalloc-init:consensus=3 ipalloc-range:10.32.0.0/16 name:e2:fd:43:23:90:d2 nickname:worker-3 http-addr:127.0.0.1:6784 port:6783 expect-npc:true no-dns:true]
INFO: 2019/01/27 15:18:12.434619 weave  2.5.1
INFO: 2019/01/27 15:18:12.436751 failed to create weave-test-comment49879c3c; disabling comment support
INFO: 2019/01/27 15:18:13.518172 Re-exposing 10.32.192.0/16 on bridge "weave"
INFO: 2019/01/27 15:18:13.532563 Bridge type is bridged_fastdp
INFO: 2019/01/27 15:18:13.532582 Communication between peers is unencrypted.
INFO: 2019/01/27 15:18:13.573180 Our name is e2:fd:43:23:90:d2(worker-3)
INFO: 2019/01/27 15:18:13.573258 Launch detected - using supplied peer list: [192.168.0.72 10.35.104.5 192.168.0.61]
INFO: 2019/01/27 15:18:13.585666 Checking for pre-existing addresses on weave bridge
INFO: 2019/01/27 15:18:13.586685 weave bridge has address 10.32.192.0/16
INFO: 2019/01/27 15:18:13.616532 Found address 10.32.192.1/16 for ID _
INFO: 2019/01/27 15:18:13.616731 Found address 10.32.192.1/16 for ID _
INFO: 2019/01/27 15:18:13.620063 [allocator e2:fd:43:23:90:d2] Initialising with persisted data
INFO: 2019/01/27 15:18:13.620179 Sniffing traffic on datapath (via ODP)
INFO: 2019/01/27 15:18:13.620594 ->[192.168.0.72:6783] attempting connection
INFO: 2019/01/27 15:18:13.620647 ->[10.35.104.5:6783] attempting connection
INFO: 2019/01/27 15:18:13.620936 ->[192.168.0.61:6783] attempting connection
INFO: 2019/01/27 15:18:13.621849 ->[192.168.0.61:47926] connection accepted
INFO: 2019/01/27 15:18:13.622414 ->[192.168.0.61:47926|e2:fd:43:23:90:d2(worker-3)]: connection shutting down due to error: cannot connect to ourself
INFO: 2019/01/27 15:18:13.622474 ->[192.168.0.61:6783|e2:fd:43:23:90:d2(worker-3)]: connection shutting down due to error: cannot connect to ourself
INFO: 2019/01/27 15:18:13.622974 ->[10.35.104.5:6783|4a:ba:4c:fa:5c:b3(tracker)]: connection ready; using protocol version 2
INFO: 2019/01/27 15:18:13.623239 overlay_switch ->[4a:ba:4c:fa:5c:b3(tracker)] using fastdp
INFO: 2019/01/27 15:18:13.623299 ->[10.35.104.5:6783|4a:ba:4c:fa:5c:b3(tracker)]: connection added (new peer)
INFO: 2019/01/27 15:18:13.623333 ->[192.168.0.72:6783|6a:6c:c9:b5:18:8a(getter-test)]: connection ready; using protocol version 2
INFO: 2019/01/27 15:18:13.623432 overlay_switch ->[6a:6c:c9:b5:18:8a(getter-test)] using fastdp
INFO: 2019/01/27 15:18:13.623462 ->[192.168.0.72:6783|6a:6c:c9:b5:18:8a(getter-test)]: connection added (new peer)
INFO: 2019/01/27 15:18:13.623532 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2019/01/27 15:18:13.623948 Listening for metrics requests on 0.0.0.0:6782
INFO: 2019/01/27 15:18:13.625970 ->[10.35.104.5:6783|4a:ba:4c:fa:5c:b3(tracker)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [4a:ba:4c:fa:5c:b3(tracker)], ours: [6a:6c:c9:b5:18:8a(getter-test)])
INFO: 2019/01/27 15:18:13.626070 ->[10.35.104.5:6783|4a:ba:4c:fa:5c:b3(tracker)]: connection deleted
INFO: 2019/01/27 15:18:13.626113 Removed unreachable peer 4a:ba:4c:fa:5c:b3(tracker)
INFO: 2019/01/27 15:18:13.626196 ->[192.168.0.72:6783|6a:6c:c9:b5:18:8a(getter-test)]: connection fully established
INFO: 2019/01/27 15:18:13.626381 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2019/01/27 15:18:13.626727 sleeve ->[192.168.0.72:6783|6a:6c:c9:b5:18:8a(getter-test)]: Effective MTU verified at 1438
INFO: 2019/01/27 15:18:14.346870 [kube-peers] Added myself to peer list &{[{4a:ba:4c:fa:5c:b3 tracker} {6a:6c:c9:b5:18:8a getter-test} {e2:fd:43:23:90:d2 worker-3}]}
DEBU: 2019/01/27 15:18:14.356474 [kube-peers] Nodes that have disappeared: map[]
10.32.192.0
192.168.0.72
10.35.104.5
192.168.0.61
DEBU: 2019/01/27 15:18:14.438237 registering for updates for node delete events
INFO: 2019/01/27 15:18:14.945563 ->[10.35.104.5:6783] attempting connection
INFO: 2019/01/27 15:18:14.947592 ->[10.35.104.5:6783|4a:ba:4c:fa:5c:b3(tracker)]: connection ready; using protocol version 2
INFO: 2019/01/27 15:18:14.947767 overlay_switch ->[4a:ba:4c:fa:5c:b3(tracker)] using fastdp
INFO: 2019/01/27 15:18:14.947796 ->[10.35.104.5:6783|4a:ba:4c:fa:5c:b3(tracker)]: connection added (new peer)
INFO: 2019/01/27 15:18:14.948522 ->[10.35.104.5:6783|4a:ba:4c:fa:5c:b3(tracker)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [4a:ba:4c:fa:5c:b3(tracker)], ours: [6a:6c:c9:b5:18:8a(getter-test)])
INFO: 2019/01/27 15:18:14.948578 ->[10.35.104.5:6783|4a:ba:4c:fa:5c:b3(tracker)]: connection deleted
INFO: 2019/01/27 15:18:14.948610 Removed unreachable peer 4a:ba:4c:fa:5c:b3(tracker)
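
The "IP allocation was seeded by different peers" lines in both logs show that the master and the workers each formed their own IPAM consensus before ever connecting to each other. Once the interface/peer-list problem is fixed, a commonly suggested recovery (assuming the default weave-kube layout, where the db-prefix /weavedb/weave-net above is a hostPath mount of /var/lib/weave) is to discard the IPAM data on the side being thrown away and restart that weave pod:

# on the node(s) whose IPAM data is being discarded (not on all of them):
rm /var/lib/weave/weave-netdata.db
# then delete that node's weave pod so the DaemonSet recreates it, e.g. on worker-3:
kubectl -n kube-system delete pod weave-net-fk586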

Network:

$ ip route
on node weave pod
/home/weave # ip route
default via 192.168.0.1 dev eth0 
10.32.0.0/16 dev weave proto kernel scope link src 10.32.192.0 
10.35.104.0/24 dev eth1 proto kernel scope link src 10.35.104.49 
10.35.105.0/24 dev eth2 proto kernel scope link src 10.35.105.63 
10.35.110.0/24 dev eth3 proto kernel scope link src 10.35.110.62 
169.254.169.254 via 192.168.0.1 dev eth0 proto static 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.61 
192.169.2.0/24 via 10.35.110.52 dev eth3 proto bird 
blackhole 192.169.3.0/24 proto bird 

on master weave pod
/home/weave # ip route
default via 192.168.0.1 dev eth0 
10.32.0.0/16 dev weave proto kernel scope link src 10.32.0.1 
10.35.104.0/24 dev eth1 proto kernel scope link src 10.35.104.5 
10.35.105.0/24 dev eth2 proto kernel scope link src 10.35.105.10 
10.35.110.0/24 dev eth3 proto kernel scope link src 10.35.110.13 
169.254.169.254 via 192.168.0.1 dev eth0 proto static 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.15 
blackhole 192.169.45.0/26 proto bird 

$ ip -4 -o addr
on master
[root@tracker initial]# ip -4 -o addr
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: eth0    inet 192.168.0.15/24 brd 192.168.0.255 scope global dynamic eth0\       valid_lft 74587sec preferred_lft 74587sec
3: eth1    inet 10.35.104.5/24 brd 10.35.104.255 scope global dynamic eth1\       valid_lft 62638sec preferred_lft 62638sec
4: eth2    inet 10.35.105.10/24 brd 10.35.105.255 scope global dynamic eth2\       valid_lft 64042sec preferred_lft 64042sec
5: eth3    inet 10.35.110.13/24 brd 10.35.110.255 scope global dynamic eth3\       valid_lft 58953sec preferred_lft 58953sec
6: docker0    inet 172.17.0.1/16 scope global docker0\       valid_lft forever preferred_lft forever
2507: weave    inet 10.32.0.1/16 brd 10.32.255.255 scope global weave\       valid_lft forever preferred_lft forever

on node
[root@worker-3 var]# ip -4 -o addr
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: eth0    inet 192.168.0.61/24 brd 192.168.0.255 scope global dynamic eth0\       valid_lft 69651sec preferred_lft 69651sec
3: eth1    inet 10.35.104.49/24 brd 10.35.104.255 scope global dynamic eth1\       valid_lft 61258sec preferred_lft 61258sec
4: eth2    inet 10.35.105.63/24 brd 10.35.105.255 scope global dynamic eth2\       valid_lft 85000sec preferred_lft 85000sec
5: eth3    inet 10.35.110.62/24 brd 10.35.110.255 scope global dynamic eth3\       valid_lft 80731sec preferred_lft 80731sec
6: docker0    inet 172.17.0.1/16 scope global docker0\       valid_lft forever preferred_lft forever
44: weave    inet 10.32.192.0/16 brd 10.32.255.255 scope global weave\       valid_lft forever preferred_lft forever
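
The two weave bridge addresses (10.32.0.1 on the master, 10.32.192.0 on the worker) fit the same picture: each side has seeded and is handing out its own slice of 10.32.0.0/16. The split can be inspected from inside each weave container:

/home/weave # ./weave --local status ipam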

$ sudo iptables-save
on master
# Generated by iptables-save v1.6.1 on Sun Jan 27 17:28:06 2019
*mangle
:PREROUTING ACCEPT [2657597:160770877]
:INPUT ACCEPT [35769905:7780818112]
:FORWARD ACCEPT [20173:1354787]
:OUTPUT ACCEPT [28073230:9763926397]
:POSTROUTING ACCEPT [28093177:9765266501]
:cali-PREROUTING - [0:0]
:cali-failsafe-in - [0:0]
:cali-from-host-endpoint - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A cali-PREROUTING -m comment --comment "cali:6BJqBjBC7crtA-7-" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:KX7AGNd6rMcDUai6" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:wNH7KsA3ILKJBsY9" -j cali-from-host-endpoint
-A cali-PREROUTING -m comment --comment "cali:Cg96MgVuoPm7UMRo" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:QOO5NUOqOSS1_Iw0" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:cwZWoBSwVeIAZmVN" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:7FbNXT91kugE_upR" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:ywE9WYUBEpve70WT" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:l-WQSVBf_lygPR0J" -m multiport --dports 6667 -j ACCEPT
COMMIT
# Completed on Sun Jan 27 17:28:06 2019
# Generated by iptables-save v1.6.1 on Sun Jan 27 17:28:06 2019
*raw
:PREROUTING ACCEPT [35790095:7782174533]
:OUTPUT ACCEPT [28073232:9763926501]
:cali-OUTPUT - [0:0]
:cali-PREROUTING - [0:0]
:cali-failsafe-in - [0:0]
:cali-failsafe-out - [0:0]
:cali-from-host-endpoint - [0:0]
:cali-to-host-endpoint - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A cali-OUTPUT -m comment --comment "cali:njdnLwYeGqBJyMxW" -j MARK --set-xmark 0x0/0xf0000
-A cali-OUTPUT -m comment --comment "cali:rz86uTUcEZAfFsh7" -j cali-to-host-endpoint
-A cali-OUTPUT -m comment --comment "cali:pN0F5zD0b8yf9W1Z" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:XFX5xbM8B9qR10JG" -j MARK --set-xmark 0x0/0xf0000
-A cali-PREROUTING -i cali+ -m comment --comment "cali:EWMPb0zVROM-woQp" -j MARK --set-xmark 0x40000/0x40000
-A cali-PREROUTING -m comment --comment "cali:Ek_rsNpunyDlK3sH" -m mark --mark 0x0/0x40000 -j cali-from-host-endpoint
-A cali-PREROUTING -m comment --comment "cali:nM-DzTFPwQbQvtRj" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:QOO5NUOqOSS1_Iw0" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:cwZWoBSwVeIAZmVN" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:7FbNXT91kugE_upR" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:ywE9WYUBEpve70WT" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:l-WQSVBf_lygPR0J" -m multiport --dports 6667 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:k9jPBsnz833bYNtN" -m multiport --sports 53 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:h6bDkHXiHjFdQFvi" -m multiport --sports 67 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:ZxyjJQRmKuKXDHob" -m multiport --sports 179 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:simwjHaxrPmaHOEO" -m multiport --sports 2379 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:hvk-Re2iN6cMDIO-" -m multiport --sports 2380 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:czejYL2nB2RLhrhj" -m multiport --sports 6666 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:Poam7ro8PATnz_3V" -m multiport --sports 6667 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:82hjfji-wChFhAqL" -m multiport --dports 53 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:TNM3RfEjbNr72hgH" -m multiport --dports 67 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:ycxKitIl4u3dK0HR" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:hxjEWyxdkXXkdvut" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:cA_GLtruuvG88KiO" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Sb1hkLYFMrKS6r01" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:UwLSebGONJUG4yG-" -m multiport --dports 6667 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:r23CvAiW0ROtMTyk" -m multiport --sports 22 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:D9jU-Lf4ZjKkTtdD" -m multiport --sports 68 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:5zDpOHUwMrjzLzZl" -m multiport --sports 179 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Jq44rynzFYoWGr4q" -m multiport --sports 2379 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:OiGBCpR5GP0HW_y6" -m multiport --sports 2380 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:iwXWeITN771fTZ2N" -m multiport --sports 6666 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Ot9A94gzys2kTtDj" -m multiport --sports 6667 -j ACCEPT
COMMIT
# Completed on Sun Jan 27 17:28:06 2019
# Generated by iptables-save v1.6.1 on Sun Jan 27 17:28:06 2019
*nat
:PREROUTING ACCEPT [78:4680]
:INPUT ACCEPT [78:4680]
:OUTPUT ACCEPT [13:864]
:POSTROUTING ACCEPT [13:864]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-3DU66DE6VORVEQVD - [0:0]
:KUBE-SEP-ES652KEXJGQ2TLCD - [0:0]
:KUBE-SEP-JSIVJYPPJ7NNPWD6 - [0:0]
:KUBE-SEP-OWSQUZWWYSICGE4Z - [0:0]
:KUBE-SEP-PXATIYR3LRU2YWAO - [0:0]
:KUBE-SEP-RXSSON45FFCILO4Z - [0:0]
:KUBE-SEP-S4MK5EVI7CLHCCS6 - [0:0]
:KUBE-SEP-ZT5TVM6PMFDFQAMO - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-I7TGNIU6JERYTRFQ - [0:0]
:KUBE-SVC-K7J76NXP7AUZVFGS - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-XGLOHA7QRQ3V22RZ - [0:0]
:WEAVE - [0:0]
:cali-OUTPUT - [0:0]
:cali-POSTROUTING - [0:0]
:cali-PREROUTING - [0:0]
:cali-fip-dnat - [0:0]
:cali-fip-snat - [0:0]
:cali-nat-outgoing - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "cali:O3lYWMrLQYEMJtB5" -j cali-POSTROUTING
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING ! -s 10.244.0.0/16 -d 192.169.0.0/24 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/24 -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
-A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 5000 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 5000 -j DNAT --to-destination 172.17.0.2:5000
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-3DU66DE6VORVEQVD -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-3DU66DE6VORVEQVD -p udp -m udp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-ES652KEXJGQ2TLCD -s 10.32.0.4/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-ES652KEXJGQ2TLCD -p udp -m udp -j DNAT --to-destination 10.32.0.4:53
-A KUBE-SEP-JSIVJYPPJ7NNPWD6 -s 192.168.0.15/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-JSIVJYPPJ7NNPWD6 -p tcp -m tcp -j DNAT --to-destination 192.168.0.15:6443
-A KUBE-SEP-OWSQUZWWYSICGE4Z -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-OWSQUZWWYSICGE4Z -p tcp -m tcp -j DNAT --to-destination 10.32.0.3:6379
-A KUBE-SEP-PXATIYR3LRU2YWAO -s 10.32.0.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-PXATIYR3LRU2YWAO -p tcp -m tcp -j DNAT --to-destination 10.32.0.2:44134
-A KUBE-SEP-RXSSON45FFCILO4Z -s 10.32.0.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-RXSSON45FFCILO4Z -p tcp -m tcp -j DNAT --to-destination 10.32.0.2:8443
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -p tcp -m tcp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-ZT5TVM6PMFDFQAMO -s 10.32.0.4/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-ZT5TVM6PMFDFQAMO -p tcp -m tcp -j DNAT --to-destination 10.32.0.4:53
-A KUBE-SERVICES ! -s 10.32.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.32.0.0/16 -d 10.106.107.167/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.106.107.167/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 443 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-SERVICES ! -s 10.32.0.0/16 -d 10.96.253.48/32 -p tcp -m comment --comment "kube-system/tiller-deploy:tiller cluster IP" -m tcp --dport 44134 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.253.48/32 -p tcp -m comment --comment "kube-system/tiller-deploy:tiller cluster IP" -m tcp --dport 44134 -j KUBE-SVC-K7J76NXP7AUZVFGS
-A KUBE-SERVICES ! -s 10.32.0.0/16 -d 10.108.180.242/32 -p tcp -m comment --comment "default/redis-master:redis cluster IP" -m tcp --dport 6379 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.108.180.242/32 -p tcp -m comment --comment "default/redis-master:redis cluster IP" -m tcp --dport 6379 -j KUBE-SVC-I7TGNIU6JERYTRFQ
-A KUBE-SERVICES ! -s 10.32.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.32.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-S4MK5EVI7CLHCCS6
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-ZT5TVM6PMFDFQAMO
-A KUBE-SVC-I7TGNIU6JERYTRFQ -j KUBE-SEP-OWSQUZWWYSICGE4Z
-A KUBE-SVC-K7J76NXP7AUZVFGS -j KUBE-SEP-PXATIYR3LRU2YWAO
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-JSIVJYPPJ7NNPWD6
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-3DU66DE6VORVEQVD
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-ES652KEXJGQ2TLCD
-A KUBE-SVC-XGLOHA7QRQ3V22RZ -j KUBE-SEP-RXSSON45FFCILO4Z
-A WEAVE -s 10.32.0.0/16 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/16 -d 10.32.0.0/16 -j MASQUERADE
-A WEAVE -s 10.32.0.0/16 ! -d 10.32.0.0/16 -j MASQUERADE
-A cali-OUTPUT -m comment --comment "cali:GBTAv2p5CwevEyJm" -j cali-fip-dnat
-A cali-POSTROUTING -m comment --comment "cali:Z-c7XtVd2Bq7s_hA" -j cali-fip-snat
-A cali-POSTROUTING -m comment --comment "cali:nYKhEzDlr11Jccal" -j cali-nat-outgoing
-A cali-POSTROUTING -o tunl0 -m comment --comment "cali:JHlpT-eSqR1TvyYm" -m addrtype ! --src-type LOCAL --limit-iface-out -m addrtype --src-type LOCAL -j MASQUERADE
-A cali-PREROUTING -m comment --comment "cali:r6XmIziWUJsdOK6Z" -j cali-fip-dnat
-A cali-nat-outgoing -m comment --comment "cali:Dw4T8UWPnCLxRJiI" -m set --match-set cali40masq-ipam-pools src -m set ! --match-set cali40all-ipam-pools dst -j MASQUERADE
COMMIT
# Completed on Sun Jan 27 17:28:06 2019
# Generated by iptables-save v1.6.1 on Sun Jan 27 17:28:06 2019
*filter
:INPUT ACCEPT [2072:607047]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1883:713341]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-EGRESS - [0:0]
:WEAVE-NPC-EGRESS-ACCEPT - [0:0]
:WEAVE-NPC-EGRESS-CUSTOM - [0:0]
:WEAVE-NPC-EGRESS-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
:cali-FORWARD - [0:0]
:cali-INPUT - [0:0]
:cali-OUTPUT - [0:0]
:cali-failsafe-in - [0:0]
:cali-failsafe-out - [0:0]
:cali-from-hep-forward - [0:0]
:cali-from-host-endpoint - [0:0]
:cali-from-wl-dispatch - [0:0]
:cali-fw-cali2601a85483a - [0:0]
:cali-fw-calie1bfd47f91b - [0:0]
:cali-fw-calif6ca160a765 - [0:0]
:cali-pri-_be-1GnaHI4zA9ZiNqb - [0:0]
:cali-pri-kns.default - [0:0]
:cali-pri-kns.kube-system - [0:0]
:cali-pri-ksa.default.default - [0:0]
:cali-pro-_be-1GnaHI4zA9ZiNqb - [0:0]
:cali-pro-kns.default - [0:0]
:cali-pro-kns.kube-system - [0:0]
:cali-pro-ksa.default.default - [0:0]
:cali-to-hep-forward - [0:0]
:cali-to-host-endpoint - [0:0]
:cali-to-wl-dispatch - [0:0]
:cali-tw-cali2601a85483a - [0:0]
:cali-tw-calie1bfd47f91b - [0:0]
:cali-tw-calif6ca160a765 - [0:0]
:cali-wl-to-host - [0:0]
-A INPUT -m comment --comment "cali:Cz_u1IQiXIMmKD4c" -j cali-INPUT
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -i weave -j WEAVE-NPC-EGRESS
-A FORWARD -i weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC-EGRESS
-A FORWARD -o weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o cni0 -m comment --comment "flannel subnet" -j ACCEPT
-A FORWARD -i cni0 -m comment --comment "flannel subnet" -j ACCEPT
-A FORWARD -m comment --comment "cali:wUHhoiAYhphO9Mso" -j cali-FORWARD
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.32.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.32.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.106.228.218/32 -p tcp -m comment --comment "default/redis-slave:redis has no endpoints" -m tcp --dport 6379 -j REJECT --reject-with icmp-port-unreachable
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m physdev --physdev-out vethwe-bridge --physdev-is-bridged -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst -m comment --comment "DefaultAllow ingress isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-EGRESS -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC-EGRESS -m physdev --physdev-in vethwe-bridge --physdev-is-bridged -j RETURN
-A WEAVE-NPC-EGRESS -m addrtype --dst-type LOCAL -j RETURN
-A WEAVE-NPC-EGRESS -d 224.0.0.0/4 -j RETURN
-A WEAVE-NPC-EGRESS -m state --state NEW -j WEAVE-NPC-EGRESS-DEFAULT
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j WEAVE-NPC-EGRESS-CUSTOM
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j NFLOG --nflog-group 86
-A WEAVE-NPC-EGRESS -m mark ! --mark 0x40000/0x40000 -j DROP
-A WEAVE-NPC-EGRESS-ACCEPT -j MARK --set-xmark 0x40000/0x40000
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j RETURN
-A cali-FORWARD -m comment --comment "cali:vjrMJCRpqwy5oRoX" -j MARK --set-xmark 0x0/0xe0000
-A cali-FORWARD -m comment --comment "cali:A_sPAO0mcxbT9mOV" -m mark --mark 0x0/0x10000 -j cali-from-hep-forward
-A cali-FORWARD -i cali+ -m comment --comment "cali:8ZoYfO5HKXWbB3pk" -j cali-from-wl-dispatch
-A cali-FORWARD -o cali+ -m comment --comment "cali:jdEuaPBe14V2hutn" -j cali-to-wl-dispatch
-A cali-FORWARD -m comment --comment "cali:12bc6HljsMKsmfr-" -j cali-to-hep-forward
-A cali-FORWARD -m comment --comment "cali:MH9kMp5aNICL-Olv" -m comment --comment "Policy explicitly accepted packet." -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-INPUT -p ipencap -m comment --comment "cali:PajejrV4aFdkZojI" -m comment --comment "Allow IPIP packets from Calico hosts" -m set --match-set cali40all-hosts-net src -m addrtype --dst-type LOCAL -j ACCEPT
-A cali-INPUT -p ipencap -m comment --comment "cali:_wjq-Yrma8Ly1Svo" -m comment --comment "Drop IPIP packets from non-Calico hosts" -j DROP
-A cali-INPUT -i cali+ -m comment --comment "cali:8TZGxLWh_Eiz66wc" -g cali-wl-to-host
-A cali-INPUT -m comment --comment "cali:6McIeIDvPdL6PE1T" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-INPUT -m comment --comment "cali:YGPbrUms7NId8xVa" -j MARK --set-xmark 0x0/0xf0000
-A cali-INPUT -m comment --comment "cali:2gmY7Bg2i0i84Wk_" -j cali-from-host-endpoint
-A cali-INPUT -m comment --comment "cali:q-Vz2ZT9iGE331LL" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-OUTPUT -m comment --comment "cali:Mq1_rAdXXH3YkrzW" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-OUTPUT -o cali+ -m comment --comment "cali:69FkRTJDvD5Vu6Vl" -j RETURN
-A cali-OUTPUT -p ipencap -m comment --comment "cali:AnEsmO6bDZbQntWW" -m comment --comment "Allow IPIP packets to other Calico hosts" -m set --match-set cali40all-hosts-net dst -m addrtype --src-type LOCAL -j ACCEPT
-A cali-OUTPUT -m comment --comment "cali:9e9Uf3GU5tX--Lxy" -j MARK --set-xmark 0x0/0xf0000
-A cali-OUTPUT -m comment --comment "cali:OB2pzPrvQn6PC89t" -j cali-to-host-endpoint
-A cali-OUTPUT -m comment --comment "cali:tvSSMDBWrme3CUqM" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:QOO5NUOqOSS1_Iw0" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:cwZWoBSwVeIAZmVN" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:7FbNXT91kugE_upR" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:ywE9WYUBEpve70WT" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:l-WQSVBf_lygPR0J" -m multiport --dports 6667 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:82hjfji-wChFhAqL" -m multiport --dports 53 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:TNM3RfEjbNr72hgH" -m multiport --dports 67 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:ycxKitIl4u3dK0HR" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:hxjEWyxdkXXkdvut" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:cA_GLtruuvG88KiO" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Sb1hkLYFMrKS6r01" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:UwLSebGONJUG4yG-" -m multiport --dports 6667 -j ACCEPT
-A cali-from-wl-dispatch -i cali2601a85483a -m comment --comment "cali:RoLqISWmAuu6OD8G" -g cali-fw-cali2601a85483a
-A cali-from-wl-dispatch -i calie1bfd47f91b -m comment --comment "cali:ShNCdNQm4XWqrN4F" -g cali-fw-calie1bfd47f91b
-A cali-from-wl-dispatch -i calif6ca160a765 -m comment --comment "cali:pPN6Y-r4TJNJt44Q" -g cali-fw-calif6ca160a765
-A cali-from-wl-dispatch -m comment --comment "cali:SgPjjMB9Y3cIi-tx" -m comment --comment "Unknown interface" -j DROP
-A cali-fw-cali2601a85483a -m comment --comment "cali:zK2PbOkQc3fa3Tml" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-cali2601a85483a -m comment --comment "cali:jp3UOXo68Z3YjEob" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-cali2601a85483a -m comment --comment "cali:L8ImgW8iWT2vygin" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-cali2601a85483a -m comment --comment "cali:F-QDmR5HpiAF9gr3" -j cali-pro-kns.kube-system
-A cali-fw-cali2601a85483a -m comment --comment "cali:HVkcQk6tIw2-zBUc" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali2601a85483a -m comment --comment "cali:WHICXtHXzI2BBxgO" -j cali-pro-_be-1GnaHI4zA9ZiNqb
-A cali-fw-cali2601a85483a -m comment --comment "cali:UKqUgswndn9EVIyh" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-cali2601a85483a -m comment --comment "cali:nJvVpccwOt9kZPO_" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-fw-calie1bfd47f91b -m comment --comment "cali:7zB82qtSVsdPoAlY" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-calie1bfd47f91b -m comment --comment "cali:lDMxG2xTsIvmabyi" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-calie1bfd47f91b -m comment --comment "cali:Au3o7dH0bcuiGxqs" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-calie1bfd47f91b -m comment --comment "cali:GQhZpfuD9KjO8un5" -j cali-pro-kns.default
-A cali-fw-calie1bfd47f91b -m comment --comment "cali:zBAPjzjuIt1cU4oR" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-calie1bfd47f91b -m comment --comment "cali:dA-jU5g6tQxVBTJo" -j cali-pro-ksa.default.default
-A cali-fw-calie1bfd47f91b -m comment --comment "cali:jgG-68kTo7lUAN06" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-calie1bfd47f91b -m comment --comment "cali:6FH_xC-Gui7JDnTx" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-fw-calif6ca160a765 -m comment --comment "cali:s0JJhPcyHAaos2CR" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-calif6ca160a765 -m comment --comment "cali:7ME2aezTlw0Bdrkt" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-calif6ca160a765 -m comment --comment "cali:klBHiy4Z2O_Te42M" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-calif6ca160a765 -m comment --comment "cali:5BtL1_avQAhd6aVB" -j cali-pro-kns.kube-system
-A cali-fw-calif6ca160a765 -m comment --comment "cali:XV1UPwiJm_sUUhus" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-calif6ca160a765 -m comment --comment "cali:rVkcruiRu12tLUrZ" -j cali-pro-_be-1GnaHI4zA9ZiNqb
-A cali-fw-calif6ca160a765 -m comment --comment "cali:2cbmnTUSTJMKH7Rj" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-calif6ca160a765 -m comment --comment "cali:GgaVyuWynIWAh7Mh" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-pri-kns.default -m comment --comment "cali:7Fnh7Pv3_98FtLW7" -j MARK --set-xmark 0x10000/0x10000
-A cali-pri-kns.default -m comment --comment "cali:ZbV6bJXWSRefjK0u" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-pri-kns.kube-system -m comment --comment "cali:zoH5gU6U55FKZxEo" -j MARK --set-xmark 0x10000/0x10000
-A cali-pri-kns.kube-system -m comment --comment "cali:bcGRIJcyOS9dgBiB" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-pro-kns.default -m comment --comment "cali:oLzzje5WExbgfib5" -j MARK --set-xmark 0x10000/0x10000
-A cali-pro-kns.default -m comment --comment "cali:4goskqvxh5xcGw3s" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-pro-kns.kube-system -m comment --comment "cali:-50oJuMfLVO3LkBk" -j MARK --set-xmark 0x10000/0x10000
-A cali-pro-kns.kube-system -m comment --comment "cali:ztVPKv1UYejNzm1g" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-to-wl-dispatch -o cali2601a85483a -m comment --comment "cali:AQquOwA01wAmlygU" -g cali-tw-cali2601a85483a
-A cali-to-wl-dispatch -o calie1bfd47f91b -m comment --comment "cali:iHo7x9DNR3Ll2P3d" -g cali-tw-calie1bfd47f91b
-A cali-to-wl-dispatch -o calif6ca160a765 -m comment --comment "cali:4eFO2yOv_U0xh61z" -g cali-tw-calif6ca160a765
-A cali-to-wl-dispatch -m comment --comment "cali:Cnmt_0VOQgBhErqe" -m comment --comment "Unknown interface" -j DROP
-A cali-tw-cali2601a85483a -m comment --comment "cali:OigkHCZhQaAvPbKY" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-tw-cali2601a85483a -m comment --comment "cali:SjCZMe68el2I6iCL" -m conntrack --ctstate INVALID -j DROP
-A cali-tw-cali2601a85483a -m comment --comment "cali:JqU1miEktCzreeKr" -j MARK --set-xmark 0x0/0x10000
-A cali-tw-cali2601a85483a -m comment --comment "cali:P6FoPPA_4luWYJJM" -j cali-pri-kns.kube-system
-A cali-tw-cali2601a85483a -m comment --comment "cali:AWDJWZ9aoblinpfD" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-tw-cali2601a85483a -m comment --comment "cali:BAHtfMHRnAl81zoA" -j cali-pri-_be-1GnaHI4zA9ZiNqb
-A cali-tw-cali2601a85483a -m comment --comment "cali:Tdflj6Sdg3CXilDc" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-tw-cali2601a85483a -m comment --comment "cali:BB8x6aV0VbRVhMVD" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-tw-calie1bfd47f91b -m comment --comment "cali:6-fPlQcyG1TU8Mvk" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-tw-calie1bfd47f91b -m comment --comment "cali:ZPAIuk-ozZRqVVyx" -m conntrack --ctstate INVALID -j DROP
-A cali-tw-calie1bfd47f91b -m comment --comment "cali:KhrRyXS9jRY12sk-" -j MARK --set-xmark 0x0/0x10000
-A cali-tw-calie1bfd47f91b -m comment --comment "cali:VV9Ag8Gb-Fwko7Cj" -j cali-pri-kns.default
-A cali-tw-calie1bfd47f91b -m comment --comment "cali:Os_7ah76hIrUVvUw" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-tw-calie1bfd47f91b -m comment --comment "cali:c0LN10A22AuzqoGg" -j cali-pri-ksa.default.default
-A cali-tw-calie1bfd47f91b -m comment --comment "cali:XIKZkbjypgU5iL-4" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-tw-calie1bfd47f91b -m comment --comment "cali:2Xq1T_88stcwKzEe" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-tw-calif6ca160a765 -m comment --comment "cali:Nt9Vj7D24RlfR_LT" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-tw-calif6ca160a765 -m comment --comment "cali:3N8I-WBCtpdqx-pd" -m conntrack --ctstate INVALID -j DROP
-A cali-tw-calif6ca160a765 -m comment --comment "cali:vFjnFhwsQ7oROuj2" -j MARK --set-xmark 0x0/0x10000
-A cali-tw-calif6ca160a765 -m comment --comment "cali:6Ow8i5qEHo31MYx-" -j cali-pri-kns.kube-system
-A cali-tw-calif6ca160a765 -m comment --comment "cali:QrHN_-egZvddPKm4" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-tw-calif6ca160a765 -m comment --comment "cali:VHHEdsvSWYuh3bIf" -j cali-pri-_be-1GnaHI4zA9ZiNqb
-A cali-tw-calif6ca160a765 -m comment --comment "cali:fAZLhxxfHhxs2-ha" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-tw-calif6ca160a765 -m comment --comment "cali:Qcmcl6xxProzHDLu" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-wl-to-host -m comment --comment "cali:Ee9Sbo10IpVujdIY" -j cali-from-wl-dispatch
-A cali-wl-to-host -m comment --comment "cali:nSZbcOoG1xPONxb8" -m comment --comment "Configured DefaultEndpointToHostAction" -j ACCEPT
COMMIT
# Completed on Sun Jan 27 17:28:06 2019

on node
/home/weave # iptables-save
# Generated by iptables-save v1.6.1 on Sun Jan 27 17:27:51 2019
*raw
:PREROUTING ACCEPT [12008814:5835415598]
:OUTPUT ACCEPT [19651257:1763124967]
:cali-OUTPUT - [0:0]
:cali-PREROUTING - [0:0]
:cali-failsafe-in - [0:0]
:cali-failsafe-out - [0:0]
:cali-from-host-endpoint - [0:0]
:cali-to-host-endpoint - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A cali-OUTPUT -m comment --comment "cali:njdnLwYeGqBJyMxW" -j MARK --set-xmark 0x0/0xf0000
-A cali-OUTPUT -m comment --comment "cali:rz86uTUcEZAfFsh7" -j cali-to-host-endpoint
-A cali-OUTPUT -m comment --comment "cali:pN0F5zD0b8yf9W1Z" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:XFX5xbM8B9qR10JG" -j MARK --set-xmark 0x0/0xf0000
-A cali-PREROUTING -i cali+ -m comment --comment "cali:EWMPb0zVROM-woQp" -j MARK --set-xmark 0x40000/0x40000
-A cali-PREROUTING -m comment --comment "cali:Ek_rsNpunyDlK3sH" -m mark --mark 0x0/0x40000 -j cali-from-host-endpoint
-A cali-PREROUTING -m comment --comment "cali:nM-DzTFPwQbQvtRj" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:QOO5NUOqOSS1_Iw0" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:cwZWoBSwVeIAZmVN" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:7FbNXT91kugE_upR" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:ywE9WYUBEpve70WT" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:l-WQSVBf_lygPR0J" -m multiport --dports 6667 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:k9jPBsnz833bYNtN" -m multiport --sports 53 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:h6bDkHXiHjFdQFvi" -m multiport --sports 67 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:ZxyjJQRmKuKXDHob" -m multiport --sports 179 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:simwjHaxrPmaHOEO" -m multiport --sports 2379 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:hvk-Re2iN6cMDIO-" -m multiport --sports 2380 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:czejYL2nB2RLhrhj" -m multiport --sports 6666 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:Poam7ro8PATnz_3V" -m multiport --sports 6667 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:82hjfji-wChFhAqL" -m multiport --dports 53 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:TNM3RfEjbNr72hgH" -m multiport --dports 67 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:ycxKitIl4u3dK0HR" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:hxjEWyxdkXXkdvut" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:cA_GLtruuvG88KiO" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Sb1hkLYFMrKS6r01" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:UwLSebGONJUG4yG-" -m multiport --dports 6667 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:r23CvAiW0ROtMTyk" -m multiport --sports 22 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:D9jU-Lf4ZjKkTtdD" -m multiport --sports 68 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:5zDpOHUwMrjzLzZl" -m multiport --sports 179 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Jq44rynzFYoWGr4q" -m multiport --sports 2379 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:OiGBCpR5GP0HW_y6" -m multiport --sports 2380 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:iwXWeITN771fTZ2N" -m multiport --sports 6666 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Ot9A94gzys2kTtDj" -m multiport --sports 6667 -j ACCEPT
COMMIT
# Completed on Sun Jan 27 17:27:51 2019
# Generated by iptables-save v1.6.1 on Sun Jan 27 17:27:51 2019
*mangle
:PREROUTING ACCEPT [349769:26766857]
:INPUT ACCEPT [11844318:5810770547]
:FORWARD ACCEPT [164557:24650007]
:OUTPUT ACCEPT [19651257:1763124967]
:POSTROUTING ACCEPT [19815810:1787774638]
:cali-PREROUTING - [0:0]
:cali-failsafe-in - [0:0]
:cali-from-host-endpoint - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A cali-PREROUTING -m comment --comment "cali:6BJqBjBC7crtA-7-" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:KX7AGNd6rMcDUai6" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-PREROUTING -i cali+ -m comment --comment "cali:i3igoQZv8mRXgdz5" -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:lgSUN0vEjQ4dIHbp" -j cali-from-host-endpoint
-A cali-PREROUTING -m comment --comment "cali:iqLD7MJ2v-mVyDd4" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:QOO5NUOqOSS1_Iw0" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:cwZWoBSwVeIAZmVN" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:7FbNXT91kugE_upR" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:ywE9WYUBEpve70WT" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:l-WQSVBf_lygPR0J" -m multiport --dports 6667 -j ACCEPT
COMMIT
# Completed on Sun Jan 27 17:27:51 2019
# Generated by iptables-save v1.6.1 on Sun Jan 27 17:27:51 2019
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [37:2679]
:POSTROUTING ACCEPT [37:2679]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-3DU66DE6VORVEQVD - [0:0]
:KUBE-SEP-ES652KEXJGQ2TLCD - [0:0]
:KUBE-SEP-JSIVJYPPJ7NNPWD6 - [0:0]
:KUBE-SEP-OWSQUZWWYSICGE4Z - [0:0]
:KUBE-SEP-PXATIYR3LRU2YWAO - [0:0]
:KUBE-SEP-RXSSON45FFCILO4Z - [0:0]
:KUBE-SEP-S4MK5EVI7CLHCCS6 - [0:0]
:KUBE-SEP-ZT5TVM6PMFDFQAMO - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-I7TGNIU6JERYTRFQ - [0:0]
:KUBE-SVC-K7J76NXP7AUZVFGS - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-XGLOHA7QRQ3V22RZ - [0:0]
:WEAVE - [0:0]
:cali-OUTPUT - [0:0]
:cali-POSTROUTING - [0:0]
:cali-PREROUTING - [0:0]
:cali-fip-dnat - [0:0]
:cali-fip-snat - [0:0]
:cali-nat-outgoing - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "cali:O3lYWMrLQYEMJtB5" -j cali-POSTROUTING
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-3DU66DE6VORVEQVD -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-3DU66DE6VORVEQVD -p udp -m udp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-ES652KEXJGQ2TLCD -s 10.32.0.4/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-ES652KEXJGQ2TLCD -p udp -m udp -j DNAT --to-destination 10.32.0.4:53
-A KUBE-SEP-JSIVJYPPJ7NNPWD6 -s 192.168.0.15/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-JSIVJYPPJ7NNPWD6 -p tcp -m tcp -j DNAT --to-destination 192.168.0.15:6443
-A KUBE-SEP-OWSQUZWWYSICGE4Z -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-OWSQUZWWYSICGE4Z -p tcp -m tcp -j DNAT --to-destination 10.32.0.3:6379
-A KUBE-SEP-PXATIYR3LRU2YWAO -s 10.32.0.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-PXATIYR3LRU2YWAO -p tcp -m tcp -j DNAT --to-destination 10.32.0.2:44134
-A KUBE-SEP-RXSSON45FFCILO4Z -s 10.32.0.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-RXSSON45FFCILO4Z -p tcp -m tcp -j DNAT --to-destination 10.32.0.2:8443
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -p tcp -m tcp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-ZT5TVM6PMFDFQAMO -s 10.32.0.4/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-ZT5TVM6PMFDFQAMO -p tcp -m tcp -j DNAT --to-destination 10.32.0.4:53
-A KUBE-SERVICES ! -s 10.32.0.0/16 -d 10.96.253.48/32 -p tcp -m comment --comment "kube-system/tiller-deploy:tiller cluster IP" -m tcp --dport 44134 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.253.48/32 -p tcp -m comment --comment "kube-system/tiller-deploy:tiller cluster IP" -m tcp --dport 44134 -j KUBE-SVC-K7J76NXP7AUZVFGS
-A KUBE-SERVICES ! -s 10.32.0.0/16 -d 10.108.180.242/32 -p tcp -m comment --comment "default/redis-master:redis cluster IP" -m tcp --dport 6379 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.108.180.242/32 -p tcp -m comment --comment "default/redis-master:redis cluster IP" -m tcp --dport 6379 -j KUBE-SVC-I7TGNIU6JERYTRFQ
-A KUBE-SERVICES ! -s 10.32.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.32.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.32.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.32.0.0/16 -d 10.106.107.167/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.106.107.167/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 443 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-S4MK5EVI7CLHCCS6
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-ZT5TVM6PMFDFQAMO
-A KUBE-SVC-I7TGNIU6JERYTRFQ -j KUBE-SEP-OWSQUZWWYSICGE4Z
-A KUBE-SVC-K7J76NXP7AUZVFGS -j KUBE-SEP-PXATIYR3LRU2YWAO
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-JSIVJYPPJ7NNPWD6
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-3DU66DE6VORVEQVD
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-ES652KEXJGQ2TLCD
-A KUBE-SVC-XGLOHA7QRQ3V22RZ -j KUBE-SEP-RXSSON45FFCILO4Z
-A WEAVE -s 10.32.0.0/16 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/16 -d 10.32.0.0/16 -j MASQUERADE
-A WEAVE -s 10.32.0.0/16 ! -d 10.32.0.0/16 -j MASQUERADE
-A cali-OUTPUT -m comment --comment "cali:GBTAv2p5CwevEyJm" -j cali-fip-dnat
-A cali-POSTROUTING -m comment --comment "cali:Z-c7XtVd2Bq7s_hA" -j cali-fip-snat
-A cali-POSTROUTING -m comment --comment "cali:nYKhEzDlr11Jccal" -j cali-nat-outgoing
-A cali-POSTROUTING -o tunl0 -m comment --comment "cali:JHlpT-eSqR1TvyYm" -m addrtype ! --src-type LOCAL --limit-iface-out -m addrtype --src-type LOCAL -j MASQUERADE
-A cali-PREROUTING -m comment --comment "cali:r6XmIziWUJsdOK6Z" -j cali-fip-dnat
-A cali-nat-outgoing -m comment --comment "cali:Dw4T8UWPnCLxRJiI" -m set --match-set cali40masq-ipam-pools src -m set ! --match-set cali40all-ipam-pools dst -j MASQUERADE
COMMIT
# Completed on Sun Jan 27 17:27:51 2019
# Generated by iptables-save v1.6.1 on Sun Jan 27 17:27:51 2019
*filter
:INPUT ACCEPT [480:106210]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [467:66360]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-EGRESS - [0:0]
:WEAVE-NPC-EGRESS-ACCEPT - [0:0]
:WEAVE-NPC-EGRESS-CUSTOM - [0:0]
:WEAVE-NPC-EGRESS-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
:cali-FORWARD - [0:0]
:cali-INPUT - [0:0]
:cali-OUTPUT - [0:0]
:cali-failsafe-in - [0:0]
:cali-failsafe-out - [0:0]
:cali-from-hep-forward - [0:0]
:cali-from-host-endpoint - [0:0]
:cali-from-wl-dispatch - [0:0]
:cali-fw-calid69f98e61f3 - [0:0]
:cali-fw-calie57963c92c1 - [0:0]
:cali-pri-_Iv5DLIoH8BjXD9FGvw - [0:0]
:cali-pri-kns.default - [0:0]
:cali-pri-kns.kube-system - [0:0]
:cali-pri-ksa.default.default - [0:0]
:cali-pro-_Iv5DLIoH8BjXD9FGvw - [0:0]
:cali-pro-kns.default - [0:0]
:cali-pro-kns.kube-system - [0:0]
:cali-pro-ksa.default.default - [0:0]
:cali-to-hep-forward - [0:0]
:cali-to-host-endpoint - [0:0]
:cali-to-wl-dispatch - [0:0]
:cali-tw-calid69f98e61f3 - [0:0]
:cali-tw-calie57963c92c1 - [0:0]
:cali-wl-to-host - [0:0]
-A INPUT -m comment --comment "cali:Cz_u1IQiXIMmKD4c" -j cali-INPUT
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -i weave -j WEAVE-NPC-EGRESS
-A FORWARD -i weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC-EGRESS
-A FORWARD -o weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -m comment --comment "cali:wUHhoiAYhphO9Mso" -j cali-FORWARD
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.32.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.32.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.106.228.218/32 -p tcp -m comment --comment "default/redis-slave:redis has no endpoints" -m tcp --dport 6379 -j REJECT --reject-with icmp-port-unreachable
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m physdev --physdev-out vethwe-bridge --physdev-is-bridged -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst -m comment --comment "DefaultAllow ingress isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-EGRESS -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC-EGRESS -m physdev --physdev-in vethwe-bridge --physdev-is-bridged -j RETURN
-A WEAVE-NPC-EGRESS -m addrtype --dst-type LOCAL -j RETURN
-A WEAVE-NPC-EGRESS -d 224.0.0.0/4 -j RETURN
-A WEAVE-NPC-EGRESS -m state --state NEW -j WEAVE-NPC-EGRESS-DEFAULT
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j WEAVE-NPC-EGRESS-CUSTOM
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j NFLOG --nflog-group 86
-A WEAVE-NPC-EGRESS -m mark ! --mark 0x40000/0x40000 -j DROP
-A WEAVE-NPC-EGRESS-ACCEPT -j MARK --set-xmark 0x40000/0x40000
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j RETURN
-A cali-FORWARD -m comment --comment "cali:vjrMJCRpqwy5oRoX" -j MARK --set-xmark 0x0/0xe0000
-A cali-FORWARD -m comment --comment "cali:A_sPAO0mcxbT9mOV" -m mark --mark 0x0/0x10000 -j cali-from-hep-forward
-A cali-FORWARD -i cali+ -m comment --comment "cali:8ZoYfO5HKXWbB3pk" -j cali-from-wl-dispatch
-A cali-FORWARD -o cali+ -m comment --comment "cali:jdEuaPBe14V2hutn" -j cali-to-wl-dispatch
-A cali-FORWARD -m comment --comment "cali:12bc6HljsMKsmfr-" -j cali-to-hep-forward
-A cali-FORWARD -m comment --comment "cali:MH9kMp5aNICL-Olv" -m comment --comment "Policy explicitly accepted packet." -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-INPUT -m comment --comment "cali:msRIDfJRWnYwzW4g" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-INPUT -p ipencap -m comment --comment "cali:1IRlRis1-pHsGnX5" -m comment --comment "Allow IPIP packets from Calico hosts" -m set --match-set cali40all-hosts-net src -m addrtype --dst-type LOCAL -j ACCEPT
-A cali-INPUT -p ipencap -m comment --comment "cali:jX63A0VGotRJWnUL" -m comment --comment "Drop IPIP packets from non-Calico hosts" -j DROP
-A cali-INPUT -i cali+ -m comment --comment "cali:Dit8xicL3zTIYYlp" -g cali-wl-to-host
-A cali-INPUT -m comment --comment "cali:LCGWUV2ju3tJmfW0" -j MARK --set-xmark 0x0/0xf0000
-A cali-INPUT -m comment --comment "cali:x-gEznubq2huN2Fo" -j cali-from-host-endpoint
-A cali-INPUT -m comment --comment "cali:m27NaAhoKHLs1plD" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-OUTPUT -m comment --comment "cali:Mq1_rAdXXH3YkrzW" -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-OUTPUT -o cali+ -m comment --comment "cali:69FkRTJDvD5Vu6Vl" -j RETURN
-A cali-OUTPUT -p ipencap -m comment --comment "cali:AnEsmO6bDZbQntWW" -m comment --comment "Allow IPIP packets to other Calico hosts" -m set --match-set cali40all-hosts-net dst -m addrtype --src-type LOCAL -j ACCEPT
-A cali-OUTPUT -m comment --comment "cali:9e9Uf3GU5tX--Lxy" -j MARK --set-xmark 0x0/0xf0000
-A cali-OUTPUT -m comment --comment "cali:OB2pzPrvQn6PC89t" -j cali-to-host-endpoint
-A cali-OUTPUT -m comment --comment "cali:tvSSMDBWrme3CUqM" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x10000/0x10000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:QOO5NUOqOSS1_Iw0" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:cwZWoBSwVeIAZmVN" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:7FbNXT91kugE_upR" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:ywE9WYUBEpve70WT" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:l-WQSVBf_lygPR0J" -m multiport --dports 6667 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:82hjfji-wChFhAqL" -m multiport --dports 53 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:TNM3RfEjbNr72hgH" -m multiport --dports 67 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:ycxKitIl4u3dK0HR" -m multiport --dports 179 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:hxjEWyxdkXXkdvut" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:cA_GLtruuvG88KiO" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Sb1hkLYFMrKS6r01" -m multiport --dports 6666 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:UwLSebGONJUG4yG-" -m multiport --dports 6667 -j ACCEPT
-A cali-from-wl-dispatch -i calid69f98e61f3 -m comment --comment "cali:aATGT4GuOz3DDPBf" -g cali-fw-calid69f98e61f3
-A cali-from-wl-dispatch -i calie57963c92c1 -m comment --comment "cali:iy_l2h5QoVD0FD0o" -g cali-fw-calie57963c92c1
-A cali-from-wl-dispatch -m comment --comment "cali:5ya_yU9ODOJn2XH1" -m comment --comment "Unknown interface" -j DROP
-A cali-fw-calid69f98e61f3 -m comment --comment "cali:M1FgTabkFTq_KecF" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-calid69f98e61f3 -m comment --comment "cali:Kj669ngA_CFfEwcY" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-calid69f98e61f3 -m comment --comment "cali:utDc9HGgTXGJ80A5" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-calid69f98e61f3 -m comment --comment "cali:E5TgKdB3EpWXsxLr" -j cali-pro-kns.default
-A cali-fw-calid69f98e61f3 -m comment --comment "cali:oPIvg6Mlh5NjhyUH" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-calid69f98e61f3 -m comment --comment "cali:pxLMlh7Na341DWmq" -j cali-pro-ksa.default.default
-A cali-fw-calid69f98e61f3 -m comment --comment "cali:dA6_T48iCoDRe_y_" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-calid69f98e61f3 -m comment --comment "cali:m7Xb859TT2j2UyYM" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-fw-calie57963c92c1 -m comment --comment "cali:x0y5EsvuSPAEWddi" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-calie57963c92c1 -m comment --comment "cali:ggAEMw8_DWnE7BnC" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-calie57963c92c1 -m comment --comment "cali:MTKpUJQqvUClcTpE" -j MARK --set-xmark 0x0/0x10000
-A cali-fw-calie57963c92c1 -m comment --comment "cali:gV4kRA1sFRttb5Eb" -j cali-pro-kns.kube-system
-A cali-fw-calie57963c92c1 -m comment --comment "cali:TTh3gVb5PQViSR9y" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-calie57963c92c1 -m comment --comment "cali:G-Ou0IRbZd8LuZWZ" -j cali-pro-_Iv5DLIoH8BjXD9FGvw
-A cali-fw-calie57963c92c1 -m comment --comment "cali:fE0KqFIMTpP5u2H-" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-fw-calie57963c92c1 -m comment --comment "cali:VtMVzlOvm1nGzWL0" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-pri-kns.default -m comment --comment "cali:7Fnh7Pv3_98FtLW7" -j MARK --set-xmark 0x10000/0x10000
-A cali-pri-kns.default -m comment --comment "cali:ZbV6bJXWSRefjK0u" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-pri-kns.kube-system -m comment --comment "cali:zoH5gU6U55FKZxEo" -j MARK --set-xmark 0x10000/0x10000
-A cali-pri-kns.kube-system -m comment --comment "cali:bcGRIJcyOS9dgBiB" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-pro-kns.default -m comment --comment "cali:oLzzje5WExbgfib5" -j MARK --set-xmark 0x10000/0x10000
-A cali-pro-kns.default -m comment --comment "cali:4goskqvxh5xcGw3s" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-pro-kns.kube-system -m comment --comment "cali:-50oJuMfLVO3LkBk" -j MARK --set-xmark 0x10000/0x10000
-A cali-pro-kns.kube-system -m comment --comment "cali:ztVPKv1UYejNzm1g" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-to-wl-dispatch -o calid69f98e61f3 -m comment --comment "cali:7k75KtchKSdLlSe-" -g cali-tw-calid69f98e61f3
-A cali-to-wl-dispatch -o calie57963c92c1 -m comment --comment "cali:U8q68r9SudrbRSyr" -g cali-tw-calie57963c92c1
-A cali-to-wl-dispatch -m comment --comment "cali:ionfrxLYMgwD1t0q" -m comment --comment "Unknown interface" -j DROP
-A cali-tw-calid69f98e61f3 -m comment --comment "cali:kXLkOpfBbP2_5se0" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-tw-calid69f98e61f3 -m comment --comment "cali:OccCGj9evk8sGicD" -m conntrack --ctstate INVALID -j DROP
-A cali-tw-calid69f98e61f3 -m comment --comment "cali:kFA-M9MWFdin3VJe" -j MARK --set-xmark 0x0/0x10000
-A cali-tw-calid69f98e61f3 -m comment --comment "cali:XGi0ObKQONFRUYee" -j cali-pri-kns.default
-A cali-tw-calid69f98e61f3 -m comment --comment "cali:sGgJejUV3LWr-fPv" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-tw-calid69f98e61f3 -m comment --comment "cali:8R1vKpBJ_vwsyE6G" -j cali-pri-ksa.default.default
-A cali-tw-calid69f98e61f3 -m comment --comment "cali:r3CH53nm3sbs8tQL" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-tw-calid69f98e61f3 -m comment --comment "cali:rrDz9Nl6mOdt5nCs" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-tw-calie57963c92c1 -m comment --comment "cali:fynk1pKINesovEfP" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-tw-calie57963c92c1 -m comment --comment "cali:7QnW7LSOWxyU4Ce9" -m conntrack --ctstate INVALID -j DROP
-A cali-tw-calie57963c92c1 -m comment --comment "cali:jWYXokTZZKkpqj1t" -j MARK --set-xmark 0x0/0x10000
-A cali-tw-calie57963c92c1 -m comment --comment "cali:LUviVXgSjoOTD0V2" -j cali-pri-kns.kube-system
-A cali-tw-calie57963c92c1 -m comment --comment "cali:AQMNxM8IEEdnUZr-" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-tw-calie57963c92c1 -m comment --comment "cali:UUrJTd2a7xnFrO7C" -j cali-pri-_Iv5DLIoH8BjXD9FGvw
-A cali-tw-calie57963c92c1 -m comment --comment "cali:I7muC_tsiT3IqlI2" -m comment --comment "Return if profile accepted" -m mark --mark 0x10000/0x10000 -j RETURN
-A cali-tw-calie57963c92c1 -m comment --comment "cali:_igL_S4G30jZpuv7" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-wl-to-host -m comment --comment "cali:Ee9Sbo10IpVujdIY" -j cali-from-wl-dispatch
-A cali-wl-to-host -m comment --comment "cali:nSZbcOoG1xPONxb8" -m comment --comment "Configured DefaultEndpointToHostAction" -j ACCEPT
COMMIT
# Completed on Sun Jan 27 17:27:51 2019
bboreham commented 5 years ago

Weave Net takes the addresses to connect to from Kubernetes. It does not specify the interface to connect from. That will be determined by the host routing rules.

However you seem to have a deeper problem:

IP allocation was seeded by different peers (received: [4a:ba:4c:fa:5c:b3(tracker)], ours: [6a:6c:c9:b5:18:8a(getter-test)]), retry: 2019-01-27 17:00:44.356836268 +0000 UTC m=+6152.062472741 

Under this condition you will never have a fully-connected network. Did you start those nodes at different times, or try to disconnect from one network and connect to another?

You can try to recover as described here
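A rough sketch of what that recovery typically looks like on a kubeadm cluster (the file path matches the one discussed below; the label selector and pod name are placeholders based on the stock weave-net DaemonSet, so adapt them to your own manifest):

# 1. on the node whose IPAM state is out of sync, remove Weave Net's persisted state
rm -f /var/lib/weave/weave-netdata.db

# 2. delete the weave-net pod on that node so the DaemonSet recreates it
#    (find it with: kubectl -n kube-system get pods -l name=weave-net -o wide)
kubectl -n kube-system delete pod weave-net-xxxxx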

yakneens commented 5 years ago

Thanks for your response @bboreham. I followed the instructions at the link you sent: after deleting /var/lib/weave/weave-netdata.db on the master node and deleting the weave pod on that node, the master was able to peer with the rest of the nodes. I still have some concerns that weave is confused about which IP address to use (or is at least inconsistent). After the recovery process I see the following on the node that had ended up orphaned:

[root@tracker weave]# weave status connections
-> 192.168.0.72:6783     established fastdp 6a:6c:c9:b5:18:8a(getter-test) mtu=1376
-> 192.168.0.61:6783     established fastdp e2:fd:43:23:90:d2(worker-3) mtu=1376
-> 10.35.104.5:6783      failed      cannot connect to ourself, retry: never 

whereas on another node:

[root@worker-3 var]# weave status connections
<- 192.168.0.15:33471    established fastdp 4a:ba:4c:fa:5c:b3(tracker) mtu=1376
-> 192.168.0.72:6783     established fastdp 6a:6c:c9:b5:18:8a(getter-test) mtu=1376
-> 10.35.104.5:6783      failed      Multiple connections to 4a:ba:4c:fa:5c:b3(tracker) added to e2:fd:43:23:90:d2(worker-3), retry: 2019-01-28 09:20:01.186284464 +0000 UTC m=+64908.891920940 
-> 192.168.0.61:6783     failed      cannot connect to ourself, retry: never 

So, to me it looks like on the first node weave is still trying to talk to itself at the IP address 10.35.104.5, which is on the wrong interface (eth1 on that node, rather than the correct eth0, where the IP is 192.168.0.15). With that in mind, I don't quite understand your first comment about how weave figures out which IP address to use for communication. Would you mind clarifying a bit how this works? Assume the VMs have multiple networks (as in my example), and only one of them is set up for the VMs to communicate with each other. In my scenario that is the 192.168.0.0/24 network, which ends up on eth0 of each VM (192.168.0.15 on the master). The other networks exist to expose NFS shares to the VMs and are not set up for VM-to-VM communication. I installed weave following the Kubernetes guide at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/. The only custom settings I applied were the following kubeadm config:

[root@tracker initial]# cat kubeadm_config_map.yaml 
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
apiServer:
  extraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
controllerManager:
  extraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
networking:
  dnsDomain: cluster.local
  podSubnet: 10.32.0.0/16 

and correspondingly I installed weave using

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=10.32.0.0/16"

How does weave in this case decide which IP address the node should advertise itself on for peering? It looks like for my master node it chose an IP on eth1, whereas on the other nodes it chose IPs on eth0, leading to the issue I described.

Thank you for any insights.

bboreham commented 5 years ago

How does weave in this case decide which IP address the node should advertise itself on for peering?

There is no "advertise" as such; it happens indirectly.

  1. The Weave Net daemon listens on the address specified by -host on the command-line (default blank, meaning all addresses)
  2. Weave Net Kubernetes integration tells each node to connect out to other nodes using addresses taken from Kubernetes: it will prefer an address marked as private. The source address for these connections is decided by Linux, using its routing table.
  3. Weave Net nodes gossip to each other about existing connections, and all nodes will speculatively try all addresses that worked for someone else. This can be disabled with the CLI flag --no-discovery.

It is expected that some addresses will turn out to map to the same node as other addresses, also expected that one node cannot figure this out (e.g. because of NAT) without trying them.

weave status peers will print the complete map as known by one node.
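As a concrete illustration of point 3, peer discovery can be switched off at install time. The env.EXTRA_ARGS parameter below is an assumption about how the weave-kube launcher forwards extra router flags (only the --no-discovery flag itself is confirmed above), so check it against the current manifest documentation before relying on it:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=10.32.0.0/16&env.EXTRA_ARGS=--no-discovery"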

yakneens commented 5 years ago

Thanks, but point number 2 still seems a bit opaque to me. Is there a particular API call that weave makes to k8s to get the list of IP addresses to connect out to? Is there any way to control which IP addresses end up on that list? For instance, I just added another node to my cluster and things seem to have gotten messy again.

[root@tracker redis]# weave status peers
4a:ba:4c:fa:5c:b3(tracker)
   -> 192.168.0.72:6783     6a:6c:c9:b5:18:8a(getter-test)        established
   -> 192.168.0.61:6783     e2:fd:43:23:90:d2(worker-3)           established
   <- 10.35.104.51:37637    ee:4f:3a:09:e0:ea(worker-2)           established
6a:6c:c9:b5:18:8a(getter-test)
   <- 192.168.0.61:49580    e2:fd:43:23:90:d2(worker-3)           established
   <- 192.168.0.15:42008    4a:ba:4c:fa:5c:b3(tracker)            established
   <- 192.168.0.10:58046    ee:4f:3a:09:e0:ea(worker-2)           established
e2:fd:43:23:90:d2(worker-3)
   -> 192.168.0.10:6783     ee:4f:3a:09:e0:ea(worker-2)           established
   <- 192.168.0.15:33471    4a:ba:4c:fa:5c:b3(tracker)            established
   -> 192.168.0.72:6783     6a:6c:c9:b5:18:8a(getter-test)        established
ee:4f:3a:09:e0:ea(worker-2)
   -> 10.35.104.5:6783      4a:ba:4c:fa:5c:b3(tracker)            established
   -> 192.168.0.72:6783     6a:6c:c9:b5:18:8a(getter-test)        established
   <- 192.168.0.61:33543    e2:fd:43:23:90:d2(worker-3)           established

My interpretation of this is that the tracker node thinks that worker-2 (the new node) is on 10.35.104.51 whereas worker-3 thinks worker-2 is on 192.168.0.10. worker-2 thinks tracker is on 10.35.104.5 whereas worker-3 thinks tracker is on 192.168.0.15.

If I look at the routing tables I see the following:

[root@tracker redis]# ip route
default via 192.168.0.1 dev eth0 
10.32.0.0/16 dev weave proto kernel scope link src 10.32.64.0 
10.35.104.0/24 dev eth1 proto kernel scope link src 10.35.104.5 
10.35.105.0/24 dev eth2 proto kernel scope link src 10.35.105.10 
10.35.110.0/24 dev eth3 proto kernel scope link src 10.35.110.13 
169.254.169.254 via 192.168.0.1 dev eth0 proto static 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.15 
blackhole 192.169.45.0/26 proto bird 

[root@worker-2 lib]# ip route
default via 192.168.0.1 dev eth0 
10.32.0.0/16 dev weave proto kernel scope link src 10.32.160.0 
10.35.104.0/24 dev eth1 proto kernel scope link src 10.35.104.51 
10.35.105.0/24 dev eth2 proto kernel scope link src 10.35.105.45 
10.35.110.0/24 dev eth3 proto kernel scope link src 10.35.110.52 
169.254.169.254 via 192.168.0.1 dev eth0 proto static 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.10 
blackhole 192.169.2.0/24 proto bird 
192.169.3.0/24 via 10.35.110.62 dev eth3 proto bird 

Doesn't this mean that everything should try to route through the 192.168.0.0/24 network? I don't quite understand how to make weave use the network I want, or stop it from using the networks I don't want. Thanks again.


bboreham commented 5 years ago

is there a particular API call that weave makes to k8s to get a list of IP addresses to connect out to?

Yes, it's the same as what you should see from kubectl get nodes -o wide
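To see every address Kubernetes holds for each node (rather than just the single INTERNAL-IP column that -o wide prints), a custom-columns query is a quick inspection aid; it is not part of Weave Net itself:

kubectl get nodes -o custom-columns=NAME:.metadata.name,ADDRESSES:.status.addresses[*].address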

Doesn't this mean that everything should try to route through the 192.168.0.0/24 network?

Your routing tables give many options depending on the destination; a different src address will be used according to the rules.
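One way to check which source address (and therefore which interface) the kernel will pick for a given peer is ip route get; the peer addresses below are simply taken from the outputs earlier in this thread:

# which source address would tracker use to reach worker-3's 192.168.0.x address?
ip route get 192.168.0.61
# and to reach worker-2's address on the 10.35.104.0/24 network?
ip route get 10.35.104.51
# the 'src' field in each reply is the address the outgoing connection will carry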

yakneens commented 5 years ago

So kubectl get nodes -o wide returns:

[root@tracker redis]# kubectl get nodes -o wide
NAME          STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
getter-test   Ready    <none>   20h   v1.13.2   192.168.0.72   <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64   docker://18.9.1
tracker       Ready    master   21h   v1.13.2   192.168.0.15   <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64   docker://18.9.1
worker-2      Ready    <none>   93m   v1.13.2   192.168.0.10   <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64   docker://18.9.1
worker-3      Ready    <none>   20h   v1.13.2   192.168.0.61   <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64   docker://18.9.1

All IPs are on the 192.168.0.0/24 subnet, yet weave somehow ends up with addresses from the 10.35.104.0/24 subnet on some nodes.

bboreham commented 5 years ago

Actually we can see it going wrong on startup in the logs you posted:

master:

INFO: 2019/01/27 13:47:19.143236 Launch detected - using supplied peer list: [10.35.104.5]

worker-3:

INFO: 2019/01/27 15:18:13.573258 Launch detected - using supplied peer list: [192.168.0.72 10.35.104.5 192.168.0.61]

This "supplied peer list" comes from the call in the kube-utils program I pointed at earlier.

If you restart those Weave Net pods (you can just kubectl delete one and it will be recreated), do they have the same "supplied peer list"?

If so, can you do kubectl get nodes -o yaml and see if there is anything in the detail that differs from the -o wide version?

You can double-check the result - kubectl exec into a Weave Net pod and run /home/weave/kube-utils
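Spelled out, those checks look roughly like this; the pod name is a placeholder for whatever the label query returns, and the label and container names assume the stock weave-net DaemonSet:

# find the weave-net pod on the node you care about
kubectl -n kube-system get pods -l name=weave-net -o wide

# what peer list did it receive at launch?
kubectl -n kube-system logs weave-net-xxxxx -c weave | grep "supplied peer list"

# what would kube-utils hand out right now?
kubectl -n kube-system exec weave-net-xxxxx -c weave -- /home/weave/kube-utils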

yakneens commented 5 years ago

So, indeed, the yaml version shows that the master node lists several IP addresses, while the other nodes only show one. Where is this info populated from?

- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2019-01-27T13:42:40Z"
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: "16"
      beta.kubernetes.io/os: linux
      failure-domain.beta.kubernetes.io/zone: AZ_1
      kubernetes.io/hostname: tracker
      node-role.kubernetes.io/master: ""
    name: tracker
    resourceVersion: "128914"
    selfLink: /api/v1/nodes/tracker
    uid: 6a623f7a-2239-11e9-a615-fa163e9a84ea
  spec:
    podCIDR: 10.32.0.0/24
    providerID: openstack:///9abff20f-9775-44fd-8778-ce5d66c48082
  status:
    addresses:
    - address: 192.168.0.15
      type: InternalIP
    - address: 10.35.105.10
      type: InternalIP
    - address: 10.35.110.13
      type: InternalIP
    - address: 10.35.104.5
      type: InternalIP
    allocatable:
      cpu: "8"
      ephemeral-storage: "19316953466"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 32678824Ki
      pods: "110"
    capacity:
      cpu: "8"
      ephemeral-storage: 20960236Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 32781224Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: "2019-01-28T08:59:17Z"
      lastTransitionTime: "2019-01-28T08:59:17Z"
      message: Weave pod has set this
      reason: WeaveIsUp
      status: "False"
      type: NetworkUnavailable
    - lastHeartbeatTime: "2019-01-28T12:12:59Z"
      lastTransitionTime: "2019-01-28T11:50:08Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2019-01-28T12:12:59Z"
      lastTransitionTime: "2019-01-28T11:50:08Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: "2019-01-28T12:12:59Z"
      lastTransitionTime: "2019-01-28T11:50:08Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2019-01-28T12:12:59Z"
      lastTransitionTime: "2019-01-28T11:50:08Z"
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    - lastHeartbeatTime: "2019-01-27T13:42:40Z"
      lastTransitionTime: "2019-01-27T13:43:57Z"
      message: Kubelet never posted node status.
      reason: NodeStatusNeverUpdated
      status: Unknown
      type: OutOfDisk
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
...
    nodeInfo:
      architecture: amd64
      bootID: f4c58653-d452-4101-94eb-2d490da88e09
      containerRuntimeVersion: docker://18.9.1
      kernelVersion: 3.10.0-693.5.2.el7.x86_64
      kubeProxyVersion: v1.13.2
      kubeletVersion: v1.13.2
      machineID: ea4bb35ac7b55a78c3c4fb8cdb4b7f65
      operatingSystem: linux
      osImage: CentOS Linux 7 (Core)
      systemUUID: 9ABFF20F-9775-44FD-8778-CE5D66C48082

kube-utils on master

/home/weave #  /home/weave/kube-utils
192.168.0.43
192.168.0.72
192.168.0.9
10.35.104.5
192.168.0.10
192.168.0.61

That list stays the same after restarting the pod. But weave status peers seems to return the correct addresses everywhere now (compare to the output in my previous message).

[root@worker-3 var]# weave status peers
4a:ba:4c:fa:5c:b3(tracker)
   -> 192.168.0.61:6783     e2:fd:43:23:90:d2(worker-3)           established
   -> 192.168.0.72:6783     6a:6c:c9:b5:18:8a(getter-test)        established
   -> 192.168.0.9:6783      32:16:2e:7f:ea:3a(salt-master)        established
   -> 192.168.0.10:6783     ee:4f:3a:09:e0:ea(worker-2)           established
6a:6c:c9:b5:18:8a(getter-test)
   <- 192.168.0.10:58046    ee:4f:3a:09:e0:ea(worker-2)           established
   -> 192.168.0.9:6783      32:16:2e:7f:ea:3a(salt-master)        established
   <- 192.168.0.61:49580    e2:fd:43:23:90:d2(worker-3)           established
   <- 192.168.0.15:48676    4a:ba:4c:fa:5c:b3(tracker)            established
ee:4f:3a:09:e0:ea(worker-2)
   <- 192.168.0.9:60403     32:16:2e:7f:ea:3a(salt-master)        established
   <- 192.168.0.15:54916    4a:ba:4c:fa:5c:b3(tracker)            established
   -> 192.168.0.72:6783     6a:6c:c9:b5:18:8a(getter-test)        established
   <- 192.168.0.61:33543    e2:fd:43:23:90:d2(worker-3)           established
32:16:2e:7f:ea:3a(salt-master)
   -> 192.168.0.61:6783     e2:fd:43:23:90:d2(worker-3)           established
   -> 192.168.0.10:6783     ee:4f:3a:09:e0:ea(worker-2)           established
   <- 192.168.0.72:57506    6a:6c:c9:b5:18:8a(getter-test)        established
   <- 192.168.0.15:50695    4a:ba:4c:fa:5c:b3(tracker)            established
e2:fd:43:23:90:d2(worker-3)
   <- 192.168.0.15:49219    4a:ba:4c:fa:5c:b3(tracker)            established
   -> 192.168.0.72:6783     6a:6c:c9:b5:18:8a(getter-test)        established
   -> 192.168.0.10:6783     ee:4f:3a:09:e0:ea(worker-2)           established
   <- 192.168.0.9:35252     32:16:2e:7f:ea:3a(salt-master)        established
bboreham commented 5 years ago

Where is this info populated from?

Not something I know off the top of my head. Looking at https://github.com/kubernetes/kubernetes/issues/44702, it seems it may come from the cloud provider (OpenStack?) or from the kubelet command line (--node-ip).

Maybe you could ask in the Kubernetes Slack?
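For what it is worth, if the extra addresses do turn out to come from the kubelet side rather than the cloud provider, the usual knob is the --node-ip flag mentioned above. A sketch for a kubeadm install on CentOS 7, assuming the packaged /etc/sysconfig/kubelet drop-in is in use (the address is per node, and this overwrites any existing KUBELET_EXTRA_ARGS):

# on the master, for example, pin the address kubelet reports to its eth0 IP
echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.0.15' | sudo tee /etc/sysconfig/kubelet
sudo systemctl restart kubelet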

yakneens commented 5 years ago

Created an issue at k8s repo - https://github.com/kubernetes/kubernetes/issues/73407

murali-reddy commented 5 years ago

@llevar Can we close this issue? The multiple addresses must be coming from the OpenStack cloud provider, as noted earlier.

amithapa commented 4 years ago

Weave Net takes the addresses to connect to from Kubernetes. It does not specify the interface to connect from. That will be determined by the host routing rules.

However you seem to have a deeper problem:

IP allocation was seeded by different peers (received: [4a:ba:4c:fa:5c:b3(tracker)], ours: [6a:6c:c9:b5:18:8a(getter-test)]), retry: 2019-01-27 17:00:44.356836268 +0000 UTC m=+6152.062472741 

Under this condition you will never have a fully-connected network. Did you start those nodes at different times, or try to disconnect from one network and connect to another?

You can try to recover as described here

Thank you so much!

dstrimble commented 4 years ago

Weave Net takes the addresses to connect to from Kubernetes. It does not specify the interface to connect from. That will be determined by the host routing rules.

However you seem to have a deeper problem:

IP allocation was seeded by different peers (received: [4a:ba:4c:fa:5c:b3(tracker)], ours: [6a:6c:c9:b5:18:8a(getter-test)]), retry: 2019-01-27 17:00:44.356836268 +0000 UTC m=+6152.062472741 

Under this condition you will never have a fully-connected network. Did you start those nodes at different times, or try to disconnect from one network and connect to another?

You can try to recover as described here

This fix doesn't seem to work for me. I have tried deleting the /var/lib/weave db file and restarting the node, but the same error persists. Which nodes specifically in this error should be restarted?

2020-01-20T12:39:27.551980486Z INFO: 2020/01/20 12:39:27.551692 ->[172.20.33.66:34547|a2:c6:1f:d4:5b:95(nodew00424.nonprod.company.com)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [16:c0:ca:ad:e4:62 1e:75:c6:8c:ea:73 7a:d3:c8:59:b6:f1], ours: [22:03:ad:09:70:b5(nodei00402.nonprod.company.com) 7e:a3:30:e9:2a:cc(nodei00404.nonprod.company.com) a2:9f:2f:6e:e1:86(nodei00403.nonprod.company.com) a2:9f:d5:6e:9c:c3(nodew00401.nonprod.company.com)])

The first node in the error message seems to be a random node; the bottom four recur in every error. I have tried deleting the /var/lib/weave data for those nodes and restarting the VMs, but the error continues.

murali-reddy commented 4 years ago

@dstrimble please open a separate issue. Original issue was opened for different problem.

dstrimble commented 4 years ago

@dstrimble please open a separate issue. Original issue was opened for different problem.

https://github.com/weaveworks/weave/issues/3757 Done, thank you.