weaveworks / weave

Simple, resilient multi-host container networking, and more.
https://www.weave.works
Apache License 2.0

weave_ipam_unreachable_count showing peers that are gone and failed connections to active peers that status connections shows okay #3569

Open ephur opened 5 years ago

ephur commented 5 years ago

What did you expect to happen?

IPAM assignments from removed hosts to be reclaimed correctly, and IPAM to report accurate data for all known-good peers.

What happened?

While implementing additional monitoring for our Weave deployment, we discovered that weave_ipam_unreachable_count was non-zero for every one of our Weave pods. It appears that IPAM is still holding allocations for peers that have long since left the cluster, even though status connections reports every current peer as established (a direct query of the metric is shown after the ipam output below):

/home/weave # ./weave --local status connections
<- 10.0.156.17:57869     established encrypted   fastdp e2:63:06:b0:2c:be(ip-10-0-156-17.us-west-2.compute.internal) encrypted=true mtu=8192
-> 10.0.43.134:6783      established encrypted   fastdp 2e:c7:8e:0e:05:64(ip-10-0-43-134.us-west-2.compute.internal) encrypted=true mtu=8192
<- 10.0.175.253:47806    established encrypted   fastdp a6:ab:26:02:22:b8(ip-10-0-175-253.us-west-2.compute.internal) encrypted=true mtu=8192
<- 10.0.18.77:40441      established encrypted   fastdp f2:6c:e6:1d:cd:89(ip-10-0-18-77.us-west-2.compute.internal) encrypted=true mtu=8192
-> 10.0.159.195:6783     established encrypted   fastdp 96:bc:c8:6c:01:0a(ip-10-0-159-195.us-west-2.compute.internal) encrypted=true mtu=8192
<- 10.0.133.82:46024     established encrypted   fastdp f6:84:cc:b8:04:b8(ip-10-0-133-82.us-west-2.compute.internal) encrypted=true mtu=8192
<- 10.0.129.132:36752    established encrypted   fastdp be:9a:d3:30:c2:72(ip-10-0-129-132.us-west-2.compute.internal) encrypted=true mtu=8192
-> 10.0.14.0:6783        established encrypted   fastdp b2:e3:05:3f:42:ed(ip-10-0-14-0.us-west-2.compute.internal) encrypted=true mtu=8192
<- 10.0.166.79:42271     established encrypted   fastdp 36:5a:5f:3f:73:7c(ip-10-0-166-79.us-west-2.compute.internal) encrypted=true mtu=8192
-> 10.0.143.215:6783     established encrypted   fastdp f2:ba:f9:3f:1f:ed(ip-10-0-143-215.us-west-2.compute.internal) encrypted=true mtu=8192
-> 10.0.147.13:6783      established encrypted   fastdp 7e:92:93:75:31:6d(ip-10-0-147-13.us-west-2.compute.internal) encrypted=true mtu=8192
-> 10.0.175.60:6783      failed      cannot connect to ourself, retry: never 
/home/weave # ./weave --local status ipam
0e:14:2c:c7:d8:b8(ip-10-0-175-60.us-west-2.compute.internal)     4096 IPs (06.2% of total) (2 active)
2e:c7:8e:0e:05:64(ip-10-0-43-134.us-west-2.compute.internal)     8192 IPs (12.5% of total) 
2a:8a:c7:8d:cd:91(ip-10-0-156-221.us-west-2.compute.internal)     2048 IPs (03.1% of total) - unreachable!
f2:ba:f9:3f:1f:ed(ip-10-0-143-215.us-west-2.compute.internal)     2048 IPs (03.1% of total) 
36:5a:5f:3f:73:7c(ip-10-0-166-79.us-west-2.compute.internal)     2048 IPs (03.1% of total) 
8e:03:c8:7b:91:2e()                       4096 IPs (06.2% of total) - unreachable!
f6:84:cc:b8:04:b8(ip-10-0-133-82.us-west-2.compute.internal)     8192 IPs (12.5% of total) 
a2:0d:bd:a3:3c:99(ip-10-0-140-253.us-west-2.compute.internal)     2048 IPs (03.1% of total) - unreachable!
e2:63:06:b0:2c:be(ip-10-0-156-17.us-west-2.compute.internal)     2048 IPs (03.1% of total) 
46:52:08:d0:f6:8b()                       2048 IPs (03.1% of total) - unreachable!
a6:ab:26:02:22:b8(ip-10-0-175-253.us-west-2.compute.internal)     6144 IPs (09.4% of total) 
b2:e3:05:3f:42:ed(ip-10-0-14-0.us-west-2.compute.internal)     6144 IPs (09.4% of total) 
96:bc:c8:6c:01:0a(ip-10-0-159-195.us-west-2.compute.internal)     4096 IPs (06.2% of total) 
7e:92:93:75:31:6d(ip-10-0-147-13.us-west-2.compute.internal)     2048 IPs (03.1% of total) 
f2:6c:e6:1d:cd:89(ip-10-0-18-77.us-west-2.compute.internal)     9216 IPs (14.1% of total) 
be:9a:d3:30:c2:72(ip-10-0-129-132.us-west-2.compute.internal)     1024 IPs (01.6% of total) 
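
For reference, the metric that tripped our monitoring can be queried straight from the Weave router's Prometheus endpoint. A minimal check, assuming the default weave-kube metrics address of 127.0.0.1:6782 (adjust if --metrics-addr is overridden):

ubuntu@ip-10-0-175-60:~$ curl -s http://127.0.0.1:6782/metrics | grep weave_ipam_unreachable_count

On this node the gauge is non-zero, consistent with the four peers flagged "- unreachable!" above.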

How to reproduce it?

We have not yet been able to reproduce this in a cluster that does not already exhibit the issue.

Anything else we need to know?

Our deployments are currently on AWS. Our Kubernetes clusters run 1.11.2 and are built by our own automation tooling. Weave is deployed as a Helm chart based on the upstream YAML provided by Weaveworks.
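
The upstream YAML in question is the standard Weave Net manifest; for reference, the documented install command it corresponds to (our chart essentially packages the same resources) is:

ubuntu@ip-10-0-175-60:~$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"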

Versions:

/home/weave # ./weave --local version
weave 2.5.0

ubuntu@ip-10-0-175-60:~$ sudo docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:48:57 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:16:44 2018
  OS/Arch:          linux/amd64
  Experimental:     false

ubuntu@ip-10-0-175-60:~$ uname -a
Linux ip-10-0-175-60 4.4.0-1073-aws #83-Ubuntu SMP Sat Nov 17 00:26:27 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

kubectl version                                       
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Logs:

The logs are large; they can be found at: https://gist.github.com/ephur/86d63c041ba5977eed259d5a87f34c0a

Network:

ubuntu@ip-10-0-175-60:~$ sudo ip route
default via 10.0.160.1 dev ens5 
10.0.160.0/20 dev ens5  proto kernel  scope link  src 10.0.175.60 
172.16.0.0/16 dev weave  proto kernel  scope link  src 172.16.96.0 
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1 linkdown 

ubuntu@ip-10-0-175-60:~$ sudo ip -4 -o addr
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: ens5    inet 10.0.175.60/20 brd 10.0.175.255 scope global ens5\       valid_lft forever preferred_lft forever
3: docker0    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.57.230/32 brd 172.17.57.230 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.238.187/32 brd 172.17.238.187 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.236.3/32 brd 172.17.236.3 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.89.82/32 brd 172.17.89.82 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.238.133/32 brd 172.17.238.133 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.30.254/32 brd 172.17.30.254 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.122.18/32 brd 172.17.122.18 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.28.231/32 brd 172.17.28.231 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.216.50/32 brd 172.17.216.50 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.106.109/32 brd 172.17.106.109 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.0.2/32 brd 172.17.0.2 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.79.75/32 brd 172.17.79.75 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.76.68/32 brd 172.17.76.68 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.0.1/32 brd 172.17.0.1 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.188.28/32 brd 172.17.188.28 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.205.162/32 brd 172.17.205.162 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.105.231/32 brd 172.17.105.231 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.193.40/32 brd 172.17.193.40 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.44.61/32 brd 172.17.44.61 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.71.154/32 brd 172.17.71.154 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.138.53/32 brd 172.17.138.53 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.247.31/32 brd 172.17.247.31 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.2.167/32 brd 172.17.2.167 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.247.69/32 brd 172.17.247.69 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
5: kube-ipvs0    inet 172.17.47.9/32 brd 172.17.47.9 scope global kube-ipvs0\       valid_lft forever preferred_lft forever
8: weave    inet 172.16.96.0/16 brd 172.16.255.255 scope global weave\       valid_lft forever preferred_lft forever

# Generated by iptables-save v1.6.0 on Thu Dec 20 00:20:46 2018                                                                                                                                                                                                                                                               
*mangle                                                                                                                                                                                                                                                                                                                       
:PREROUTING ACCEPT [17859:4160512]                                                                                                                                                                                                                                                                                            
:INPUT ACCEPT [17547:4084324]
:FORWARD ACCEPT [312:76188]
:OUTPUT ACCEPT [17443:6809099]
:POSTROUTING ACCEPT [17648:6876438]
:WEAVE-IPSEC-IN - [0:0]
:WEAVE-IPSEC-IN-MARK - [0:0]
:WEAVE-IPSEC-OUT - [0:0]
:WEAVE-IPSEC-OUT-MARK - [0:0]
-A INPUT -j WEAVE-IPSEC-IN
-A OUTPUT -j WEAVE-IPSEC-OUT
-A WEAVE-IPSEC-IN -s 10.0.147.13/32 -d 10.0.175.60/32 -p esp -m esp --espspi 514127132 -j WEAVE-IPSEC-IN-MARK
-A WEAVE-IPSEC-IN -s 10.0.43.134/32 -d 10.0.175.60/32 -p esp -m esp --espspi 3663579081 -j WEAVE-IPSEC-IN-MARK
-A WEAVE-IPSEC-IN -s 10.0.159.195/32 -d 10.0.175.60/32 -p esp -m esp --espspi 1123295004 -j WEAVE-IPSEC-IN-MARK
-A WEAVE-IPSEC-IN -s 10.0.133.82/32 -d 10.0.175.60/32 -p esp -m esp --espspi 931311841 -j WEAVE-IPSEC-IN-MARK
-A WEAVE-IPSEC-IN -s 10.0.175.253/32 -d 10.0.175.60/32 -p esp -m esp --espspi 3506704996 -j WEAVE-IPSEC-IN-MARK
-A WEAVE-IPSEC-IN -s 10.0.129.132/32 -d 10.0.175.60/32 -p esp -m esp --espspi 1315114850 -j WEAVE-IPSEC-IN-MARK
-A WEAVE-IPSEC-IN -s 10.0.14.0/32 -d 10.0.175.60/32 -p esp -m esp --espspi 2394552906 -j WEAVE-IPSEC-IN-MARK
-A WEAVE-IPSEC-IN -s 10.0.166.79/32 -d 10.0.175.60/32 -p esp -m esp --espspi 3049329034 -j WEAVE-IPSEC-IN-MARK
-A WEAVE-IPSEC-IN -s 10.0.143.215/32 -d 10.0.175.60/32 -p esp -m esp --espspi 71870736 -j WEAVE-IPSEC-IN-MARK
-A WEAVE-IPSEC-IN -s 10.0.156.17/32 -d 10.0.175.60/32 -p esp -m esp --espspi 730024206 -j WEAVE-IPSEC-IN-MARK
-A WEAVE-IPSEC-IN -s 10.0.18.77/32 -d 10.0.175.60/32 -p esp -m esp --espspi 2049221724 -j WEAVE-IPSEC-IN-MARK
-A WEAVE-IPSEC-IN-MARK -j MARK --set-xmark 0x20000/0x20000
-A WEAVE-IPSEC-OUT -s 10.0.175.60/32 -d 10.0.147.13/32 -p udp -m udp --dport 6784 -j WEAVE-IPSEC-OUT-MARK
-A WEAVE-IPSEC-OUT -s 10.0.175.60/32 -d 10.0.43.134/32 -p udp -m udp --dport 6784 -j WEAVE-IPSEC-OUT-MARK
-A WEAVE-IPSEC-OUT -s 10.0.175.60/32 -d 10.0.159.195/32 -p udp -m udp --dport 6784 -j WEAVE-IPSEC-OUT-MARK
-A WEAVE-IPSEC-OUT -s 10.0.175.60/32 -d 10.0.133.82/32 -p udp -m udp --dport 6784 -j WEAVE-IPSEC-OUT-MARK
-A WEAVE-IPSEC-OUT -s 10.0.175.60/32 -d 10.0.175.253/32 -p udp -m udp --dport 6784 -j WEAVE-IPSEC-OUT-MARK
-A WEAVE-IPSEC-OUT -s 10.0.175.60/32 -d 10.0.129.132/32 -p udp -m udp --dport 6784 -j WEAVE-IPSEC-OUT-MARK
-A WEAVE-IPSEC-OUT -s 10.0.175.60/32 -d 10.0.14.0/32 -p udp -m udp --dport 6784 -j WEAVE-IPSEC-OUT-MARK
-A WEAVE-IPSEC-OUT -s 10.0.175.60/32 -d 10.0.166.79/32 -p udp -m udp --dport 6784 -j WEAVE-IPSEC-OUT-MARK                                                                                                                                                                                                                     
-A WEAVE-IPSEC-OUT -s 10.0.175.60/32 -d 10.0.143.215/32 -p udp -m udp --dport 6784 -j WEAVE-IPSEC-OUT-MARK                                                                                                                                                                                                                    
-A WEAVE-IPSEC-OUT -s 10.0.175.60/32 -d 10.0.156.17/32 -p udp -m udp --dport 6784 -j WEAVE-IPSEC-OUT-MARK                                                                                                                                                                                                                     
-A WEAVE-IPSEC-OUT -s 10.0.175.60/32 -d 10.0.18.77/32 -p udp -m udp --dport 6784 -j WEAVE-IPSEC-OUT-MARK                                                                                                                                                                                                                      
-A WEAVE-IPSEC-OUT-MARK -j MARK --set-xmark 0x20000/0x20000                                                                                                                                                                                                                                                                   
COMMIT
# Completed on Thu Dec 20 00:20:46 2018
# Generated by iptables-save v1.6.0 on Thu Dec 20 00:20:46 2018
*nat
:PREROUTING ACCEPT [1:60]
:INPUT ACCEPT [1:60]
:OUTPUT ACCEPT [1:87]
:POSTROUTING ACCEPT [1:87]
:DOCKER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-LOAD-BALANCER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODE-PORT - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-FIREWALL -j KUBE-MARK-DROP
-A KUBE-LOAD-BALANCER -j KUBE-MARK-MASQ
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODE-PORT -j KUBE-MARK-MASQ
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-POSTROUTING -m comment --comment "Kubernetes endpoints dst ip:port, source ip for solving hairpin purpose" -m set --match-set KUBE-LOOP-BACK dst,dst,src -j MASQUERADE
-A KUBE-SERVICES -m comment --comment "Kubernetes nodeport TCP port for masquerade purpose" -m set --match-set KUBE-NODE-PORT-TCP dst -j KUBE-NODE-PORT
-A KUBE-SERVICES ! -s 172.16.0.0/16 -m comment --comment "Kubernetes service cluster ip + port for masquerade purpose" -m set --match-set KUBE-CLUSTER-IP dst,dst -j KUBE-MARK-MASQ
-A KUBE-SERVICES -m set --match-set KUBE-CLUSTER-IP dst,dst -j ACCEPT
-A WEAVE -s 172.16.0.0/16 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 172.16.0.0/16 -d 172.16.0.0/16 -j MASQUERADE
-A WEAVE -s 172.16.0.0/16 ! -d 172.16.0.0/16 -j MASQUERADE
COMMIT
# Completed on Thu Dec 20 00:20:46 2018
# Generated by iptables-save v1.6.0 on Thu Dec 20 00:20:46 2018
*filter
:INPUT ACCEPT [17580:4077850]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [17382:6809810]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:WEAVE-IPSEC-IN - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-EGRESS - [0:0]
:WEAVE-NPC-EGRESS-ACCEPT - [0:0]
:WEAVE-NPC-EGRESS-CUSTOM - [0:0]
:WEAVE-NPC-EGRESS-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -i weave -j WEAVE-NPC-EGRESS
-A INPUT -j WEAVE-IPSEC-IN
-A FORWARD -i weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC-EGRESS
-A FORWARD -o weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT ! -p esp -m policy --dir out --pol none -m mark --mark 0x20000/0x20000 -j DROP
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A WEAVE-IPSEC-IN -s 10.0.147.13/32 -d 10.0.175.60/32 -p udp -m udp --dport 6784 -m mark ! --mark 0x20000/0x20000 -j DROP
-A WEAVE-IPSEC-IN -s 10.0.43.134/32 -d 10.0.175.60/32 -p udp -m udp --dport 6784 -m mark ! --mark 0x20000/0x20000 -j DROP
-A WEAVE-IPSEC-IN -s 10.0.159.195/32 -d 10.0.175.60/32 -p udp -m udp --dport 6784 -m mark ! --mark 0x20000/0x20000 -j DROP
-A WEAVE-IPSEC-IN -s 10.0.133.82/32 -d 10.0.175.60/32 -p udp -m udp --dport 6784 -m mark ! --mark 0x20000/0x20000 -j DROP
-A WEAVE-IPSEC-IN -s 10.0.175.253/32 -d 10.0.175.60/32 -p udp -m udp --dport 6784 -m mark ! --mark 0x20000/0x20000 -j DROP
-A WEAVE-IPSEC-IN -s 10.0.129.132/32 -d 10.0.175.60/32 -p udp -m udp --dport 6784 -m mark ! --mark 0x20000/0x20000 -j DROP
-A WEAVE-IPSEC-IN -s 10.0.14.0/32 -d 10.0.175.60/32 -p udp -m udp --dport 6784 -m mark ! --mark 0x20000/0x20000 -j DROP
-A WEAVE-IPSEC-IN -s 10.0.166.79/32 -d 10.0.175.60/32 -p udp -m udp --dport 6784 -m mark ! --mark 0x20000/0x20000 -j DROP
-A WEAVE-IPSEC-IN -s 10.0.143.215/32 -d 10.0.175.60/32 -p udp -m udp --dport 6784 -m mark ! --mark 0x20000/0x20000 -j DROP
-A WEAVE-IPSEC-IN -s 10.0.156.17/32 -d 10.0.175.60/32 -p udp -m udp --dport 6784 -m mark ! --mark 0x20000/0x20000 -j DROP
-A WEAVE-IPSEC-IN -s 10.0.18.77/32 -d 10.0.175.60/32 -p udp -m udp --dport 6784 -m mark ! --mark 0x20000/0x20000 -j DROP
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m physdev --physdev-out vethwe-bridge -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-$pq)D6=PfzI{pR[)462_|9^3L dst -m comment --comment "DefaultAllow ingress isolation for namespace: locking" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-XSS|^E~hsABEiFQwK+N~8*/:8 dst -m comment --comment "DefaultAllow ingress isolation for namespace: or-envoy-discovery" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-{vo{hybmA6~z=2La/20Sf%UhQ dst -m comment --comment "DefaultAllow ingress isolation for namespace: sensu-test" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-d$ct(ltS]T.V6tDCW(i/j_d|m dst -m comment --comment "DefaultAllow ingress isolation for namespace: argo" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-5+[w5HEk)H]KyHWI^CdFiy;Jz dst -m comment --comment "DefaultAllow ingress isolation for namespace: jobs" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-Uq@!VeHuu102r^EJA?IQ}Vctl dst -m comment --comment "DefaultAllow ingress isolation for namespace: ingress" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-!|v58Uy$O8Twz6C^uU~S:rD|L dst -m comment --comment "DefaultAllow ingress isolation for namespace: sensu" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-uNENPyXe$bBXaPPD6mUW;*5de dst -m comment --comment "DefaultAllow ingress isolation for namespace: cluster-logging" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-z(Y[pp?{iVB0Ny/~L/DiSBt2j dst -m comment --comment "DefaultAllow ingress isolation for namespace: vault-sensu-test" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-9PcHsWx/(DvFbTqbzq*K#Xggd dst -m comment --comment "DefaultAllow ingress isolation for namespace: infra-api" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-tMk6Dh2:+Aq9sz3GtyPLSgQGO dst -m comment --comment "DefaultAllow ingress isolation for namespace: monitoring" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-p$64ZhTzMx;k4)+^7#~7LD8Ev dst -m comment --comment "DefaultAllow ingress isolation for namespace: alantest" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst -m comment --comment "DefaultAllow ingress isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-EGRESS -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC-EGRESS -m physdev --physdev-in vethwe-bridge -j RETURN
-A WEAVE-NPC-EGRESS -d 224.0.0.0/4 -j RETURN
-A WEAVE-NPC-EGRESS -m state --state NEW -j WEAVE-NPC-EGRESS-DEFAULT
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j WEAVE-NPC-EGRESS-CUSTOM
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j NFLOG --nflog-group 86
-A WEAVE-NPC-EGRESS -m mark ! --mark 0x40000/0x40000 -j DROP
-A WEAVE-NPC-EGRESS-ACCEPT -j MARK --set-xmark 0x40000/0x40000
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-l0I1*4c[Z#lQYUa?SQh.4|2$T src -m comment --comment "DefaultAllow egress isolation for namespace: locking" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-l0I1*4c[Z#lQYUa?SQh.4|2$T src -m comment --comment "DefaultAllow egress isolation for namespace: locking" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-}pdpQi!X%oTiQQgcHUYoTs[uV src -m comment --comment "DefaultAllow egress isolation for namespace: or-envoy-discovery" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-}pdpQi!X%oTiQQgcHUYoTs[uV src -m comment --comment "DefaultAllow egress isolation for namespace: or-envoy-discovery" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-Z/Lq4t!p_jkO@?I|Axd7[:Sui src -m comment --comment "DefaultAllow egress isolation for namespace: sensu-test" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-Z/Lq4t!p_jkO@?I|Axd7[:Sui src -m comment --comment "DefaultAllow egress isolation for namespace: sensu-test" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-Y$x4/l=nmKTS@pkzuV2c[K9[L src -m comment --comment "DefaultAllow egress isolation for namespace: argo" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-Y$x4/l=nmKTS@pkzuV2c[K9[L src -m comment --comment "DefaultAllow egress isolation for namespace: argo" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-9Fz}9|Jr2D^pPef!q[HQC@Wl2 src -m comment --comment "DefaultAllow egress isolation for namespace: jobs" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-9Fz}9|Jr2D^pPef!q[HQC@Wl2 src -m comment --comment "DefaultAllow egress isolation for namespace: jobs" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-Opja]=/bo?o~HGhlZfkve:v2= src -m comment --comment "DefaultAllow egress isolation for namespace: ingress" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-Opja]=/bo?o~HGhlZfkve:v2= src -m comment --comment "DefaultAllow egress isolation for namespace: ingress" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-p*%L5G]5*LUD$Mpq^N:UL76dT src -m comment --comment "DefaultAllow egress isolation for namespace: sensu" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-p*%L5G]5*LUD$Mpq^N:UL76dT src -m comment --comment "DefaultAllow egress isolation for namespace: sensu" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-AJS8kkHM2+Xv?sC2[}~?VG.:M src -m comment --comment "DefaultAllow egress isolation for namespace: cluster-logging" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-AJS8kkHM2+Xv?sC2[}~?VG.:M src -m comment --comment "DefaultAllow egress isolation for namespace: cluster-logging" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-I~6CGQ$4Q.cmGunStyuBa/$GW src -m comment --comment "DefaultAllow egress isolation for namespace: vault-sensu-test" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-I~6CGQ$4Q.cmGunStyuBa/$GW src -m comment --comment "DefaultAllow egress isolation for namespace: vault-sensu-test" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-w!imxqpDSMh[*~sxShy2a;B*4 src -m comment --comment "DefaultAllow egress isolation for namespace: infra-api" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-w!imxqpDSMh[*~sxShy2a;B*4 src -m comment --comment "DefaultAllow egress isolation for namespace: infra-api" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-cKT4=+;yzwiz8x@;C{fjHV6$0 src -m comment --comment "DefaultAllow egress isolation for namespace: monitoring" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-cKT4=+;yzwiz8x@;C{fjHV6$0 src -m comment --comment "DefaultAllow egress isolation for namespace: monitoring" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-F4iiB!4Hx=DV4DoPx;FCfF.I) src -m comment --comment "DefaultAllow egress isolation for namespace: alantest" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-F4iiB!4Hx=DV4DoPx;FCfF.I) src -m comment --comment "DefaultAllow egress isolation for namespace: alantest" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j RETURN
COMMIT
# Completed on Thu Dec 20 00:20:46 2018
murali-reddy commented 5 years ago

@ephur Were the nodes shown as unreachable and still holding IPs removed from the Kubernetes cluster? In the logs you shared I cannot find any activity related to those nodes.
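
(One way to check is to grep the gist for the unreachable peer names from the ipam output; a sketch, assuming the logs were saved locally as weave.log:)

$ grep -E '2a:8a:c7:8d:cd:91|8e:03:c8:7b:91:2e|a2:0d:bd:a3:3c:99|46:52:08:d0:f6:8b' weave.log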

ephur commented 5 years ago

When I initially looked yesterday, I thought two of them were not, but I must have overlooked something. Yes, all of the peers reported as unreachable have been removed from the cluster and replaced with other nodes during an upgrade.
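
For anyone who lands here in the same state: stale unreachable entries like these can normally be reclaimed by telling one live peer to take over the dead peer's ring space with rmpeer. A sketch, using one of the unreachable peer names from the ipam output above (run it on exactly one host, or the address space can end up claimed twice):

/home/weave # ./weave --local rmpeer 2a:8a:c7:8d:cd:91

I believe the weave script implements this via the router's HTTP API (DELETE /peer/<name> on 127.0.0.1:6784), which can be handy from inside the weave-kube container.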