weaveworks / weave

Simple, resilient multi-host container networking and more.
https://www.weave.works
Apache License 2.0

Weave on one node within the cluster fails to connect to weave on other nodes. #3165

Closed: renewooller closed this issue 6 years ago

renewooller commented 7 years ago

Hi - great tool, thanks for developing it.

Is this a BUG REPORT? yes

What you expected to happen?

I expect weave running on each node to be able to connect to weave running on every other node.

What happened?

I'm running k8s via kops with two instance groups (e.g. apps and services) and two namespaces (e.g. dev and stable).

The initial symptom is that some of the applications running within the 'apps' instance group cannot connect to one of the services (nl-rmq), while others can. Looking into it, the endpoint listed by kubectl get endpoints is different from the IP address those apps are resolving, and nslookup inside any of the failing apps shows that kubernetes.default cannot be resolved.

Looking at the weave connections then shows that one of the 'apps' nodes is having trouble connecting to anything:

root@ip-10-20-52-110:/home/admin# docker exec -it 7ebce22fcde2 ./weave --local status connections
-> 10.20.53.217:6783     failed      Received update for IP range I own at 100.96.0.0 v109: incoming message says owner ba:2d:dc:a6:b6:54 v110, retry: 2017-11-10 02:14:28.200474235 +0000 UTC 
-> 10.20.55.87:6783      failed      Received update for IP range I own at 100.96.0.0 v108: incoming message says owner ba:2d:dc:a6:b6:54 v110, retry: 2017-11-10 02:10:03.511181899 +0000 UTC 
-> 10.20.61.57:6783      failed      Received update for IP range I own at 100.96.0.0 v111: incoming message says owner ba:2d:dc:a6:b6:54 v112, retry: 2017-11-10 02:17:51.439273967 +0000 UTC 
-> 10.20.40.87:6783      failed      Received update for IP range I own at 100.96.0.0 v108: incoming message says owner ba:2d:dc:a6:b6:54 v110, retry: 2017-11-10 02:10:13.61068866 +0000 UTC 
-> 10.20.52.110:6783     failed      cannot connect to ourself, retry: never 
root@ip-10-20-52-110:/home/admin#
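As a quick sanity check, the number of genuinely failed peer connections can be counted mechanically. A minimal sketch, where the heredoc is a stand-in for live `weave --local status connections` output (the "cannot connect to ourself" line is expected and excluded):

```shell
#!/bin/sh
# Stand-in sample for `weave --local status connections` on the bad node.
status="$(cat <<'EOF'
-> 10.20.53.217:6783 failed Received update for IP range I own at 100.96.0.0 v109
-> 10.20.55.87:6783 failed Received update for IP range I own at 100.96.0.0 v108
-> 10.20.61.57:6783 failed Received update for IP range I own at 100.96.0.0 v111
-> 10.20.40.87:6783 failed Received update for IP range I own at 100.96.0.0 v108
-> 10.20.52.110:6783 failed cannot connect to ourself, retry: never
EOF
)"

# The self-connection always fails; only failures to other peers matter.
peer_failures=$(printf '%s\n' "$status" | grep ' failed ' | grep -cv 'ourself')
echo "failed peer connections: $peer_failures"
```

A healthy node would report zero here; on this node every real peer connection is down.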

That explains why some apps are working and others aren't: the failing ones must be scheduled on the node that is having trouble connecting to the weave network.
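Each of those failure reasons names the contested IPAM block, the competing owner, and the two version numbers. A rough sketch of pulling the fields apart with sed, assuming the message format shown in the status output above:

```shell
#!/bin/sh
# One failure reason copied from the status output above.
msg='Received update for IP range I own at 100.96.0.0 v109: incoming message says owner ba:2d:dc:a6:b6:54 v110'

range=$(printf '%s\n' "$msg"    | sed -n 's/.*I own at \([0-9.]*\) v[0-9]*.*/\1/p')
local_v=$(printf '%s\n' "$msg"  | sed -n 's/.*I own at [0-9.]* v\([0-9]*\).*/\1/p')
owner=$(printf '%s\n' "$msg"    | sed -n 's/.*says owner \([0-9a-f:]*\) v[0-9]*$/\1/p')
remote_v=$(printf '%s\n' "$msg" | sed -n 's/.*says owner [0-9a-f:]* v\([0-9]*\)$/\1/p')

# A remote peer claims the same range at a higher version, i.e. this node's
# view of who owns 100.96.0.0 has diverged from the rest of the cluster.
echo "range=$range local=v$local_v remote_owner=$owner remote=v$remote_v"
```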

While submitting this report I noticed that the log for weave on the disconnected node ends with "Killed", whereas the other weave logs don't. So my current theory is that weave died on that node and Docker failed to restart it effectively; I have seen Docker occasionally fail to restart containers on other projects I've worked on.
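One way to probe that theory is to look at the container's exit state and restart policy via `docker inspect`. A hedged sketch, where the JSON is a hypothetical stand-in for real `docker inspect <container>` output rather than anything captured from this cluster:

```shell
#!/bin/sh
# Hypothetical stand-in for fields of `docker inspect 7ebce22fcde2` output.
inspect_json='{"State":{"Status":"exited","ExitCode":137,"OOMKilled":true},"HostConfig":{"RestartPolicy":{"Name":"no","MaximumRetryCount":0}}}'

exit_code=$(printf '%s' "$inspect_json" | grep -o '"ExitCode":[0-9]*' | cut -d: -f2)
policy=$(printf '%s' "$inspect_json" | grep -o '"RestartPolicy":{"Name":"[a-z-]*"' \
         | sed 's/.*"Name":"\([a-z-]*\)"/\1/')

# Exit code 137 = 128 + SIGKILL, consistent with a log that ends in "Killed";
# with restart policy "no", dockerd would leave the container down.
echo "exit_code=$exit_code restart_policy=$policy"
```

If the real container shows a SIGKILL exit (e.g. an OOM kill) and no effective restart path, that would fit the "died and never came back" picture.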

FYI, the complete `weave report` output is below:

root@ip-10-20-52-110:/home/admin# docker exec -it 7ebce22fcde2 ./weave --local report
{
    "Ready": true,
    "Version": "2.0.1",
    "VersionCheck": {
        "Enabled": true,
        "Success": true,
        "NewVersion": "2.0.4",
        "NextCheckAt": "2017-11-10T03:20:12.10036294Z"
    },
    "Router": {
        "Protocol": "weave",
        "ProtocolMinVersion": 1,
        "ProtocolMaxVersion": 2,
        "Encryption": false,
        "PeerDiscovery": true,
        "Name": "72:07:06:06:16:3c",
        "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
        "Port": 6783,
        "Peers": [
            {
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "UID": 1470446513819984768,
                "ShortID": 1945,
                "Version": 4551,
                "Connections": null
            }
        ],
        "UnicastRoutes": [
            {
                "Dest": "72:07:06:06:16:3c",
                "Via": "00:00:00:00:00:00"
            }
        ],
        "BroadcastRoutes": [
            {
                "Source": "72:07:06:06:16:3c",
                "Via": null
            }
        ],
        "Connections": [
            {
                "Address": "10.20.52.110:6783",
                "Outbound": true,
                "State": "failed",
                "Info": "cannot connect to ourself, retry: never",
                "Attrs": null
            },
            {
                "Address": "10.20.53.217:6783",
                "Outbound": true,
                "State": "failed",
                "Info": "Received update for IP range I own at 100.96.0.0 v109: incoming message says owner ba:2d:dc:a6:b6:54 v110, retry: 2017-11-10 02:14:28.200474235 +0000 UTC",
                "Attrs": null
            },
            {
                "Address": "10.20.55.87:6783",
                "Outbound": true,
                "State": "failed",
                "Info": "Inconsistent entries for 100.96.0.0: owned by 72:07:06:06:16:3c but incoming message says ba:2d:dc:a6:b6:54, retry: 2017-11-10 02:18:34.565171551 +0000 UTC",
                "Attrs": null
            },
            {
                "Address": "10.20.61.57:6783",
                "Outbound": true,
                "State": "failed",
                "Info": "Received update for IP range I own at 100.96.0.0 v111: incoming message says owner ba:2d:dc:a6:b6:54 v112, retry: 2017-11-10 02:17:51.439273967 +0000 UTC",
                "Attrs": null
            },
            {
                "Address": "10.20.40.87:6783",
                "Outbound": true,
                "State": "failed",
                "Info": "Inconsistent entries for 100.96.0.0: owned by 72:07:06:06:16:3c but incoming message says ba:2d:dc:a6:b6:54, retry: 2017-11-10 02:16:17.031757342 +0000 UTC",
                "Attrs": null
            }
        ],
        "TerminationCount": 2247,
        "Targets": [
            "10.20.55.87",
            "10.20.61.57",
            "10.20.40.87",
            "10.20.52.110",
            "10.20.53.217"
        ],
        "OverlayDiagnostics": {
            "fastdp": {
                "Vports": [
                    {
                        "ID": 0,
                        "Name": "datapath",
                        "TypeName": "internal"
                    },
                    {
                        "ID": 1,
                        "Name": "vethwe-datapath",
                        "TypeName": "netdev"
                    },
                    {
                        "ID": 2,
                        "Name": "vxlan-6784",
                        "TypeName": "vxlan"
                    }
                ],
                "Flows": [
                    {
                        "FlowKeys": [
                            "UnknownFlowKey{type: 22, key: 00000000, mask: 00000000}",
                            "UnknownFlowKey{type: 23, key: 0000, mask: 0000}",
                            "UnknownFlowKey{type: 25, key: 00000000000000000000000000000000, mask: 00000000000000000000000000000000}",
                            "InPortFlowKey{vport: 1}",
                            "EthernetFlowKey{src: 72:07:06:06:16:3c, dst: ff:ff:ff:ff:ff:ff}",
                            "UnknownFlowKey{type: 24, key: 00000000, mask: 00000000}"
                        ],
                        "Actions": [
                            "OutputAction{vport: 0}"
                        ],
                        "Packets": 5,
                        "Bytes": 210,
                        "Used": 5174287404
                    }
                ]
            },
            "sleeve": null
        },
        "TrustedSubnets": [],
        "Interface": "datapath (via ODP)",
        "CaptureStats": {
            "FlowMisses": 10276
        },
        "MACs": [
            {
                "Mac": "3a:35:78:c1:dd:16",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:09:42.898637041Z"
            },
            {
                "Mac": "2e:19:a6:11:a5:26",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:06:29.400019073Z"
            },
            {
                "Mac": "72:36:79:96:e7:a2",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:09:04.961552851Z"
            },
            {
                "Mac": "fa:24:09:21:1e:50",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:07:41.773292147Z"
            },
            {
                "Mac": "3a:d1:74:6d:f2:fd",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:06:42.288540538Z"
            },
            {
                "Mac": "c2:25:68:f1:3d:a7",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:07:45.283861303Z"
            },
            {
                "Mac": "b6:7a:70:32:c9:66",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:08:26.912862029Z"
            },
            {
                "Mac": "1e:88:3a:9b:0e:16",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:10:08.373468709Z"
            },
            {
                "Mac": "96:0f:22:f4:e1:ab",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:06:38.415710708Z"
            },
            {
                "Mac": "5e:fd:8a:96:cb:6c",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:05:18.201950783Z"
            },
            {
                "Mac": "72:07:06:06:16:3c",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:10:06.312314207Z"
            },
            {
                "Mac": "f2:de:67:7c:b9:f1",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:07:38.165868939Z"
            },
            {
                "Mac": "12:74:3c:3a:20:1e",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:08:55.624559842Z"
            },
            {
                "Mac": "ea:d6:45:d4:e0:b2",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:08:21.318234673Z"
            },
            {
                "Mac": "d6:80:6b:eb:87:b6",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:10:08.373861079Z"
            },
            {
                "Mac": "9e:c8:60:03:b5:08",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:09:47.367737852Z"
            },
            {
                "Mac": "0e:52:05:4e:46:ef",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:08:53.236114159Z"
            },
            {
                "Mac": "4e:e8:db:55:0b:dc",
                "Name": "72:07:06:06:16:3c",
                "NickName": "ip-10-20-52-110.us-west-1.compute.internal",
                "LastSeen": "2017-11-10T02:10:53.760370024Z"
            }
        ]
    },
    "IPAM": {
        "Paxos": null,
        "Range": "100.96.0.0/11",
        "RangeNumIPs": 2097152,
        "ActiveIPs": 22,
        "DefaultSubnet": "100.96.0.0/11",
        "Entries": [
            {
                "Token": "100.96.0.0",
                "Size": 262144,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 112
            },
            {
                "Token": "100.100.0.0",
                "Size": 262144,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 19
            },
            {
                "Token": "100.104.0.0",
                "Size": 262144,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 1
            },
            {
                "Token": "100.108.0.0",
                "Size": 262144,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 25
            },
            {
                "Token": "100.112.0.0",
                "Size": 262144,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 7
            },
            {
                "Token": "100.116.0.0",
                "Size": 65536,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 82
            },
            {
                "Token": "100.117.0.0",
                "Size": 16384,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 20
            },
            {
                "Token": "100.117.64.0",
                "Size": 4096,
                "Peer": "a6:e9:b1:63:0c:60",
                "Nickname": "ip-10-20-53-217.us-west-1.compute.internal",
                "IsKnownPeer": false,
                "Version": 8
            },
            {
                "Token": "100.117.80.0",
                "Size": 3072,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 282
            },
            {
                "Token": "100.117.92.0",
                "Size": 1024,
                "Peer": "9a:f1:cb:a7:f9:87",
                "Nickname": "ip-10-20-61-57.us-west-1.compute.internal",
                "IsKnownPeer": false,
                "Version": 1
            },
            {
                "Token": "100.117.96.0",
                "Size": 4096,
                "Peer": "a6:e9:b1:63:0c:60",
                "Nickname": "ip-10-20-53-217.us-west-1.compute.internal",
                "IsKnownPeer": false,
                "Version": 0
            },
            {
                "Token": "100.117.112.0",
                "Size": 4096,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 98
            },
            {
                "Token": "100.117.128.0",
                "Size": 16384,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 1
            },
            {
                "Token": "100.117.192.0",
                "Size": 12288,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 44
            },
            {
                "Token": "100.117.240.0",
                "Size": 4096,
                "Peer": "e2:cd:be:ab:a6:d1",
                "Nickname": "ip-10-20-40-87.us-west-1.compute.internal",
                "IsKnownPeer": false,
                "Version": 16
            },
            {
                "Token": "100.118.0.0",
                "Size": 65536,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 1
            },
            {
                "Token": "100.119.0.0",
                "Size": 49152,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 6
            },
            {
                "Token": "100.119.192.0",
                "Size": 16384,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 4
            },
            {
                "Token": "100.120.0.0",
                "Size": 131072,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 2
            },
            {
                "Token": "100.122.0.0",
                "Size": 131072,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 207
            },
            {
                "Token": "100.124.0.0",
                "Size": 262144,
                "Peer": "72:07:06:06:16:3c",
                "Nickname": "ip-10-20-52-110.us-west-1.compute.internal",
                "IsKnownPeer": true,
                "Version": 34
            }
        ],
        "PendingClaims": null,
        "PendingAllocates": null
    }
}

How to reproduce it?

This is difficult to reproduce: it happens occasionally, and after a reboot everything works again for a while.

Anything else we need to know?

Running on AWS, using kops to set up two instance groups and two namespaces.

Versions:

$ weave version
root@ip-10-20-52-110:/home/admin# docker exec -it 7ebce22fcde2 ./weave version
weave script 2.0.1
$ docker version
admin@ip-10-20-52-110:~$ docker --version
Docker version 1.12.6, build 78d1802
$ uname -a
admin@ip-10-20-52-110:~$ uname -a
Linux ip-10-20-52-110 4.4.78-k8s #1 SMP Fri Jul 28 01:28:39 UTC 2017 x86_64 GNU/Linux
$ kubectl version
dev[~] : kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.1", GitCommit:"1dc5c66f5dd61da08412a74221ecc79208c2165b", GitTreeState:"clean", BuildDate:"2017-07-14T02:00:46Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:08:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Logs (using Kubernetes):

$ kubectl logs -n kube-system <weave-net-pod> weave

On the 'bad' node it is:

dev[~] : kubectl -n kube-system logs --tail=50 -p weave-net-5r52m weave 
INFO: 2017/11/09 08:25:09.791462 ->[10.20.61.57:45119|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection deleted
INFO: 2017/11/09 08:25:09.792779 ->[10.20.61.57:6783] attempting connection
INFO: 2017/11/09 08:25:09.872083 ->[10.20.61.57:49453] connection accepted
INFO: 2017/11/09 08:25:09.872667 ->[10.20.61.57:6783|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection ready; using protocol version 2
INFO: 2017/11/09 08:25:09.872719 overlay_switch ->[9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/09 08:25:09.872740 ->[10.20.61.57:6783|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection added
INFO: 2017/11/09 08:25:09.873848 ->[10.20.61.57:49453|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection ready; using protocol version 2
INFO: 2017/11/09 08:25:09.873938 overlay_switch ->[9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/09 08:25:09.873954 ->[10.20.61.57:6783|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection deleted
INFO: 2017/11/09 08:25:09.873996 ->[10.20.61.57:49453|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection added
INFO: 2017/11/09 08:25:09.874071 ->[10.20.61.57:6783|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection shutting down due to error: Multiple connections to 9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal) added to 72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)
INFO: 2017/11/09 08:25:09.892937 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2017/11/09 08:25:09.893023 overlay_switch ->[9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)] using sleeve
INFO: 2017/11/09 08:25:09.893044 ->[10.20.61.57:49453|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection fully established
INFO: 2017/11/09 08:25:09.894249 sleeve ->[10.20.61.57:6783|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: Effective MTU verified at 8939
INFO: 2017/11/09 08:25:10.382102 overlay_switch ->[9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/09 08:25:21.665283 ->[10.20.40.87:6783|e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal)]: connection shutting down due to error: Received update for IP range I own at 100.117.80.0 v282: incoming message says owner ba:2d:dc:a6:b6:54 v347
INFO: 2017/11/09 08:25:21.665394 ->[10.20.40.87:6783|e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal)]: connection deleted
INFO: 2017/11/09 08:25:21.666149 ->[10.20.40.87:6783] attempting connection
INFO: 2017/11/09 08:25:21.765540 ->[10.20.40.87:59932] connection accepted
INFO: 2017/11/09 08:25:21.805880 ->[10.20.40.87:6783|e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal)]: connection ready; using protocol version 2
INFO: 2017/11/09 08:25:21.805948 overlay_switch ->[e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/09 08:25:21.805967 ->[10.20.40.87:6783|e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal)]: connection added
INFO: 2017/11/09 08:25:21.806588 ->[10.20.40.87:59932|e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal)]: connection ready; using protocol version 2
INFO: 2017/11/09 08:25:21.806628 overlay_switch ->[e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/09 08:25:21.806646 ->[10.20.40.87:59932|e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal)]: connection shutting down due to error: Multiple connections to e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal) added to 72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)
INFO: 2017/11/09 08:25:22.337257 ->[10.20.40.87:6783|e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal)]: connection fully established
INFO: 2017/11/09 08:25:22.356771 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2017/11/09 08:25:22.389047 sleeve ->[10.20.40.87:6783|e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal)]: Effective MTU verified at 8939
INFO: 2017/11/09 08:25:27.271991 ->[10.20.53.217:6783|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection shutting down due to error: Received update for IP range I own at 100.117.80.0 v282: incoming message says owner ba:2d:dc:a6:b6:54 v347
INFO: 2017/11/09 08:25:27.272092 ->[10.20.53.217:6783|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection deleted
INFO: 2017/11/09 08:25:27.398173 ->[10.20.53.217:57562] connection accepted
INFO: 2017/11/09 08:25:27.450109 ->[10.20.53.217:57562|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection ready; using protocol version 2
INFO: 2017/11/09 08:25:27.450186 overlay_switch ->[a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/09 08:25:27.450213 ->[10.20.53.217:57562|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection added
INFO: 2017/11/09 08:25:27.531341 ->[10.20.53.217:57562|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection fully established
INFO: 2017/11/09 08:25:28.021528 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2017/11/09 08:25:28.122828 sleeve ->[10.20.53.217:6783|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: Effective MTU verified at 8939
INFO: 2017/11/09 08:25:39.777949 ->[10.20.61.57:49453|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection shutting down due to error: Received update for IP range I own at 100.117.80.0 v282: incoming message says owner ba:2d:dc:a6:b6:54 v347
INFO: 2017/11/09 08:25:39.778071 ->[10.20.61.57:49453|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection deleted
INFO: 2017/11/09 08:25:39.879253 ->[10.20.61.57:6783] attempting connection
INFO: 2017/11/09 08:25:39.899160 ->[10.20.61.57:6783|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection ready; using protocol version 2
INFO: 2017/11/09 08:25:39.899265 overlay_switch ->[9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/09 08:25:39.899297 ->[10.20.61.57:6783|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection added
INFO: 2017/11/09 08:25:39.942346 EMSGSIZE on send, expecting PMTU update (IP packet was 60028 bytes, payload was 60020 bytes)
INFO: 2017/11/09 08:25:39.942483 overlay_switch ->[9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)] using sleeve
INFO: 2017/11/09 08:25:39.942508 ->[10.20.61.57:6783|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection fully established
INFO: 2017/11/09 08:25:39.959192 overlay_switch ->[9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/09 08:25:40.081101 sleeve ->[10.20.61.57:6783|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: Effective MTU verified at 8939
Killed

On the other node it is:

dev[~] : kubectl -n kube-system logs --tail=50 -p weave-net-268xw weave 
INFO: 2017/11/06 05:11:35.801663 overlay_switch ->[a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/06 05:11:35.801684 ->[10.20.53.217:56563|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection added (new peer)
INFO: 2017/11/06 05:11:35.801784 ->[10.20.53.217:56563|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection shutting down due to error: read tcp4 10.20.55.87:6783->10.20.53.217:56563: read: connection reset by peer
INFO: 2017/11/06 05:11:35.801810 ->[10.20.53.217:56563|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection deleted
INFO: 2017/11/06 05:11:35.801819 Removed unreachable peer a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)
INFO: 2017/11/06 05:11:45.167553 ->[10.20.53.217:6783] attempting connection
INFO: 2017/11/06 05:11:45.448088 ->[10.20.53.217:6783|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection ready; using protocol version 2
INFO: 2017/11/06 05:11:45.448158 overlay_switch ->[a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/06 05:11:45.448183 ->[10.20.53.217:6783|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection added (new peer)
INFO: 2017/11/06 05:11:45.846224 ->[10.20.52.110:33273] connection accepted
INFO: 2017/11/06 05:11:45.989033 ->[10.20.53.217:6783|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection shutting down due to error: Received update for IP range I own at 100.117.80.0 v280: incoming message says owner 72:07:06:06:16:3c v282
INFO: 2017/11/06 05:11:45.989178 ->[10.20.53.217:6783|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection deleted
INFO: 2017/11/06 05:11:45.989198 Removed unreachable peer 72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)
INFO: 2017/11/06 05:11:45.989205 Removed unreachable peer e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal)
INFO: 2017/11/06 05:11:45.989210 Removed unreachable peer a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)
INFO: 2017/11/06 05:11:45.989216 Removed unreachable peer 9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)
INFO: 2017/11/06 05:11:45.989575 ->[10.20.52.110:33273|72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)]: connection ready; using protocol version 2
INFO: 2017/11/06 05:11:45.989632 overlay_switch ->[72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/06 05:11:45.989649 ->[10.20.52.110:33273|72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)]: connection added (new peer)
INFO: 2017/11/06 05:11:46.505328 ->[10.20.52.110:33273|72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)]: connection shutting down due to error: read tcp4 10.20.55.87:6783->10.20.52.110:33273: read: connection reset by peer
INFO: 2017/11/06 05:11:46.505493 ->[10.20.52.110:33273|72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)]: connection deleted
INFO: 2017/11/06 05:11:46.505519 Removed unreachable peer 72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)
INFO: 2017/11/06 05:11:48.494273 Discovered local MAC 7e:f3:74:a9:30:84
INFO: 2017/11/06 05:12:07.264884 ->[10.20.52.110:6783] attempting connection
INFO: 2017/11/06 05:12:07.471203 ->[10.20.61.57:6783] attempting connection
INFO: 2017/11/06 05:12:07.557285 ->[10.20.61.57:6783|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection ready; using protocol version 2
INFO: 2017/11/06 05:12:07.557399 overlay_switch ->[9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/06 05:12:07.557456 ->[10.20.61.57:6783|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection added (new peer)
INFO: 2017/11/06 05:12:07.558784 ->[10.20.61.57:6783|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection shutting down due to error: Received update for IP range I own at 100.117.80.0 v280: incoming message says owner 72:07:06:06:16:3c v282
INFO: 2017/11/06 05:12:07.558934 ->[10.20.61.57:6783|9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)]: connection deleted
INFO: 2017/11/06 05:12:07.558970 Removed unreachable peer 9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)
INFO: 2017/11/06 05:12:07.558986 Removed unreachable peer e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal)
INFO: 2017/11/06 05:12:07.559000 Removed unreachable peer a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)
INFO: 2017/11/06 05:12:07.559013 Removed unreachable peer 72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)
INFO: 2017/11/06 05:12:07.598062 ->[10.20.52.110:6783|72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)]: connection ready; using protocol version 2
INFO: 2017/11/06 05:12:07.598224 overlay_switch ->[72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/06 05:12:07.598344 ->[10.20.52.110:6783|72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)]: connection added (new peer)
INFO: 2017/11/06 05:12:07.599366 ->[10.20.52.110:6783|72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)]: connection shutting down due to error: Received update for IP range I own at 100.117.80.0 v280: incoming message says owner 72:07:06:06:16:3c v282
INFO: 2017/11/06 05:12:07.599499 ->[10.20.52.110:6783|72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)]: connection deleted
INFO: 2017/11/06 05:12:07.599535 Removed unreachable peer 72:07:06:06:16:3c(ip-10-20-52-110.us-west-1.compute.internal)
INFO: 2017/11/06 05:12:07.599559 Removed unreachable peer 9a:f1:cb:a7:f9:87(ip-10-20-61-57.us-west-1.compute.internal)
INFO: 2017/11/06 05:12:07.599577 Removed unreachable peer e2:cd:be:ab:a6:d1(ip-10-20-40-87.us-west-1.compute.internal)
INFO: 2017/11/06 05:12:07.599595 Removed unreachable peer a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)
INFO: 2017/11/06 05:12:07.654424 ->[10.20.53.217:55149] connection accepted
INFO: 2017/11/06 05:12:07.735839 ->[10.20.53.217:55149|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection ready; using protocol version 2
INFO: 2017/11/06 05:12:07.736150 overlay_switch ->[a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)] using fastdp
INFO: 2017/11/06 05:12:07.857132 ->[10.20.53.217:55149|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection added (new peer)
INFO: 2017/11/06 05:12:07.857322 ->[10.20.53.217:55149|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection shutting down due to error: read tcp4 10.20.55.87:6783->10.20.53.217:55149: read: connection reset by peer
INFO: 2017/11/06 05:12:07.857354 ->[10.20.53.217:55149|a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)]: connection deleted
INFO: 2017/11/06 05:12:07.857366 Removed unreachable peer a6:e9:b1:63:0c:60(ip-10-20-53-217.us-west-1.compute.internal)

Note that the other node isn't reporting 'KILLED'. Perhaps weave-net was killed without being restarted, which could be a problem with Docker not restarting the container effectively.
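One way to test that theory (assuming shell access to the node; the `weave` name filter and the placeholder container ID are illustrative) is to check the weave container's exit state and the restart policy Docker is applying to it:

```console
# Has the weave container exited without being restarted?
$ docker ps -a --filter name=weave --format '{{.Names}} {{.Status}}'

# What restart policy does Docker have for it, and how often has it restarted?
$ docker inspect -f '{{.HostConfig.RestartPolicy.Name}} (restarts: {{.RestartCount}})' <weave-container-id>
```

If the container shows `Exited` with a restart policy of `no`, that would support the "killed without restart" explanation.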

Network:

$ ip route
$ ip -4 -o addr
$ sudo iptables-save
renewooller commented 7 years ago

Update: killing the weave pods and the app pods doesn't solve the problem.

After terminating all of the instances of the two clusters in AWS, the newly created instances do not show the same problems.

josiahjohnston commented 6 years ago

I'm also encountering this issue, and can't even figure out what tainted our AWS instances. Rebuilding on new Amazon VMs didn't work for me, even though I followed my notes exactly, which had previously worked.

brb commented 6 years ago

@josiahjohnston Could you provide more info:

rade commented 6 years ago

Seems like we've reached a dead end in the investigation here. -> closing

wendorf commented 6 years ago

I'm also seeing this issue using weave 2.3.0, kops 1.9.0, and k8s 1.9.7 on AWS. 3 master instance groups, 3 worker instance groups, for a total of 73 nodes.

I'm getting pods not being able to schedule for a couple of reasons:

 Normal   SandboxChanged          38m (x20 over 1h)  kubelet, ip-10-79-162-5.ec2.internal  Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  1m (x32 over 1h)   kubelet, ip-10-79-162-5.ec2.internal  Failed create pod sandbox.
  Normal   SuccessfulMountVolume   39m                kubelet, ip-10-79-140-113.ec2.internal  MountVolume.SetUp succeeded for volume "default-token-08nxm"
  Warning  NetworkNotReady         39m (x2 over 39m)  kubelet, ip-10-79-140-113.ec2.internal  network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]
  Normal   SandboxChanged          4m (x11 over 35m)  kubelet, ip-10-79-140-113.ec2.internal  Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  0s (x12 over 35m)  kubelet, ip-10-79-140-113.ec2.internal  Failed create pod sandbox.
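The `cni config uninitialized` message means the kubelet found no CNI configuration on that node. A quick check on an affected node (paths are the kubelet defaults; the exact Weave config file name varies by version, e.g. `10-weave.conf` vs `10-weave.conflist`) is:

```console
# Did the weave-net daemonset write its CNI config on this node?
$ ls /etc/cni/net.d/

# Are the CNI plugin binaries in place?
$ ls /opt/cni/bin/ | grep weave
```

An empty `/etc/cni/net.d/` would explain the `NetworkPluginNotReady` events even when the weave pod appears to be running.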

Weave status:

→ kubectl exec -it weave-net-s4zhd -nkube-system /bin/sh
Defaulting container name to weave.
Use 'kubectl describe pod/weave-net-s4zhd -n kube-system' to see all of the containers in this pod.
/home/weave # ./weave --local status connections
-> 10.79.140.39:6783     established fastdp 4a:81:3f:63:ec:c9(ip-10-79-140-39.ec2.internal) mtu=8912
[TRIMMED]
-> 10.79.143.166:6783    established fastdp e2:ba:12:25:9e:78(ip-10-79-143-166.ec2.internal) mtu=8912
-> 10.79.139.100:6783    failed      Inconsistent entries for 100.96.4.8: owned by 02:09:c4:42:ee:96 but incoming message says c6:31:94:28:68:8a, retry: 2018-04-28 18:18:57.659632392 +0000 UTC m=+627.531238776
-> 10.79.140.113:6783    failed      cannot connect to ourself, retry: never
-> 10.79.128.126:6783    failed      Inconsistent entries for 100.96.4.8: owned by 02:09:c4:42:ee:96 but incoming message says c6:31:94:28:68:8a, retry: 2018-04-28 18:19:02.37361639 +0000 UTC m=+632.245222487
-> 10.79.155.218:6783    failed      Inconsistent entries for 100.96.4.8: owned by 02:09:c4:42:ee:96 but incoming message says c6:31:94:28:68:8a, retry: 2018-04-28 18:17:25.50500513 +0000 UTC m=+535.376611601
-> 10.79.139.70:6783     failed      Inconsistent entries for 100.96.4.8: owned by 02:09:c4:42:ee:96 but incoming message says c6:31:94:28:68:8a, retry: 2018-04-28 18:17:58.071164239 +0000 UTC m=+567.942770608
-> 10.79.147.75:6783     failed      Inconsistent entries for 100.96.4.8: owned by 02:09:c4:42:ee:96 but incoming message says c6:31:94:28:68:8a, retry: 2018-04-28 18:20:03.606968196 +0000 UTC m=+693.478574576
brb commented 6 years ago

@wendorf Can you please provide logs of the following weave containers:

zacblazic commented 6 years ago

We've also been encountering this issue recently.

Our clusters are running the following:

Over-claiming

It seems that two nodes have claimed (or are attempting to claim) the same IP address range.

The node ip-10-83-42-111.ec2.internal claimed the 100.105.68.0 IP range:

...
{
    "Token": "100.105.68.0",
    "Size": 1024,
    "Peer": "12:84:ba:e1:47:79",
    "Nickname": "ip-10-83-42-111.ec2.internal",
    "IsKnownPeer": true,
    "Version": 1451
}
...

However, the node ip-10-83-54-199.ec2.internal is also attempting to claim the 100.105.68.0 IP range:

-> 10.83.80.134:6783     failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:15:31.000788717 +0000 UTC m=+1034.303622109

This issue was raised a while back https://github.com/weaveworks/weave/issues/3190 and was apparently fixed by https://github.com/weaveworks/weave/pull/3192.

Restarting the weave pods on either ip-10-83-42-111.ec2.internal or ip-10-83-54-199.ec2.internal does not change the situation.
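For what it's worth, one commonly suggested remediation for a stuck ownership conflict is to make the losing peer forget its persisted IPAM state rather than just restarting it. A sketch, assuming the default host path for the Weave Net data file and a pod name like the ones above:

```console
# On the losing node (ip-10-83-54-199): stop the peer, then clear its
# persisted IPAM ring so it rejoins with a clean slate.
$ kubectl delete pod -n kube-system weave-net-5qbw9
$ sudo rm /var/lib/weave/weave-netdata.db    # run on the node itself

# The recreated pod should re-register and obtain a fresh range.
```

This is a sketch, not something we've verified fixes this particular conflict; removing the data file releases any addresses that node had allocated, so it's only safe if no workloads on that node need their current IPs preserved.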

Non-existent peer

Not sure if it's related, but all nodes appear to be attempting to remove a non-existent peer:

DEBU: 2018/06/05 11:20:29.571705 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:20:29.571731 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:20:29.571739 [kube-peers] Existing annotation 46:f4:4b:41:dd:11

This error started on 2018/05/16 (~20 days ago). Around this time we were having issues with weave consuming a large amount of memory.
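For anyone else hitting this removal loop: weave-kube keeps its peer list as an annotation on a ConfigMap in kube-system, so the stuck entry can be inspected directly, and a dead peer's address space can be reclaimed with `rmpeer`. A sketch (pod name taken from the logs below; verify the peer really is gone before reclaiming, since `rmpeer` transfers its ranges to the peer you run it on):

```console
# Inspect the peer list kube-peers reconciles against:
$ kubectl get configmap -n kube-system weave-net -o yaml

# Reclaim the dead peer's address space from a live weave pod:
$ kubectl exec -n kube-system weave-net-5qbw9 -c weave -- \
    /home/weave/weave --local rmpeer 46:f4:4b:41:dd:11
```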

Diagnostics

From the node with the issue (ip-10-83-54-199.ec2.internal):

Weave IPAM

```console
$ ./weave --local status ipam
6a:5c:7c:af:e3:2e(ip-10-83-54-199.ec2.internal)   297040 IPs (14.2% of total) (4 active)
46:f4:4b:41:dd:11(ip-10-83-124-112.ec2.internal)   65536 IPs (03.1% of total) - unreachable!
fe:6a:75:3b:98:1f(ip-10-83-123-141.ec2.internal)    8192 IPs (00.4% of total) - unreachable!
7a:54:95:96:77:8e(ip-10-83-98-15.ec2.internal)     65536 IPs (03.1% of total) - unreachable!
c2:35:ec:05:d8:a5()                                65536 IPs (03.1% of total) - unreachable!
1a:cd:c1:53:24:bf(ip-10-83-36-17.ec2.internal)     16384 IPs (00.8% of total) - unreachable!
1e:dd:1b:3a:d3:11(ip-10-83-109-170.ec2.internal)   65536 IPs (03.1% of total) - unreachable!
3e:06:82:a1:9c:ec(ip-10-83-103-30.ec2.internal)    66048 IPs (03.1% of total) - unreachable!
1e:1c:57:4a:9e:66(ip-10-83-52-111.ec2.internal)     8192 IPs (00.4% of total) - unreachable!
fe:9a:a2:ec:04:20(ip-10-83-117-100.ec2.internal)   65536 IPs (03.1% of total) - unreachable!
2e:60:25:ef:21:f3(ip-10-83-125-194.ec2.internal)   32768 IPs (01.6% of total) - unreachable!
22:e1:23:b2:2f:8c(ip-10-83-59-237.ec2.internal)     2048 IPs (00.1% of total) - unreachable!
92:6b:5f:82:d0:24(ip-10-83-41-63.ec2.internal)     98304 IPs (04.7% of total) - unreachable!
6e:89:89:29:40:32(ip-10-83-98-46.ec2.internal)     16384 IPs (00.8% of total) - unreachable!
fa:ea:3e:94:41:e0(ip-10-83-85-183.ec2.internal)    16384 IPs (00.8% of total) - unreachable!
16:67:3c:ec:79:79(ip-10-83-47-111.ec2.internal)    16384 IPs (00.8% of total) - unreachable!
ba:f1:81:93:ab:b4(ip-10-83-32-140.ec2.internal)    32768 IPs (01.6% of total) - unreachable!
8a:9b:87:49:9d:5b(ip-10-83-67-78.ec2.internal)      8192 IPs (00.4% of total) - unreachable!
9a:55:19:b7:97:7c(ip-10-83-76-130.ec2.internal)     2048 IPs (00.1% of total) - unreachable!
f2:39:2f:d4:16:d7(ip-10-83-72-203.ec2.internal)    30720 IPs (01.5% of total) - unreachable!
ee:c9:d9:9b:bd:ec(ip-10-83-78-215.ec2.internal)     4608 IPs (00.2% of total) - unreachable!
9e:d3:fb:04:82:af(ip-10-83-109-104.ec2.internal)   16384 IPs (00.8% of total) - unreachable!
7e:44:ff:00:f5:f9(ip-10-83-60-42.ec2.internal)      4096 IPs (00.2% of total) - unreachable!
8e:71:01:6f:cd:9f(ip-10-83-73-41.ec2.internal)     16384 IPs (00.8% of total) - unreachable!
86:a9:66:cb:4a:c6(ip-10-83-85-200.ec2.internal)    16384 IPs (00.8% of total) - unreachable!
f6:b4:71:ad:ba:22(ip-10-83-108-91.ec2.internal)    65536 IPs (03.1% of total) - unreachable!
f2:11:ce:93:da:0b(ip-10-83-34-95.ec2.internal)     32768 IPs (01.6% of total) - unreachable!
0e:8c:f8:33:67:7f(ip-10-83-87-92.ec2.internal)     16384 IPs (00.8% of total) - unreachable!
6e:1b:67:ca:e9:72(ip-10-83-101-85.ec2.internal)    65536 IPs (03.1% of total) - unreachable!
7a:fb:6b:30:aa:2a(ip-10-83-106-172.ec2.internal)   32768 IPs (01.6% of total) - unreachable!
72:bf:6c:d7:7a:03(ip-10-83-85-193.ec2.internal)    32768 IPs (01.6% of total) - unreachable!
3a:47:b8:a9:34:ef(ip-10-83-126-221.ec2.internal)   19456 IPs (00.9% of total) - unreachable!
3a:5f:b6:6e:dd:bb(ip-10-83-80-134.ec2.internal)    15360 IPs (00.7% of total) - unreachable!
fe:f3:20:f4:5a:52(ip-10-83-60-125.ec2.internal)    65536 IPs (03.1% of total) - unreachable!
12:ea:66:1b:70:3d(ip-10-83-38-5.ec2.internal)       8192 IPs (00.4% of total) - unreachable!
ae:9e:35:14:6c:0c(ip-10-83-109-171.ec2.internal)   32768 IPs (01.6% of total) - unreachable!
86:d8:10:87:a0:b1(ip-10-83-126-77.ec2.internal)    32768 IPs (01.6% of total) - unreachable!
26:00:36:0b:01:55(ip-10-83-51-193.ec2.internal)    32768 IPs (01.6% of total) - unreachable!
92:55:13:0d:72:f7(ip-10-83-96-158.ec2.internal)     2048 IPs (00.1% of total) - unreachable!
66:15:ce:c7:af:ab(ip-10-83-96-156.ec2.internal)   131072 IPs (06.2% of total) - unreachable!
9a:5d:d4:e9:78:29(ip-10-83-109-188.ec2.internal)    1024 IPs (00.0% of total) - unreachable!
8e:02:27:40:f8:fd(ip-10-83-106-134.ec2.internal)   15872 IPs (00.8% of total) - unreachable!
36:35:70:69:df:eb(ip-10-83-50-197.ec2.internal)    32768 IPs (01.6% of total) - unreachable!
4a:d5:9f:aa:e3:99(ip-10-83-124-188.ec2.internal)    8192 IPs (00.4% of total) - unreachable!
02:77:dd:50:7b:3a(ip-10-83-63-111.ec2.internal)    32768 IPs (01.6% of total) - unreachable!
16:d8:40:86:d6:bf(ip-10-83-41-74.ec2.internal)      6576 IPs (00.3% of total) - unreachable!
16:e5:a9:4e:45:a4(ip-10-83-115-236.ec2.internal)  327680 IPs (15.6% of total) - unreachable!
9a:8b:af:32:3b:0d(ip-10-83-81-239.ec2.internal)    16384 IPs (00.8% of total) - unreachable!
62:03:63:9c:81:8c(ip-10-83-124-69.ec2.internal)    32768 IPs (01.6% of total) - unreachable!
```

Weave connections

```console
$ ./weave --local status connections
-> 10.83.80.134:6783     failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:15:31.000788717 +0000 UTC m=+1034.303622109
-> 10.83.98.46:6783      failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:11:04.422577727 +0000 UTC m=+767.725411089
-> 10.83.126.221:6783    failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:11:33.742999475 +0000 UTC m=+797.045832929
-> 10.83.85.183:6783     failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:08:47.877010584 +0000 UTC m=+631.179843953
-> 10.83.60.42:6783      failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:09:09.444156631 +0000 UTC m=+652.746990055
-> 10.83.78.215:6783     failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:11:40.747663238 +0000 UTC m=+804.050496608
-> 10.83.124.188:6783    failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:10:38.016883177 +0000 UTC m=+741.319716603
-> 10.83.42.111:6783     failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:14:14.702258004 +0000 UTC m=+958.005091377
-> 10.83.109.188:6783    failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:11:40.776242153 +0000 UTC m=+804.079075513
-> 10.83.36.17:6783      failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:10:19.676858864 +0000 UTC m=+722.979692293
-> 10.83.59.237:6783     failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:08:27.22810513 +0000 UTC m=+610.530938541
-> 10.83.41.74:6783      failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:11:44.830033868 +0000 UTC m=+808.132867307
-> 10.83.54.199:6783     failed      cannot connect to ourself, retry: never
-> 10.83.87.92:6783      failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:10:32.774426814 +0000 UTC m=+736.077260261
-> 10.83.76.130:6783     failed      Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460, retry: 2018-06-05 11:10:48.434728192 +0000 UTC m=+751.737561589
```

Weave logs

```console
$ kubectl logs -f -n=kube-system weave-net-5qbw9 -c=weave
DEBU: 2018/06/05 11:15:30.215989 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:30.215999 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:30.415837 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:30.415863 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:30.415872 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:30.616648 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:30.616686 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:30.616699 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:30.816246 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:30.816273 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:30.816284 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
INFO: 2018/06/05 11:15:31.001044 ->[10.83.80.134:6783] attempting connection
INFO: 2018/06/05 11:15:31.002845 ->[10.83.80.134:6783|3a:5f:b6:6e:dd:bb(ip-10-83-80-134.ec2.internal)]: connection ready; using protocol version 2
INFO: 2018/06/05 11:15:31.002919 overlay_switch ->[3a:5f:b6:6e:dd:bb(ip-10-83-80-134.ec2.internal)] using fastdp
INFO: 2018/06/05 11:15:31.002973 ->[10.83.80.134:6783|3a:5f:b6:6e:dd:bb(ip-10-83-80-134.ec2.internal)]: connection added (new peer)
INFO: 2018/06/05 11:15:31.013028 ->[10.83.80.134:6783|3a:5f:b6:6e:dd:bb(ip-10-83-80-134.ec2.internal)]: connection shutting down due to error: Received update for IP range I own at 100.105.68.0 v592: incoming message says owner 12:84:ba:e1:47:79 v1460
INFO: 2018/06/05 11:15:31.013113 ->[10.83.80.134:6783|3a:5f:b6:6e:dd:bb(ip-10-83-80-134.ec2.internal)]: connection deleted
INFO: 2018/06/05 11:15:31.013140 Removed unreachable peer 12:84:ba:e1:47:79(ip-10-83-42-111.ec2.internal)
INFO: 2018/06/05 11:15:31.013151 Removed unreachable peer 7e:44:ff:00:f5:f9(ip-10-83-60-42.ec2.internal)
INFO: 2018/06/05 11:15:31.013160 Removed unreachable peer 9a:55:19:b7:97:7c(ip-10-83-76-130.ec2.internal)
INFO: 2018/06/05 11:15:31.013168 Removed unreachable peer 3a:5f:b6:6e:dd:bb(ip-10-83-80-134.ec2.internal)
INFO: 2018/06/05 11:15:31.013176 Removed unreachable peer 22:e1:23:b2:2f:8c(ip-10-83-59-237.ec2.internal)
INFO: 2018/06/05 11:15:31.013185 Removed unreachable peer 0e:8c:f8:33:67:7f(ip-10-83-87-92.ec2.internal)
INFO: 2018/06/05 11:15:31.013192 Removed unreachable peer ee:c9:d9:9b:bd:ec(ip-10-83-78-215.ec2.internal)
INFO: 2018/06/05 11:15:31.013198 Removed unreachable peer 1a:cd:c1:53:24:bf(ip-10-83-36-17.ec2.internal)
INFO: 2018/06/05 11:15:31.013204 Removed unreachable peer 9a:5d:d4:e9:78:29(ip-10-83-109-188.ec2.internal)
INFO: 2018/06/05 11:15:31.013209 Removed unreachable peer 16:d8:40:86:d6:bf(ip-10-83-41-74.ec2.internal)
INFO: 2018/06/05 11:15:31.013215 Removed unreachable peer 4a:d5:9f:aa:e3:99(ip-10-83-124-188.ec2.internal)
INFO: 2018/06/05 11:15:31.013221 Removed unreachable peer fa:ea:3e:94:41:e0(ip-10-83-85-183.ec2.internal)
INFO: 2018/06/05 11:15:31.013226 Removed unreachable peer 3a:47:b8:a9:34:ef(ip-10-83-126-221.ec2.internal)
INFO: 2018/06/05 11:15:31.013232 Removed unreachable peer 6e:89:89:29:40:32(ip-10-83-98-46.ec2.internal)
DEBU: 2018/06/05 11:15:31.016697 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:31.016753 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:31.016777 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:31.215970 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:31.215996 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:31.216007 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:31.416197 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:31.416222 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:31.416231 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:31.616070 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:31.616096 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:31.616105 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:31.815906 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:31.815931 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:31.815941 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:32.015766 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:32.015791 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:32.015801 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:32.216196 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:32.216221 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:32.216231 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:32.415785 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:32.415811 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:32.415821 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:32.615892 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:32.615919 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:32.615928 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:32.815841 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:32.815867 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:32.815877 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:33.016143 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:33.016172 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:33.016182 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:33.216024 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:33.216049 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:33.216060 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:33.416010 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:33.416037 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:33.416046 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:33.616145 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:33.616173 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:33.616182 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:33.815812 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:33.815836 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:33.815847 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:34.017516 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:34.017542 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:34.017552 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:34.216132 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:34.216156 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:34.216167 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:34.415847 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:34.415872 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:34.415882 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:34.615669 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:34.615696 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:34.615706 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:34.815762 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:34.815788 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:34.815798 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:35.015882 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:35.015909 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:35.015919 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:35.215745 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:35.215770 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:35.215780 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:35.415768 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:35.415793 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:35.415803 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:35.615818 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:35.615843 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:35.615853 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:35.815760 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:35.815785 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:35.815796 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:36.015913 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:36.015940 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:36.015950 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:36.215677 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:36.215705 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:36.215715 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:36.415952 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:36.415978 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:36.415988 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:36.616091 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:36.616116 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:36.616127 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:36.816059 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:36.816086 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:36.816099 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:37.016210 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:37.016237 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:37.016246 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:37.216211 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:37.216237 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:37.216247 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:37.415755 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:37.415781 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:37.415791 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:37.615952 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:37.615978 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:37.615988 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:37.815813 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:37.815838 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:37.815848 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:38.016128 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:38.016157 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:38.016167 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:38.216355 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:38.216380 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:38.216391 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:38.415765 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:38.415788 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:38.415798 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:38.616028 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:38.616055 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:38.616065 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:38.818819 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:38.818845 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:38.818854 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:39.016368 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:39.016394 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:39.016404 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:39.215962 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:39.215986 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:39.215995 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:39.415718 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:39.415745 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:39.415754 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:39.616077 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:39.616102 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:39.616117 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:39.815972 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:39.815998 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:39.816008 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:40.016211 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:40.016238 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:40.016248 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:40.216860 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:40.216886 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:40.216897 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:15:40.416162 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:15:40.416187 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:15:40.416197 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
```
DEBU: 2018/06/05 11:15:40.616059 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:40.616083 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:40.616093 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:40.816285 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:40.816312 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:40.816322 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:41.016070 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:41.016098 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:41.016109 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:41.216637 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:41.216663 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:41.216673 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:41.416124 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:41.416188 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:41.416220 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:41.616358 [kube-peers] Nodes that have disappeared: 
map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:41.616383 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:41.616393 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:41.815816 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:41.815841 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:41.815850 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:42.016532 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:42.016558 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:42.016568 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:42.217878 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:42.217905 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:42.217919 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:42.415987 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:42.416019 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:42.416033 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:42.616205 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 
11:15:42.616230 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:42.616240 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:42.816255 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:42.816280 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:42.816290 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:43.016187 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:43.016212 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:43.016223 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:43.216480 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:43.216503 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:43.216513 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:43.416036 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:43.416063 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:43.416074 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:43.615932 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] DEBU: 2018/06/05 11:15:43.615960 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 
ip-10-83-124-112.ec2.internal} DEBU: 2018/06/05 11:15:43.615971 [kube-peers] Existing annotation 46:f4:4b:41:dd:11 DEBU: 2018/06/05 11:15:43.815733 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}] ```


From a node that has already claimed the IP range (ip-10-83-42-111.ec2.internal):

Weave IPAM

```console
$ ./weave --local status ipam
12:84:ba:e1:47:79(ip-10-83-42-111.ec2.internal)     1024 IPs (00.0% of total) (61 active)
16:d8:40:86:d6:bf(ip-10-83-41-74.ec2.internal)    334256 IPs (15.9% of total)
3a:47:b8:a9:34:ef(ip-10-83-126-221.ec2.internal)   19504 IPs (00.9% of total)
1a:cd:c1:53:24:bf(ip-10-83-36-17.ec2.internal)    507856 IPs (24.2% of total)
22:e1:23:b2:2f:8c(ip-10-83-59-237.ec2.internal)   190464 IPs (09.1% of total)
fa:ea:3e:94:41:e0(ip-10-83-85-183.ec2.internal)    16384 IPs (00.8% of total)
6a:5c:7c:af:e3:2e(ip-10-83-54-199.ec2.internal)   296016 IPs (14.1% of total) - unreachable!
6e:89:89:29:40:32(ip-10-83-98-46.ec2.internal)     16388 IPs (00.8% of total)
ee:c9:d9:9b:bd:ec(ip-10-83-78-215.ec2.internal)     4608 IPs (00.2% of total)
4a:d5:9f:aa:e3:99(ip-10-83-124-188.ec2.internal)    8192 IPs (00.4% of total)
9a:55:19:b7:97:7c(ip-10-83-76-130.ec2.internal)   313344 IPs (14.9% of total)
3a:5f:b6:6e:dd:bb(ip-10-83-80-134.ec2.internal)   367612 IPs (17.5% of total)
0e:8c:f8:33:67:7f(ip-10-83-87-92.ec2.internal)     16384 IPs (00.8% of total)
9a:5d:d4:e9:78:29(ip-10-83-109-188.ec2.internal)    1024 IPs (00.0% of total)
7e:44:ff:00:f5:f9(ip-10-83-60-42.ec2.internal)      4096 IPs (00.2% of total)
```

Weave connections

```console
$ ./weave --local status connections
-> 10.83.41.74:6783     established fastdp 16:d8:40:86:d6:bf(ip-10-83-41-74.ec2.internal) mtu=1376
-> 10.83.126.221:6783   established fastdp 3a:47:b8:a9:34:ef(ip-10-83-126-221.ec2.internal) mtu=1376
-> 10.83.76.130:6783    established fastdp 9a:55:19:b7:97:7c(ip-10-83-76-130.ec2.internal) mtu=1376
-> 10.83.109.188:6783   established fastdp 9a:5d:d4:e9:78:29(ip-10-83-109-188.ec2.internal) mtu=1376
-> 10.83.60.42:6783     established fastdp 7e:44:ff:00:f5:f9(ip-10-83-60-42.ec2.internal) mtu=1376
-> 10.83.98.46:6783     established fastdp 6e:89:89:29:40:32(ip-10-83-98-46.ec2.internal) mtu=1376
-> 10.83.85.183:6783    established fastdp fa:ea:3e:94:41:e0(ip-10-83-85-183.ec2.internal) mtu=1376
-> 10.83.124.188:6783   established fastdp 4a:d5:9f:aa:e3:99(ip-10-83-124-188.ec2.internal) mtu=1376
-> 10.83.36.17:6783     established fastdp 1a:cd:c1:53:24:bf(ip-10-83-36-17.ec2.internal) mtu=1376
-> 10.83.78.215:6783    established fastdp ee:c9:d9:9b:bd:ec(ip-10-83-78-215.ec2.internal) mtu=1376
-> 10.83.80.134:6783    established fastdp 3a:5f:b6:6e:dd:bb(ip-10-83-80-134.ec2.internal) mtu=1376
-> 10.83.59.237:6783    established fastdp 22:e1:23:b2:2f:8c(ip-10-83-59-237.ec2.internal) mtu=1376
-> 10.83.87.92:6783     established fastdp 0e:8c:f8:33:67:7f(ip-10-83-87-92.ec2.internal) mtu=1376
-> 10.83.42.111:6783    failed             cannot connect to ourself, retry: never
-> 10.83.54.199:6783    failed             read tcp4 10.83.42.111:48865->10.83.54.199:6783: read: connection reset by peer, retry: 2018-06-05 11:15:00.712417081 +0000 UTC m=+836.618070318
```

Weave logs

```console
$ kubectl logs -f -n=kube-system weave-net-tk4np -c=weave
DEBU: 2018/06/05 11:20:20.171724 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
DEBU: 2018/06/05 11:20:20.373726 [kube-peers] Nodes that have disappeared: map[ip-10-83-124-112.ec2.internal:{46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}]
DEBU: 2018/06/05 11:20:20.373752 [kube-peers] Preparing to remove disappeared peer {46:f4:4b:41:dd:11 ip-10-83-124-112.ec2.internal}
DEBU: 2018/06/05 11:20:20.373759 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
INFO: 2018/06/05 11:20:20.701216 Removed unreachable peer 6a:5c:7c:af:e3:2e(ip-10-83-54-199.ec2.internal)
[... the same three-line [kube-peers] cycle repeats roughly every 200ms, unchanged apart from the timestamp; only the non-repeating lines are shown below ...]
INFO: 2018/06/05 11:20:22.743656 Discovered remote MAC 5a:d8:6a:42:96:2e at 9a:5d:d4:e9:78:29(ip-10-83-109-188.ec2.internal)
INFO: 2018/06/05 11:20:27.070323 Removed unreachable peer 6a:5c:7c:af:e3:2e(ip-10-83-54-199.ec2.internal)
INFO: 2018/06/05 11:20:28.677615 Discovered remote MAC ea:2d:28:ff:48:cf at ee:c9:d9:9b:bd:ec(ip-10-83-78-215.ec2.internal)
DEBU: 2018/06/05 11:20:29.571739 [kube-peers] Existing annotation 46:f4:4b:41:dd:11
```

bboreham commented 6 years ago

Please open a new issue, don't comment on old issues that you think to be the same.

This issue was raised a while back #3190

Absolutely not. That issue had a very clear cause, and was fixed.

When you open a new issue, you will be asked for details that will help to troubleshoot, particularly the log files of the weave containers. We need the logs from the first time it went wrong; without those it is very unlikely we can debug.
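
A sketch of capturing those first-failure logs before a pod restart discards them (the pod name is a placeholder, and the `name=weave-net` label is an assumption based on the standard DaemonSet manifest; `--previous` returns the prior container instance's output):

```console
$ kubectl -n kube-system get pods -l name=weave-net -o wide            # find the pod on the affected node
$ kubectl -n kube-system logs weave-net-xxxxx -c weave --previous > weave-first-failure.log
```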

Restarting the weave pods [...] does not change the situation.

Yes, the data is persisted to disk on each node under /var/lib/weave.
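
A sketch of the corresponding per-node reset (the data file name and exact steps are assumptions; deleting the persisted state makes the peer rejoin with a fresh identity, so only do this on a node whose state is known to be bad):

```console
$ kubectl -n kube-system delete pod weave-net-xxxxx     # pod name is a placeholder; the DaemonSet recreates it
$ sudo rm /var/lib/weave/weave-netdata.db               # on that node, before the new pod starts
```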

zacblazic commented 6 years ago

Please open a new issue, don't comment on old issues that you think to be the same.

Will do.

Particularly the log files of the weave containers. We need the logs from the first time when it went wrong; without that it is very unlikely we can debug.

Not sure if this is just a default response, but the logs are in my comment above.

This issue was raised a while back #3190

Absolutely not. That issue had a very clear cause, and was fixed.

I actually put this in the wrong section of my comment; it should have been under "non-existent peer". However, I think you're right.

bboreham commented 6 years ago

the logs are in my comment above.

Ha! I had no idea the little black triangle would open up to show more.

bboreham commented 6 years ago

Existing annotation 46:f4:4b:41:dd:11 suggests there was a peer with that unique ID that crashed mid-reclaim. As you say, it doesn't exist now. That is causing an infinite loop. That isn't related to your reported symptom, but it's worth addressing.

There's nothing in those logs to say what caused the original problem, unfortunately.
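To see the state that kube-peers is looping over, one can compare the recorded peer list against the nodes Kubernetes currently knows about. A hedged sketch, assuming the default object names used by the weave-net DaemonSet (kube-peers keeps its peer list and reclaim annotations on a ConfigMap named `weave-net` in `kube-system`):

```shell
# Sketch: inspect the state kube-peers uses when reclaiming disappeared peers.
# Assumes default names from the weave-net DaemonSet manifest; guarded so it
# is a no-op on machines without kubectl or a cluster.
NS=kube-system

# Peer list and reclaim annotations live on this ConfigMap:
command -v kubectl >/dev/null && \
  kubectl -n "$NS" get configmap weave-net -o yaml || true

# Compare against the nodes the cluster currently has:
command -v kubectl >/dev/null && \
  kubectl get nodes -o wide || true
```

A peer ID in the ConfigMap that matches no current node (like `46:f4:4b:41:dd:11` above) is the kind of leftover entry that drives the repeated reclaim attempts.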

bboreham commented 6 years ago

I added a bit to the docs about the /var/lib/weave data: https://www.weave.works/docs/net/latest/tasks/ipam/troubleshooting-ipam/#seeded-different-peers

zacblazic commented 6 years ago

Existing annotation 46:f4:4b:41:dd:11 suggests there was a peer with that unique ID that crashed mid-reclaim. As you say, it doesn't exist now. That is causing an infinite loop. That isn't related to your reported symptom, but it's worth addressing.

There's nothing in those logs to say what caused the original problem, unfortunately.

Thanks for the feedback!

Should I create a separate issue for this? We're still seeing it flood our logs of all weave containers.

bboreham commented 6 years ago

Sure; the infinite loop is at least something we can see the cause of.

zacblazic commented 6 years ago

We're still seeing it flood our logs of all weave containers.

I stand corrected; it's only happening on 4 of the 15 weave containers.

Raffo commented 6 years ago

@zacblazic We're having the same issue with the same kops/weave versions you reported, is there any fix or workaround? I also don't see any new issue being created.

zetaab commented 6 years ago

I also have this issue with weave 2.3.0 and 2.4.0 (I tried upgrading to see if it would help, but it did not). One of my nodes does not talk to the rest of the cluster, and the logs look exactly the same as in the first post. Restarting machines etc. does not help.

As a temporary fix I have shut down that one node, and now everything seems to be working normally.

hagaibarel commented 6 years ago

Having the same issue with 2.3.0 and 2.4.0; the logs look the same as in the first post. Rebooting the misbehaving nodes appears to help.
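A lighter-weight step than rebooting the whole machine, which is sometimes enough, is to delete just the weave-net pod on the misbehaving node and let the DaemonSet recreate it. A hedged sketch, assuming the standard `name=weave-net` pod label; the node name is a placeholder taken from the logs above:

```shell
# Sketch: bounce the weave-net pod on one misbehaving node instead of
# rebooting the machine. NODE is a placeholder; substitute your node name.
# Guarded so it is a no-op on machines without kubectl or a cluster.
NODE=ip-10-83-78-215.ec2.internal

command -v kubectl >/dev/null && \
  kubectl -n kube-system delete pod -l name=weave-net \
    --field-selector "spec.nodeName=$NODE" || true
```

Note that if the bad state is persisted under /var/lib/weave on the node, a pod restart alone will not clear it, which matches the reports above that only a node shutdown helped.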

bboreham commented 6 years ago

@HagaiBarel please open a new issue; it really helps if the conversations are kept separate.