Closed kesavkolla closed 9 years ago
Please send us the output of weave report.
{
  "Version": "1.2.0",
  "Router": {
    "Protocol": "weave",
    "ProtocolMinVersion": 1,
    "ProtocolMaxVersion": 2,
    "Encryption": true,
    "PeerDiscovery": true,
    "Name": "32:51:fc:ed:2b:14",
    "NickName": "hotelsoft-local2",
    "Port": 6783,
    "Interface": "veth-weave (via pcap)",
    "CaptureStats": {
      "PacketsDropped": 0,
      "PacketsIfDropped": 0,
      "PacketsReceived": 11
    },
    "MACs": null,
    "Peers": [
      {
        "Name": "32:51:fc:ed:2b:14",
        "NickName": "hotelsoft-local2",
        "UID": 14535644035616940483,
        "ShortID": 1120,
        "Version": 12,
        "Connections": [
          {
            "Name": "f6:f1:a1:20:85:49",
            "NickName": "hotelsoft-local3",
            "Address": "192.168.2.12:52806",
            "Outbound": false,
            "Established": true
          },
          {
            "Name": "e2:f7:eb:e6:1d:05",
            "NickName": "hotelsoft-local4",
            "Address": "192.168.2.13:33188",
            "Outbound": false,
            "Established": true
          },
          {
            "Name": "3a:a6:1b:6e:98:51",
            "NickName": "c130",
            "Address": "192.168.2.20:36315",
            "Outbound": false,
            "Established": true
          }
        ]
      },
      {
        "Name": "3a:a6:1b:6e:98:51",
        "NickName": "c130",
        "UID": 7173644225978504253,
        "ShortID": 0,
        "Version": 177,
        "Connections": [
          {
            "Name": "e2:f7:eb:e6:1d:05",
            "NickName": "hotelsoft-local4",
            "Address": "192.168.2.13:6783",
            "Outbound": true,
            "Established": true
          },
          {
            "Name": "32:51:fc:ed:2b:14",
            "NickName": "hotelsoft-local2",
            "Address": "192.168.2.11:6783",
            "Outbound": true,
            "Established": true
          },
          {
            "Name": "f6:f1:a1:20:85:49",
            "NickName": "hotelsoft-local3",
            "Address": "192.168.2.12:6783",
            "Outbound": true,
            "Established": true
          }
        ]
      },
      {
        "Name": "f6:f1:a1:20:85:49",
        "NickName": "hotelsoft-local3",
        "UID": 1416065523893196860,
        "ShortID": 1627,
        "Version": 8,
        "Connections": [
          {
            "Name": "e2:f7:eb:e6:1d:05",
            "NickName": "hotelsoft-local4",
            "Address": "192.168.2.13:43660",
            "Outbound": false,
            "Established": true
          },
          {
            "Name": "32:51:fc:ed:2b:14",
            "NickName": "hotelsoft-local2",
            "Address": "192.168.2.11:6783",
            "Outbound": true,
            "Established": true
          },
          {
            "Name": "3a:a6:1b:6e:98:51",
            "NickName": "c130",
            "Address": "192.168.2.20:55908",
            "Outbound": false,
            "Established": true
          }
        ]
      },
      {
        "Name": "e2:f7:eb:e6:1d:05",
        "NickName": "hotelsoft-local4",
        "UID": 6128004610217879103,
        "ShortID": 3538,
        "Version": 8,
        "Connections": [
          {
            "Name": "32:51:fc:ed:2b:14",
            "NickName": "hotelsoft-local2",
            "Address": "192.168.2.11:6783",
            "Outbound": true,
            "Established": true
          },
          {
            "Name": "3a:a6:1b:6e:98:51",
            "NickName": "c130",
            "Address": "192.168.2.20:36264",
            "Outbound": false,
            "Established": true
          },
          {
            "Name": "f6:f1:a1:20:85:49",
            "NickName": "hotelsoft-local3",
            "Address": "192.168.2.12:6783",
            "Outbound": true,
            "Established": true
          }
        ]
      }
    ],
    "UnicastRoutes": [
      {
        "Dest": "32:51:fc:ed:2b:14",
        "Via": "00:00:00:00:00:00"
      },
      {
        "Dest": "3a:a6:1b:6e:98:51",
        "Via": "3a:a6:1b:6e:98:51"
      },
      {
        "Dest": "f6:f1:a1:20:85:49",
        "Via": "f6:f1:a1:20:85:49"
      },
      {
        "Dest": "e2:f7:eb:e6:1d:05",
        "Via": "e2:f7:eb:e6:1d:05"
      }
    ],
    "BroadcastRoutes": [
      {
        "Source": "e2:f7:eb:e6:1d:05",
        "Via": null
      },
      {
        "Source": "32:51:fc:ed:2b:14",
        "Via": [
          "3a:a6:1b:6e:98:51",
          "f6:f1:a1:20:85:49",
          "e2:f7:eb:e6:1d:05"
        ]
      },
      {
        "Source": "3a:a6:1b:6e:98:51",
        "Via": null
      },
      {
        "Source": "f6:f1:a1:20:85:49",
        "Via": null
      }
    ],
    "Connections": [
      {
        "Address": "192.168.2.20:36315",
        "Outbound": false,
        "State": "established",
        "Info": "sleeve 3a:a6:1b:6e:98:51(c130)"
      },
      {
        "Address": "192.168.2.12:52806",
        "Outbound": false,
        "State": "established",
        "Info": "sleeve f6:f1:a1:20:85:49(hotelsoft-local3)"
      },
      {
        "Address": "192.168.2.13:33188",
        "Outbound": false,
        "State": "established",
        "Info": "sleeve e2:f7:eb:e6:1d:05(hotelsoft-local4)"
      }
    ],
    "Targets": null,
    "OverlayDiagnostics": {
      "sleeve": null
    }
  },
  "IPAM": {
    "Paxos": null,
    "Range": "[10.32.0.0-10.48.0.0)",
    "DefaultSubnet": "10.32.0.0/12",
    "Entries": [
      {
        "Token": "10.32.0.0",
        "Peer": "82:65:f9:36:ad:75",
        "Version": 34
      }
    ],
    "PendingClaims": null,
    "PendingAllocates": null
  },
  "DNS": {
    "Domain": "weave.local.",
    "Address": "172.17.0.1:53",
    "TTL": 1,
    "Entries": null
  }
}
So the entire IP allocation range is owned by 82:65:f9:36:ad:75, which doesn't seem to be alive.
The question is how you got into this state...
Did you do a "rolling" weave reset? See #1593.
Did you shut down nodes without subsequently running weave rmpeer? See http://docs.weave.works/weave/latest_release/ipam.html#stop
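The mismatch above can be checked mechanically: the IPAM entry's owner does not appear anywhere in Router.Peers. A small sketch (function name and the trimmed report fragment are illustrative, not part of weave itself):

```python
def find_dead_ipam_owners(report: dict) -> list[str]:
    """Return IPAM range owners that are absent from the current peer list."""
    live = {p["Name"] for p in report["Router"]["Peers"]}
    owners = {e["Peer"] for e in report["IPAM"]["Entries"]}
    return sorted(owners - live)

# Minimal fragment of the weave report above:
report = {
    "Router": {"Peers": [{"Name": "32:51:fc:ed:2b:14"},
                         {"Name": "3a:a6:1b:6e:98:51"},
                         {"Name": "f6:f1:a1:20:85:49"},
                         {"Name": "e2:f7:eb:e6:1d:05"}]},
    "IPAM": {"Entries": [{"Token": "10.32.0.0",
                          "Peer": "82:65:f9:36:ad:75"}]},
}
print(find_dead_ipam_owners(report))  # → ['82:65:f9:36:ad:75']
```

Any name this prints owns address space but is no longer part of the mesh, which is exactly the stuck state described here.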
I just installed the new version of weave on all 3 nodes and restarted the machines. Is there any way to undo this state?
I just installed the new version of weave on all 3 nodes and restarted the machines.
Did you restart them one at a time?
Is there any way to undo this state?
Yes, see the rmpeer link above.
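Concretely, the recovery suggested here would look something like the following, run on any live node (the peer name is the dead owner from the IPAM entry above; exact subcommands may vary by weave version):

```shell
# Reclaim the IP allocation range held by the dead peer
weave rmpeer 82:65:f9:36:ad:75

# Verify the range has been redistributed to the live peers
weave status ipam
```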
I just used Ansible to restart all 3 nodes. They were all restarted at the same time.
they were all restarted at the same time
Did all the nodes first get shut down, and then restarted? Or is it possible that some node restarted while another was still alive?
Yes, nodes were shut down while others were running. I was updating the storage driver to overlay, so I did it one node at a time, restarting each in turn. So it's possible things got messed up during that window.
Resolved by removing dead peer.
Thanks @rade for the inputs.
I'm not able to start any container with weave. Here are details of my system
docker info
weave version
weave status
When I run any container it just hangs:
docker run -it --rm phusion/baseimage bash -l
Here is the log of weaveproxy
Here is the output of ps -eaf | grep weave
It looks like weave is not able to attach IP addresses.
Please provide some instructions on how to make things work.
-Kesav