openvstorage / framework

The Framework is a set of components and tools which brings the user an interface (GUI / API) to setup, extend and manage an Open vStorage platform.

Ovs remove node gave 'Failed to forget RabbitMQ cluster node' #1542

Open · JeffreyDevloo opened this issue 7 years ago

JeffreyDevloo commented 7 years ago

Problem description

Removing a master node from my cluster succeeded, but some issues were logged along the way. The most unfamiliar to me: 'Failed to forget RabbitMQ cluster node'.

What happened

All nodes were online during the removal

Potential root of the problem

It could be, as stated by @khenderick, that the SSHClient is handed a StorageRouter object whose properties have not been refreshed yet, and therefore assumes that the node is down.
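
If that hypothesis is correct, the failure sits in the loop that prints the "successfully connected to / is unreachable" lines in the output below. A minimal sketch of that suspected path; the helper function and the exception name are assumptions, not verified Framework code:

```python
# Hypothetical reconstruction of the suspected connection loop; the
# function and exception names are assumptions, not the actual source.
from ovs.extensions.generic.sshclient import SSHClient, UnableToConnectException

def connect_to_masters(storagerouters):
    clients = {}
    for sr in storagerouters:
        try:
            # If 'sr' is a stale DAL object whose properties were never
            # refreshed, this can fail even though the node itself is up.
            clients[sr.ip] = SSHClient(sr, username='root')
            print('  Node with IP {0}   successfully connected to'.format(sr.ip))
        except UnableToConnectException:
            print('  Node with IP {0}   is unreachable'.format(sr.ip))
    return clients
```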

Proposed solution

Start the removal with the IP instead, so the client is rebuilt.
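
A minimal sketch of that workaround, assuming SSHClient accepts a plain IP string as well as a StorageRouter object (the constructor usage is an assumption):

```python
from ovs.extensions.generic.sshclient import SSHClient

def get_fresh_client(storagerouter):
    # Build the client from the IP string rather than from the possibly
    # stale StorageRouter object, forcing a fresh connection attempt.
    return SSHClient(storagerouter.ip, username='root')
```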

stdout output of my test:

Some entries, such as the replies, come from my own test logs and can be ignored for this part; they indicate which steps I took during the removal.
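
For context, the scenario drives the interactive removal roughly as follows (a sketch using pexpect; the exact command line and harness internals are assumptions):

```python
import pexpect

# Spawn the interactive removal and answer its confirmation prompt;
# this mirrors the 'Found entry ... / Replying with "y"' log lines below.
child = pexpect.spawn('ovs remove node 10.100.69.121', timeout=600)
child.expect('Are you sure you want to remove node')
child.sendline('y')
child.expect(r'\+\+\+ Remove nodes finished \+\+\+')
```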

ssh://root@10.100.69.120:22/usr/bin/python -u /opt/OpenvStorage/ci/scenarios/cluster/modify_cluster_structure/main.py
2017-03-20 17:24:50 37600 +0100 - jef-node1 - 2836/140200953698048 - helpers/kvm sdk - 0 - DEBUG - Init libvirt
2017-03-20 17:24:51 40800 +0100 - jef-node1 - 2836/140200953698048 - helpers/kvm sdk - 1 - DEBUG - Init connection: 10.100.69.84, root, 0, 0
2017-03-20 17:24:51 76200 +0100 - jef-node1 - 2836/140200953698048 - helpers/kvm sdk - 2 - DEBUG - Init complete
2017-03-20 17:24:51 80700 +0100 - jef-node1 - 2836/140200953698048 - scenario/ci_scenario_cluster_changes - 3 - INFO - Starting case remove_online_master. Layout: {'extras_to_remove': 0, 'masters_to_remove': 1, 'masters_to_add': 1, 'extras_to_add': 0, 'online': True}.
2017-03-20 17:24:51 81100 +0100 - jef-node1 - 2836/140200953698048 - scenario/ci_scenario_cluster_changes - 4 - INFO - Will move towards: {'add': {'master': [<StorageRouter (guid: 08b00e1e-b748-4294-ae16-f161ec957b61, at: 0x7f82f45f6b50)>], 'extra': []}, 'remove': {'master': [<StorageRouter (guid: 08b00e1e-b748-4294-ae16-f161ec957b61, at: 0x7f82f45f6b50)>], 'extra': []}}.
2017-03-20 17:24:51 81100 +0100 - jef-node1 - 2836/140200953698048 - scenario/ci_scenario_cluster_changes - 5 - DEBUG - Starting removal.
2017-03-20 17:24:51 81200 +0100 - jef-node1 - 2836/140200953698048 - scenario/ci_scenario_cluster_changes - 6 - DEBUG - Removing master node jef-node2 - 10.100.69.121
+++++++++++++++++++++
+++  Remove node  +++
+++++++++++++++++++++
WARNING: Some of these steps may take a very long time, please check the logs for more information

Creating SSH connections to remaining master nodes
  Node with IP 10.100.69.120   successfully connected to
  Node with IP 10.100.69.121   is unreachable
  Node with IP 10.100.69.122   is unreachable

+++ Running "noderemoval - validate_removal" hooks +++

Executing alba._validate_removal
Are you sure you want to remove node jef-node2? (y/[n]): 2017-03-20 17:24:53 00200 +0100 - jef-node1 - 2836/140200953698048 - scenario/ci_scenario_cluster_changes - 7 - DEBUG - Found entry Are you sure you want to remove node.
2017-03-20 17:24:53 00300 +0100 - jef-node1 - 2836/140200953698048 - scenario/ci_scenario_cluster_changes - 8 - DEBUG - Replying with "y".
y
y
Starting removal of node jef-node2 - 10.100.69.121
  Marking all Storage Drivers served by Storage Router 10.100.69.121 as offline
  Removing vPools from node

+++ Demoting node +++

Leaving Arakoon ovsdb cluster
Leaving Arakoon config cluster
ERROR: 
Failed to leave configuration cluster
ERROR: Could not determine the Arakoon master node
Update configurations
Restarting master node services
Removing/unconfiguring offline RabbitMQ node
ERROR: 
Failed to forget RabbitMQ cluster node
ERROR: Command ''rabbitmqctl' 'forget_cluster_node' 'rabbit@jef-node2'' returned non-zero exit status 2
Update existing vPools
Restarting services
  [10.100.69.120] watcher-framework stopped
  [10.100.69.120] memcached restarted
  [10.100.69.120] watcher-framework started
  [10.100.69.120] support-agent restarted

+++ Running "nodetype - demote" hooks +++

Executing storagedriver._on_demote
Executing alba._on_demote
Restarting services
  [10.100.69.120] watcher-framework stopped
  [10.100.69.120] memcached restarted
  [10.100.69.120] watcher-framework started
  [10.100.69.120] support-agent restarted

+++ Demote complete +++

Stopping and removing services

+++ Running "noderemoval - remove" hooks +++

Executing storagedriver._on_remove
Executing alba._on_remove
Removing node from model
  [10.100.69.120] watcher-framework stopped
  [10.100.69.120] memcached restarted
  [10.100.69.120] watcher-framework started
  [10.100.69.120] support-agent restarted
Successfully removed node

+++ Remove nodes finished +++

2017-03-20 17:26:03 49800 +0100 - jef-node1 - 2836/140200953698048 - scenario/ci_scenario_cluster_changes - 9 - DEBUG - Found entry +++ Remove nodes finished.
2017-03-20 17:26:03 49800 +0100 - jef-node1 - 2836/140200953698048 - scenario/ci_scenario_cluster_changes - 10 - DEBUG - Replying with "".

lib.log

lib.log on the node that initiated the removal

2017-03-20 17:24:52 67400 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 0 - INFO - Remove node
2017-03-20 17:24:52 67500 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 1 - INFO - WARNING: Some of these steps may take a very long time, please check the logs for more information

2017-03-20 17:24:52 75700 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 2 - INFO - Creating SSH connections to remaining master nodes
2017-03-20 17:24:52 79700 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 3 - INFO -   Node with IP 10.100.69.120   successfully connected to
2017-03-20 17:24:52 82000 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 5 - INFO -   Node with IP 10.100.69.121   is unreachable
2017-03-20 17:24:52 84100 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 7 - INFO -   Node with IP 10.100.69.122   is unreachable
2017-03-20 17:24:52 99100 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 8 - INFO - Running "noderemoval - validate_removal" hooks
2017-03-20 17:24:52 99200 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 9 - INFO - Executing alba._validate_removal
2017-03-20 17:24:53 15800 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 10 - INFO - Starting removal of node jef-node2 - 10.100.69.121
2017-03-20 17:24:53 15900 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 11 - INFO -   Marking all Storage Drivers served by Storage Router 10.100.69.121 as offline
2017-03-20 17:24:53 16300 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 12 - INFO -   Removing vPools from node
2017-03-20 17:24:53 16500 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 13 - INFO - Demoting node
2017-03-20 17:24:53 21700 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 14 - INFO - Leaving Arakoon ovsdb cluster
2017-03-20 17:24:54 08400 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 26 - INFO - Leaving Arakoon config cluster
2017-03-20 17:25:55 23800 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 190 - ERROR - 
Failed to leave configuration cluster
Traceback (most recent call last):
  File "ovs/lib/nodetype.py", line 391, in demote_node
    offline_nodes=offline_node_ips)
  File "ovs/extensions/db/arakoon/ArakoonInstaller.py", line 550, in shrink_cluster
    arakoon_client = ArakoonInstaller._wait_for_cluster(config=config)
  File "ovs/extensions/db/arakoon/ArakoonInstaller.py", line 794, in _wait_for_cluster
    arakoon_client.nop()
  File "ovs/extensions/db/arakoon/pyrakoon/client.py", line 44, in new_function
    return f(self, *args, **kw)
  File "ovs/extensions/db/arakoon/pyrakoon/client.py", line 160, in nop
    return PyrakoonClient._try(self._identifier, self._client.nop)
  File "ovs/extensions/db/arakoon/pyrakoon/client.py", line 213, in _try
    return_value = method(*args, **kwargs)
  File "<update_argspec>", line 5, in nop
  File "ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 145, in wrapped
    return fun(*args, **kwargs)
  File "ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 624, in nop
    return self._client.nop()
  File "<update_argspec>", line 5, in nop
  File "ovs/extensions/db/arakoon/pyrakoon/pyrakoon/client/utils.py", line 99, in wrapped
    return self._process(message) #pylint: disable=W0212
  File "ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1099, in _process
    connection = self._send_to_master(bytes_)
  File "ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1158, in _send_to_master
    self.determine_master()
  File "ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1201, in determine_master
    raise ArakoonNoMaster
ArakoonNoMaster: Could not determine the Arakoon master node
2017-03-20 17:25:55 24100 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 191 - ERROR - Could not determine the Arakoon master node
Traceback (most recent call last):
  File "ovs/lib/nodetype.py", line 391, in demote_node
    offline_nodes=offline_node_ips)
  File "ovs/extensions/db/arakoon/ArakoonInstaller.py", line 550, in shrink_cluster
    arakoon_client = ArakoonInstaller._wait_for_cluster(config=config)
  File "ovs/extensions/db/arakoon/ArakoonInstaller.py", line 794, in _wait_for_cluster
    arakoon_client.nop()
  File "ovs/extensions/db/arakoon/pyrakoon/client.py", line 44, in new_function
    return f(self, *args, **kw)
  File "ovs/extensions/db/arakoon/pyrakoon/client.py", line 160, in nop
    return PyrakoonClient._try(self._identifier, self._client.nop)
  File "ovs/extensions/db/arakoon/pyrakoon/client.py", line 213, in _try
    return_value = method(*args, **kwargs)
  File "<update_argspec>", line 5, in nop
  File "ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 145, in wrapped
    return fun(*args, **kwargs)
  File "ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 624, in nop
    return self._client.nop()
  File "<update_argspec>", line 5, in nop
  File "ovs/extensions/db/arakoon/pyrakoon/pyrakoon/client/utils.py", line 99, in wrapped
    return self._process(message) #pylint: disable=W0212
  File "ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1099, in _process
    connection = self._send_to_master(bytes_)
  File "ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1158, in _send_to_master
    self.determine_master()
  File "ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1201, in determine_master
    raise ArakoonNoMaster
ArakoonNoMaster: Could not determine the Arakoon master node
2017-03-20 17:25:55 24200 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 192 - INFO - Update configurations
2017-03-20 17:25:55 26000 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 193 - INFO - Restarting master node services
2017-03-20 17:25:55 38100 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 194 - INFO - Removing/unconfiguring offline RabbitMQ node
2017-03-20 17:25:55 100000 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 196 - ERROR - 
Failed to forget RabbitMQ cluster node
Traceback (most recent call last):
  File "ovs/lib/nodetype.py", line 431, in demote_node
    client.run(['rabbitmqctl', 'forget_cluster_node', 'rabbit@{0}'.format(storagerouter.name)])
  File "ovs/extensions/generic/sshclient.py", line 61, in inner_function
    return outer_function(self, *args, **kwargs)
  File "ovs/extensions/generic/sshclient.py", line 330, in run
    raise CalledProcessError(exit_code, original_command, stdout)
CalledProcessError: Command ''rabbitmqctl' 'forget_cluster_node' 'rabbit@jef-node2'' returned non-zero exit status 2
2017-03-20 17:25:55 100000 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 197 - ERROR - Command ''rabbitmqctl' 'forget_cluster_node' 'rabbit@jef-node2'' returned non-zero exit status 2
Traceback (most recent call last):
  File "ovs/lib/nodetype.py", line 431, in demote_node
    client.run(['rabbitmqctl', 'forget_cluster_node', 'rabbit@{0}'.format(storagerouter.name)])
  File "ovs/extensions/generic/sshclient.py", line 61, in inner_function
    return outer_function(self, *args, **kwargs)
  File "ovs/extensions/generic/sshclient.py", line 330, in run
    raise CalledProcessError(exit_code, original_command, stdout)
CalledProcessError: Command ''rabbitmqctl' 'forget_cluster_node' 'rabbit@jef-node2'' returned non-zero exit status 2
2017-03-20 17:25:56 00000 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 198 - INFO - Update existing vPools
2017-03-20 17:25:56 00500 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 199 - INFO - Restarting services
2017-03-20 17:25:56 05600 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 200 - DEBUG - 10.100.69.120: Stopping service watcher-framework
2017-03-20 17:25:57 71400 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 201 - DEBUG - 10.100.69.120: Stopped service watcher-framework
2017-03-20 17:25:57 73500 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 202 - DEBUG - 10.100.69.120: Restarting service memcached
2017-03-20 17:25:57 90200 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 203 - DEBUG - 10.100.69.120: Restarted service memcached
2017-03-20 17:25:57 93500 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 204 - DEBUG - 10.100.69.120: Starting service watcher-framework
2017-03-20 17:25:58 48100 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 205 - DEBUG - 10.100.69.120: Started service watcher-framework
2017-03-20 17:25:58 51200 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 206 - DEBUG - 10.100.69.120: Restarting service support-agent
2017-03-20 17:25:58 72300 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 207 - DEBUG - 10.100.69.120: Restarted service support-agent
2017-03-20 17:25:58 74900 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 208 - INFO - Running "nodetype - demote" hooks
2017-03-20 17:25:58 75000 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 209 - INFO - Executing storagedriver._on_demote
2017-03-20 17:25:58 76700 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 210 - INFO - Executing alba._on_demote
2017-03-20 17:25:58 79200 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 211 - INFO - Restarting services
2017-03-20 17:25:58 82300 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 212 - DEBUG - 10.100.69.120: Stopping service watcher-framework
2017-03-20 17:25:58 88200 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 213 - DEBUG - 10.100.69.120: Stopped service watcher-framework
2017-03-20 17:25:58 89800 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 214 - DEBUG - 10.100.69.120: Restarting service memcached
2017-03-20 17:25:59 10500 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 215 - DEBUG - 10.100.69.120: Restarted service memcached
2017-03-20 17:25:59 12500 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 216 - DEBUG - 10.100.69.120: Starting service watcher-framework
2017-03-20 17:25:59 64400 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 217 - DEBUG - 10.100.69.120: Started service watcher-framework
2017-03-20 17:25:59 68000 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 218 - DEBUG - 10.100.69.120: Restarting service support-agent
2017-03-20 17:25:59 91100 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 219 - DEBUG - 10.100.69.120: Restarted service support-agent
2017-03-20 17:25:59 92300 +0100 - jef-node1 - 2884/140531680835328 - lib/node-type - 220 - INFO - Demote complete
2017-03-20 17:25:59 92400 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 221 - INFO - Stopping and removing services
2017-03-20 17:25:59 95700 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 222 - INFO - Running "noderemoval - remove" hooks
2017-03-20 17:25:59 95800 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 223 - INFO - Executing storagedriver._on_remove
2017-03-20 17:26:01 34300 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 224 - INFO - Executing alba._on_remove
2017-03-20 17:26:01 59700 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 225 - INFO - Removing node from model
2017-03-20 17:26:01 87900 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 226 - DEBUG - 10.100.69.120: Stopping service watcher-framework
2017-03-20 17:26:02 37100 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 227 - DEBUG - 10.100.69.120: Stopped service watcher-framework
2017-03-20 17:26:02 39300 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 228 - DEBUG - 10.100.69.120: Restarting service memcached
2017-03-20 17:26:02 60100 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 229 - DEBUG - 10.100.69.120: Restarted service memcached
2017-03-20 17:26:02 63100 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 230 - DEBUG - 10.100.69.120: Starting service watcher-framework
2017-03-20 17:26:03 19600 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 231 - DEBUG - 10.100.69.120: Started service watcher-framework
2017-03-20 17:26:03 23600 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 232 - DEBUG - 10.100.69.120: Restarting service support-agent
2017-03-20 17:26:03 45500 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 233 - DEBUG - 10.100.69.120: Restarted service support-agent
2017-03-20 17:26:03 49700 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 234 - INFO - Successfully removed node

2017-03-20 17:26:03 49800 +0100 - jef-node1 - 2884/140531680835328 - lib/node-removal - 235 - INFO - Remove nodes finished
wimpers commented 7 years ago

@JeffreyDevloo

We are a bit clueless. All nodes were up, but it says:

  Node with IP 10.100.69.121   is unreachable
  Node with IP 10.100.69.122   is unreachable

Can we reproduce this when all nodes are reachable?

JeffreyDevloo commented 7 years ago

That is the point. The nodes were online, but the client could have been cached: the nodes were online, yet the removal reported them as unreachable.

wimpers commented 7 years ago

@JeffreyDevloo is this somehow related to the other SSH timeout issues you are seeing?

JeffreyDevloo commented 6 years ago

@wimpers I can't tell... There was a command that failed, which means the SSHClient could connect to the node, so I suppose not. Either way, what exactly went wrong still has to be verified.

wimpers commented 6 years ago

Might this be related to the SSH caching of the connections?

JeffreyDevloo commented 6 years ago

No, the execution of the command actually worked. If it had been the cached connection that was killed, we would see an EOFError instead.
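
To make the distinction concrete: assuming SSHClient.run raises subprocess's CalledProcessError (as the traceback suggests) and that a killed cached transport would instead surface as an EOFError, the two failure modes separate cleanly (the constructor usage is assumed):

```python
from subprocess import CalledProcessError
from ovs.extensions.generic.sshclient import SSHClient

client = SSHClient('10.100.69.121', username='root')  # assumed constructor
try:
    client.run(['rabbitmqctl', 'forget_cluster_node', 'rabbit@jef-node2'])
except CalledProcessError as ex:
    # The SSH channel worked: the command reached the node and executed,
    # but rabbitmqctl itself exited non-zero (status 2 in the logs above).
    print('remote command failed with exit status {0}'.format(ex.returncode))
except EOFError:
    # A killed cached connection would fail here, before the command
    # ever runs -- which is not what the logs show.
    print('stale cached connection')
```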