wimpers closed this issue 7 years ago
openvstorage-2.7.3-rev.4074.c43bba5
@QA: It should now be possible to move a volume through the GUI and through the API.
Normal moving case. The API will use None as a target.
Testing code
# Copyright (C) 2016 iNuron NV
#
# This file is part of Open vStorage Open Source Edition (OSE),
# as available from
#
# http://www.openvstorage.org and
# http://www.openvstorage.com.
#
# This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License v3 (GNU AGPLv3)
# as published by the Free Software Foundation, in version 3 as it comes
# in the LICENSE.txt file of the Open vStorage OSE distribution.
#
# Open vStorage is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY of any kind.
from ci.helpers.api import OVSClient


class VDiskTest(object):
    def __init__(self):
        self.api = OVSClient('10.100.199.151', 'admin', 'admin')
        self.vdisk_guid = "f395d9bc-b7b3-441c-8028-011fd1d4bd68"
        # Simple manual mapping for testing
        self.nodes = {
            "node1": "0773a2bb-93fe-4a3d-b4ab-ccdd356ef2d4",
            "node2": "75219e4b-05b0-4608-a97a-275d7f3d364e",
            "node3": "6ced823f-c12d-47e1-ab23-e64c0c32b6f6"
        }

    def run(self):
        # Run all test cases
        pass

    def test_move(self, node):
        task_success = False
        if node in self.nodes:
            task_success = self._execute_move(self.vdisk_guid, self.nodes[node])
        else:
            # Breaking cases: pass the raw value as target
            task_success = self._execute_move(self.vdisk_guid, node)
        # Should always be True, or an error will have been raised during the move
        if task_success is True:
            print "Successfully moved the vdisk"

    def _execute_move(self, vdisk_guid, target_storagerouter_guid):
        """
        Execute the move call
        :param vdisk_guid: guid of the vdisk to move
        :param target_storagerouter_guid: storagerouter guid of the target
        :return: True when the move task succeeded
        """
        data = {"target_storagerouter_guid": target_storagerouter_guid}
        task_guid = self.api.post('/vdisks/{0}/move'.format(vdisk_guid), data)
        task_result = self.api.wait_for_task(task_id=task_guid, timeout=30)
        if not task_result[0]:
            error_msg = "Moving vdisk '{0}' to storagerouter '{1}' failed with '{2}'".format(
                vdisk_guid, target_storagerouter_guid, task_result[1])
            raise RuntimeError(error_msg)
        return task_result[0]


if __name__ == "__main__":
    VDiskTest().run()
Supplying target None, as I expect no matching StorageDriver to be found.
In [14]: VDiskTest().test_move(None)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-14-aefc3723d67b> in <module>()
----> 1 VDiskTest().test_move(None)
<ipython-input-13-96c0b51e8c8e> in test_move(self, node)
38 else:
39 # Breaking cases
---> 40 self._execute_move(self.vdisk_guid, node)
41 def _execute_move(self, vdisk_guid, target_storagerouter_guid):
42 """
<ipython-input-13-96c0b51e8c8e> in _execute_move(self, vdisk_guid, target_storagerouter_guid)
54 if not task_result[0]:
55 error_msg = "Moving vdisk '{0}' to storagerouter '{1}' failed with '{2}'".format(vdisk_guid, target_storagerouter_guid, task_result[1])
---> 56 raise RuntimeError(error_msg)
57 return task_result[0]
58
RuntimeError: Moving vdisk 'f395d9bc-b7b3-441c-8028-011fd1d4bd68' to storagerouter 'None' failed with 'Failed to find the matching StorageDriver'
Expected the API to fail. Test passed.
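For completeness, this breaking case could be asserted programmatically instead of being verified by reading the traceback. A minimal sketch, not tied to the real OVSClient; the `expect_runtime_error` helper and the `fake_move` stand-in are hypothetical:

```python
def expect_runtime_error(fn, *args):
    # Returns True when fn(*args) raises RuntimeError, False otherwise.
    try:
        fn(*args)
    except RuntimeError:
        return True
    return False


def fake_move(vdisk_guid, target_storagerouter_guid):
    # Stand-in for VDiskTest._execute_move: mimics the failure observed
    # above when no matching StorageDriver is found for the target.
    if target_storagerouter_guid is None:
        raise RuntimeError('Failed to find the matching StorageDriver')
    return True


print(expect_runtime_error(fake_move, 'f395d9bc-b7b3-441c-8028-011fd1d4bd68', None))
```

With a helper like this, the breaking cases become plain assertions instead of transcripts to eyeball.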
Normal moving case.
In [16]: VDiskTest().test_move('node2')
Successfully moved the vdisk
Test passed.
Move to the same storagedriver
Unable to select the same storagedriver (expected).
In [16]: VDiskTest().test_move('node1')
Successfully moved the vdisk
Test passed.
Normal moving case but a node goes down.
In [19]: VDiskTest().test_move('node2')
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-19-bc262f711436> in <module>()
----> 1 VDiskTest().test_move('node2')
<ipython-input-15-38b5d95079fc> in test_move(self, node)
36 task_success = False
37 if node in self.nodes:
---> 38 task_success = self._execute_move(self.vdisk_guid, self.nodes[node])
39 else:
40 # Breaking cases
<ipython-input-15-38b5d95079fc> in _execute_move(self, vdisk_guid, target_storagerouter_guid)
56 task_guid = self.api.post('/vdisks/{0}/move'.format(vdisk_guid), data)
57
---> 58 task_result = self.api.wait_for_task(task_id=task_guid, timeout=30)
59 if not task_result[0]:
60 error_msg = "Moving vdisk '{0}' to storagerouter '{1}' failed with '{2}'".format(vdisk_guid, target_storagerouter_guid, task_result[1])
/opt/OpenvStorage/ci/helpers/api.py in wait_for_task(self, task_id, timeout)
251 while finished is False:
252 if timeout is not None and timeout < (time.time() - start):
--> 253 raise RuntimeError('Waiting for task {0} has timed out.'.format(task_id))
254 task_metadata = self.get('/tasks/{0}/'.format(task_id))
255 finished = task_metadata['status'] in ('FAILURE', 'SUCCESS')
RuntimeError: Waiting for task 1164f3d8-70cd-4fe4-a50c-e55240e51408 has timed out.
The timeout error is raised by the API client itself, not by the move task.
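The client-side polling loop visible in the `ci/helpers/api.py` traceback can be sketched as follows. `get_task_metadata` here is a stand-in for the client's `self.get('/tasks/{0}/'.format(task_id))` call, and the exact metadata field names (`result`) beyond `status` are assumptions; the key point is that the timeout is enforced on the client, so the celery task keeps running server-side after this raises:

```python
import time


def wait_for_task(get_task_metadata, task_id, timeout=None):
    # Polls the task until it reaches a terminal state, mirroring the loop
    # shown in the traceback: FAILURE or SUCCESS ends the wait, and a
    # client-side timeout raises without cancelling the server-side task.
    start = time.time()
    finished = False
    while finished is False:
        if timeout is not None and timeout < (time.time() - start):
            raise RuntimeError('Waiting for task {0} has timed out.'.format(task_id))
        task_metadata = get_task_metadata(task_id)
        finished = task_metadata['status'] in ('FAILURE', 'SUCCESS')
        if not finished:
            time.sleep(1)
    # Return shape matches how the testing code consumes it:
    # (success flag, result/error payload)
    return task_metadata['status'] == 'SUCCESS', task_metadata.get('result')
```

This is why the move in this test case still produced the celery failure below, a minute after the client had already given up.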
Nov 04 14:16:45 ovs-node-1 celery[2930]: 2016-11-04 14:16:45 42300 +0100 - ovs-node-1 - 2930/140401147426560 - celery/celery.worker.job - 1180 - DEBUG - Task accepted: ovs.vdisk.move[1164f3d8-70cd-4fe4-a50c-e55240e51408] pid:2850
...
Nov 04 14:18:18 ovs-node-1 celery[2930]: 2016-11-04 14:18:18 88000 +0100 - ovs-node-1 - 28501/140401147426560 - extensions/pyrakoon_client - 1178 - WARNING - Arakoon call get took 93.45s
Nov 04 14:18:24 ovs-node-1 celery[2930]: 2016-11-04 14:18:24 87700 +0100 - ovs-node-1 - 28501/140401147426560 - lib/vdisk - 1179 - ERROR - Failed to move vDisk myvdisk01
Nov 04 14:18:24 ovs-node-1 celery[2930]: Traceback (most recent call last):
Nov 04 14:18:24 ovs-node-1 celery[2930]: File "/opt/OpenvStorage/ovs/lib/vdisk.py", line 468, in move
Nov 04 14:18:24 ovs-node-1 celery[2930]: force_restart=False)
Nov 04 14:18:24 ovs-node-1 celery[2930]: RuntimeError: failed to send XMLRPC request migrateVolume
Nov 04 14:18:25 ovs-node-1 celery[2930]: 2016-11-04 14:18:25 38800 +0100 - ovs-node-1 - 28501/140401147426560 - celery/celery.redirected - 1181 - WARNING - 2016-11-04 14:18:25 38800 +0100 - ovs-node-1 - 28501/140401147426560 - extensions/volatile mutex - 1180 - WARNING - A lock on ovs_lock_messaging was kept for 0.502228975296 sec
Nov 04 14:18:25 ovs-node-1 celery[2930]: 2016-11-04 14:18:25 39100 +0100 - ovs-node-1 - 2930/140401147426560 - celery/celery.worker.job - 1192 - ERROR - Task ovs.vdisk.move[1164f3d8-70cd-4fe4-a50c-e55240e51408] raised unexpected: Exception('Moving vDisk myvdisk01 failed',)
Nov 04 14:18:25 ovs-node-1 celery[2930]: Traceback (most recent call last):
Nov 04 14:18:25 ovs-node-1 celery[2930]: File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
Nov 04 14:18:25 ovs-node-1 celery[2930]: R = retval = fun(*args, **kwargs)
Nov 04 14:18:25 ovs-node-1 celery[2930]: File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
Nov 04 14:18:25 ovs-node-1 celery[2930]: return self.run(*args, **kwargs)
Nov 04 14:18:25 ovs-node-1 celery[2930]: File "/opt/OpenvStorage/ovs/lib/vdisk.py", line 471, in move
Nov 04 14:18:25 ovs-node-1 celery[2930]: raise Exception('Moving vDisk {0} failed'.format(vdisk.name))
Nov 04 14:18:25 ovs-node-1 celery[2930]: Exception: Moving vDisk myvdisk01 failed
The task eventually fails on its own.
The vdisk did not move to node2, and the GUI did not indicate that it did either. Test passed.
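The "did not move" check could also be done without the GUI by reading the vdisk back through the API before and after. A sketch against a generic GET callable (e.g. `OVSClient.get`); the `/vdisks/{guid}/` endpoint shape and the `storagerouter_guid` field name are assumptions here:

```python
def vdisk_stayed_put(api_get, vdisk_guid, expected_storagerouter_guid):
    # api_get is any callable performing a GET against the framework API.
    # Returns True when the vdisk is still served by the expected
    # storagerouter (i.e. the failed move left it in place).
    vdisk = api_get('/vdisks/{0}/'.format(vdisk_guid))
    return vdisk.get('storagerouter_guid') == expected_storagerouter_guid
```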
- openvstorage 2.7.4-rev.4254.b181bf9-1 amd64 openvStorage
- openvstorage-backend 1.7.4-rev.801.33cbb60-1 amd64 openvStorage Backend plugin
- openvstorage-backend-core 1.7.4-rev.801.33cbb60-1 amd64 openvStorage Backend plugin core
- openvstorage-backend-webapps 1.7.4-rev.801.33cbb60-1 amd64 openvStorage Backend plugin Web Applications
- openvstorage-core 2.7.4-rev.4254.b181bf9-1 amd64 openvStorage core
- openvstorage-hc 1.7.4-rev.801.33cbb60-1 amd64 openvStorage Backend plugin HyperConverged
- openvstorage-sdm 1.6.4-rev.414.20eee54-1 amd64 Open vStorage Backend ASD Manager
- openvstorage-webapps 2.7.4-rev.4254.b181bf9-1 amd64 openvStorage Web Applications
The volumedriver has a call to explicitly move volumes between volumedrivers.
This ticket is to expose that call in the framework as an API call.
In the GUI (vDisk detail page):
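On the API side, the exposed call (as exercised in the testing code above) is a POST to the move endpoint. A small helper that only builds the request, so the shape can be checked without a live cluster:

```python
def build_move_request(vdisk_guid, target_storagerouter_guid):
    # Returns the (path, payload) pair that OVSClient.post expects for a
    # vdisk move, matching the call used in the testing code above.
    path = '/vdisks/{0}/move'.format(vdisk_guid)
    payload = {'target_storagerouter_guid': target_storagerouter_guid}
    return path, payload


# Usage (api being an authenticated OVSClient):
#     task_guid = api.post(*build_move_request(vdisk_guid, target_guid))
```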