openvstorage / framework

The Framework is a set of components and tools that provides the user with an interface (GUI / API) to set up, extend and manage an Open vStorage platform.

Vdisk still present in the model when deleting the VM via VMM #606

Closed jeroenmaelbrancke closed 8 years ago

jeroenmaelbrancke commented 8 years ago

When deleting a VM via VMM (Virtual Machine Manager), the vdisk is still present in the model, even though the raw disk has been deleted by the volumedriver.
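A quick way to spot such stale entries from the ovs Python shell (a minimal sketch; the mountpoint path and the exact attribute names are assumptions based on the usual DAL layout):

import os
from ovs.dal.lists.vdisklist import VDiskList

MOUNTPOINT = '/mnt/be-vpool'  # illustrative vPool mountpoint

for vdisk in VDiskList.get_vdisks():
    path = os.path.join(MOUNTPOINT, vdisk.devicename.lstrip('/'))
    if not os.path.exists(path):
        # Entry survives in the model although the backing file is gone
        print('stale model entry: {0} ({1})'.format(vdisk.name, vdisk.volume_id))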

ovs version:

root@cmp01:~# apt-cache policy openvstorage-backend
openvstorage-backend:
  Installed: 1.7.0-rev.619.66682a2-1

ovs-workers.log

2016-06-14 10:07:19 14600 +0200 - cmp04 - 31763/140262265288512 - celery/celery.worker.strategy - 11508 - INFO - Received task: ovs.vdisk.delete_from_voldrv[6a9e33b9-a462-4d57-98a3-8a689c783d88]
2016-06-14 10:07:19 14700 +0200 - cmp04 - 31763/140262265288512 - celery/celery.pool - 11509 - DEBUG - TaskPool: Apply <function _fast_trace_task at 0x7f91551a1c80> (args:('ovs.vdisk.delete_from_voldrv', '6a9e33b9-a462-4d57-98a3-8a689c783d88', [], {'volumename': u'0f861fa4-b0cf-495a-b8da-1864dac04744', 'storagedriver_id': u'be-vpoolcssH2AvEzG8Dva3Q'}, {'utc': True, u'is_eager': False, 'chord': None, u'group': None, 'args': [], 'retries': 0, u'delivery_info': {u'priority': 0, u'redelivered': False, u'routing_key': u'generic.#', u'exchange': u'generic'}, 'expires': None, u'hostname': 'celery@cmp04', 'task': 'ovs.vdisk.delete_from_voldrv', 'callbacks': None, u'correlation_id': u'6a9e33b9-a462-4d57-98a3-8a689c783d88', 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {'volumename': u'0f861fa4-b0cf-495a-b8da-1864dac04744', 'storagedriver_id': u'be-vpoolcssH2AvEzG8Dva3Q'}, 'eta': None, u'reply_to': u'0b3c868d-9b3c-3087-bdeb-27c727fc5a8c', 'id': '6a9e33b9-a462-4d57-98a3-8a689c783d88', u'headers': {}}) kwargs:{})
2016-06-14 10:07:19 14800 +0200 - cmp04 - 31763/140262265288512 - celery/celery.worker.job - 11510 - DEBUG - Task accepted: ovs.vdisk.delete_from_voldrv[6a9e33b9-a462-4d57-98a3-8a689c783d88] pid:25078
2016-06-14 10:07:19 27300 +0200 - cmp04 - 25078/140262265288512 - celery/celery.redirected - 11506 - WARNING - 2016-06-14 10:07:19 27300 +0200 - cmp04 - 25078/140262265288512 - log/volumedriver_task - 11505 - INFO - [ovs.lib.vdisk.delete_from_voldrv] - [] - {"volumename": "0f861fa4-b0cf-495a-b8da-1864dac04744", "storagedriver_id": "be-vpoolcssH2AvEzG8Dva3Q"} - {"storagedriver": "14c53feb-7487-4f11-9f03-f374ceafb80f"}
2016-06-14 10:07:19 40400 +0200 - cmp04 - 25078/140262265288512 - celery/celery.redirected - 11508 - WARNING - 2016-06-14 10:07:19 40400 +0200 - cmp04 - 25078/140262265288512 - extensions/kvm sdk - 11507 - DEBUG - Init libvirt
2016-06-14 10:07:19 68900 +0200 - cmp04 - 25078/140262265288512 - celery/celery.redirected - 11510 - WARNING - 2016-06-14 10:07:19 68900 +0200 - cmp04 - 25078/140262265288512 - extensions/kvm sdk - 11509 - DEBUG - Init complete
2016-06-14 10:07:24 69600 +0200 - cmp04 - 25078/140262265288512 - lib/vdisk - 11511 - INFO - Disk test_boe.raw still exists, ignoring delete
2016-06-14 10:07:24 69700 +0200 - cmp04 - 25078/140262265288512 - celery/celery.redirected - 11513 - WARNING - 2016-06-14 10:07:24 69700 +0200 - cmp04 - 25078/140262265288512 - extensions/volatile mutex - 11512 - WARNING - A lock on ovs_lock_0f861fa4-b0cf-495a-b8da-1864dac04744_test_boe.raw was kept for 5.35572814941 sec
2016-06-14 10:07:24 75200 +0200 - cmp04 - 31763/140262265288512 - celery/celery.worker.job - 11511 - INFO - Task ovs.vdisk.delete_from_voldrv[6a9e33b9-a462-4d57-98a3-8a689c783d88] succeeded in 5.60460097063s: None
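The interesting line in this trace is "Disk test_boe.raw still exists, ignoring delete": when the volumedriver's delete event is handled, the worker still sees the .raw file on the vPool mountpoint and therefore keeps the vdisk in the model. A minimal sketch of that kind of guard, with hypothetical names (this is not the actual ovs.lib.vdisk code):

import os

def delete_from_voldrv_sketch(vdisk, mountpoint):
    # If the file is still visible on the FUSE mountpoint, treat the event as a
    # rename/race and keep the model entry; otherwise remove it from the model.
    path = os.path.join(mountpoint, vdisk.devicename.lstrip('/'))
    if os.path.exists(path):
        print('Disk {0} still exists, ignoring delete'.format(vdisk.name))
        return
    vdisk.delete()  # drop the object from the model

If the file only disappears after this check (for example because the VMM removes it slightly later than the volume itself), the model entry is never cleaned up, which matches the behaviour reported here.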

volumedriver.log

2016-06-14 10:07:19 008476 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/VolManager - 000000000000d9fd - notice - Destroy Volume, VolumeId: 0f861fa4-b0cf-495a-b8da-1864dac04744, delete local data: DeleteLocalData::T, remove volume completely RemoveVolumeCompletely::T, delete namespace DeleteVolumeNamespace::T, force deletion ForceVolumeDeletion::F, START
2016-06-14 10:07:19 008686 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/Volume - 000000000000d9fe - info - destroy: 0f861fa4-b0cf-495a-b8da-1864dac04744: destroying volume: delete local data DeleteLocalData::T, remove volume completely RemoveVolumeCompletely::T, force_volume deletion ForceVolumeDeletion::F
2016-06-14 10:07:19 008721 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/Volume - 000000000000d9ff - info - stopPrefetch_: 0f861fa4-b0cf-495a-b8da-1864dac04744: Stopping the prefetching
2016-06-14 10:07:19 008772 +0200 - cmp01 - 28196/0x00007f85157fa700 - volumedriverfs/PrefetchData - 000000000000da00 - info - run_: Woke up too early, 't was a terrible mistake
2016-06-14 10:07:19 008859 +0200 - cmp01 - 28196/0x00007f85157fa700 - volumedriverfs/PrefetchData - 000000000000da01 - info - run_: Stopping prefetch thread 
2016-06-14 10:07:19 008925 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/Volume - 000000000000da02 - info - destroy: 0f861fa4-b0cf-495a-b8da-1864dac04744: Unregistering volume from ClusterCache
2016-06-14 10:07:19 008961 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/FailOverCacheAsyncBridge - 000000000000da03 - info - newCache: newCache
2016-06-14 10:07:19 056277 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/MDSMetaDataBackend - 000000000000da04 - info - ~MDSMetaDataBackend: 0f861fa4-b0cf-495a-b8da-1864dac04744: used clusters: 451650
2016-06-14 10:07:19 056386 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/MetaDataServerDataBase - 000000000000da05 - info - drop: dropping 0f861fa4-b0cf-495a-b8da-1864dac04744
2016-06-14 10:07:19 056417 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/MetaDataServerRocksTable - 000000000000da06 - info - drop: 0f861fa4-b0cf-495a-b8da-1864dac04744: dropping the table
2016-06-14 10:07:19 056491 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/RocksLogger - 000000000000da07 - info - /mnt/ssd1/be-vpool_db_mds_1: Dropped column family with id 1

2016-06-14 10:07:19 056527 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/MetaDataServerTable - 000000000000da08 - info - ~Table: 0f861fa4-b0cf-495a-b8da-1864dac04744: bye
2016-06-14 10:07:19 056941 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/RocksLogger - 000000000000da09 - info - /mnt/ssd1/be-vpool_db_mds_1: EVENT_LOG_v1 {"time_micros": 1465891639056933, "job": 0, "event": "table_file_deletion", "file_number": 115}
2016-06-14 10:07:19 057943 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/DataStoreNG - 000000000000da0a - info - destroy: 0f861fa4-b0cf-495a-b8da-1864dac04744: destroying DataStore, DeleteLocalData::T
2016-06-14 10:07:19 092730 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/SCOCacheMountPoint - 000000000000da0b - info - removeNamespace: "/mnt/ssd1/be-vpool_write_sco_1": removing namespace 0f861fa4-b0cf-495a-b8da-1864dac04744 from mountpoint
2016-06-14 10:07:19 093259 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/Volume - 000000000000da0c - info - destroy: 0f861fa4-b0cf-495a-b8da-1864dac04744: Unregistering volume from ClusterCache
2016-06-14 10:07:19 093314 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000da0d - info - Logger: Entering deleteNamespace 0f861fa4-b0cf-495a-b8da-1864dac04744
2016-06-14 10:07:19 102165 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000da0e - info - ~Logger: Exiting deleteNamespace for 0f861fa4-b0cf-495a-b8da-1864dac04744
2016-06-14 10:07:19 102232 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/VolManager - 000000000000da0f - notice - Destroy Volume, VolumeId: 0f861fa4-b0cf-495a-b8da-1864dac04744, delete local data: DeleteLocalData::T, remove volume completely RemoveVolumeCompletely::T, delete namespace DeleteVolumeNamespace::T, force deletion ForceVolumeDeletion::F, FINISHED
2016-06-14 10:07:19 102266 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/Volume - 000000000000da10 - info - ~Volume: 0f861fa4-b0cf-495a-b8da-1864dac04744: Destructor of 0x7f850003e000
2016-06-14 10:07:19 102297 +0200 - cmp01 - 28196/0x00007f8514ff9700 - volumedriverfs/Volume - 000000000000da11 - info - stopPrefetch_: 0f861fa4-b0cf-495a-b8da-1864dac04744: Stopping the prefetching

vpool:

root@cmp01:~# ls /mnt/be-vpool/
cssH2AvEzG8Dva3Q
root@cmp01:~# ls /mnt/be-vpool/cssH2AvEzG8Dva3Q/
root@cmp01:~# logout

(screenshot: vdisk)

khenderick commented 8 years ago

@jeroenmaelbrancke, can you try to reproduce this on the latest unstable? Since the Hypervisor and vMachine removal, this code has had a major rework, so this issue should be resolved.

JeffreyDevloo commented 8 years ago

Steps
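A minimal reproduction sketch using libvirt-python, assuming a KVM domain named test-vm whose raw disk lives on the vPool mountpoint (the domain name and path are illustrative):

import os
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('test-vm')      # VM whose disk lives on the vPool
if dom.isActive():
    dom.destroy()                       # stop the running domain
dom.undefine()                          # remove the domain definition, as VMM does
os.remove('/mnt/vm-vpool/test-vm.raw')  # delete its raw disk on the vPool
conn.close()

# Afterwards the vdisk should also disappear from the model, e.g. via the ovs shell:
#   from ovs.dal.lists.vdisklist import VDiskList
#   print([vd.name for vd in VDiskList.get_vdisks()])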

Output

2016-08-26 10:07:38 917089 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/VolManager - 0000000000000754 -  notice - Destroy Volume, VolumeId: d19fa409-7041-4326-97fb-31dc5c3d30c0, delete local data: DeleteLocalData::T, remove volume completely RemoveVolumeCompletely::T, delete namespace DeleteVolumeNamespace::T, force deletion ForceVolumeDeletion::F, START
2016-08-26 10:07:38 919613 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/Volume - 0000000000000755 -  info - destroy: d19fa409-7041-4326-97fb-31dc5c3d30c0: destroying volume: delete local data DeleteLocalData::T, remove volume completely RemoveVolumeCompletely::T, force_volume deletion ForceVolumeDeletion::F
2016-08-26 10:07:38 919689 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/Volume - 0000000000000756 -  info - stopPrefetch_: d19fa409-7041-4326-97fb-31dc5c3d30c0: Stopping the prefetching
2016-08-26 10:07:38 919760 +0200 - ovs-node1 - 2661/0x00007f5318bf1700 - volumedriverfs/PrefetchData - 0000000000000757 -  info - run_: Woke up too early, 't was a terrible mistake
2016-08-26 10:07:38 919793 +0200 - ovs-node1 - 2661/0x00007f5318bf1700 - volumedriverfs/PrefetchData - 0000000000000758 -  info - run_: Stopping prefetch thread 
2016-08-26 10:07:38 919863 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/Volume - 0000000000000759 -  info - destroy: d19fa409-7041-4326-97fb-31dc5c3d30c0: Unregistering volume from ClusterCache
2016-08-26 10:07:38 919903 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/FailOverCacheAsyncBridge - 000000000000075a -  info - newCache: newCache
2016-08-26 10:07:38 919931 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/FailOverCacheProxy - 000000000000075b -  info - delete_failover_dir: Setting delete_failover_dir_ to true
2016-08-26 10:07:38 919990 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/FailOverCacheProxy - 000000000000075c -  info - unregister_: Deleting failover dir for d19fa409-7041-4326-97fb-31dc5c3d30c0
2016-08-26 10:07:38 929477 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/MDSMetaDataBackend - 000000000000075d -  info - ~MDSMetaDataBackend: d19fa409-7041-4326-97fb-31dc5c3d30c0: used clusters: 420919
2016-08-26 10:07:38 929589 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/MetaDataServerDataBase - 000000000000075e -  info - drop: dropping d19fa409-7041-4326-97fb-31dc5c3d30c0
2016-08-26 10:07:38 929626 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/MetaDataServerRocksTable - 000000000000075f -  info - drop: d19fa409-7041-4326-97fb-31dc5c3d30c0: dropping the table
2016-08-26 10:07:38 929691 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/RocksLogger - 0000000000000760 -  info - /mnt/hdd1/vm-vpool_db_mds_1: Dropped column family with id 6

2016-08-26 10:07:38 929731 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/MetaDataServerTable - 0000000000000761 -  info - ~Table: d19fa409-7041-4326-97fb-31dc5c3d30c0: bye
2016-08-26 10:07:38 929860 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/RocksLogger - 0000000000000762 -  info - /mnt/hdd1/vm-vpool_db_mds_1: EVENT_LOG_v1 {"time_micros": 1472198858929848, "job": 0, "event": "table_file_deletion", "file_number": 908}
2016-08-26 10:07:38 929977 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/RocksLogger - 0000000000000763 -  info - /mnt/hdd1/vm-vpool_db_mds_1: EVENT_LOG_v1 {"time_micros": 1472198858929968, "job": 0, "event": "table_file_deletion", "file_number": 907}
2016-08-26 10:07:38 930967 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/DataStoreNG - 0000000000000764 -  info - destroy: d19fa409-7041-4326-97fb-31dc5c3d30c0: destroying DataStore, DeleteLocalData::T
2016-08-26 10:07:38 944991 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/SCOCacheMountPoint - 0000000000000765 -  info - removeNamespace: "/mnt/hdd4/vm-vpool_write_sco_1": removing namespace d19fa409-7041-4326-97fb-31dc5c3d30c0 from mountpoint
2016-08-26 10:07:38 946074 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/Volume - 0000000000000766 -  info - destroy: d19fa409-7041-4326-97fb-31dc5c3d30c0: Unregistering volume from ClusterCache
2016-08-26 10:07:38 946145 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000000767 -  info - Logger: Entering deleteNamespace d19fa409-7041-4326-97fb-31dc5c3d30c0
2016-08-26 10:07:38 957206 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000000768 -  info - ~Logger: Exiting deleteNamespace for d19fa409-7041-4326-97fb-31dc5c3d30c0
2016-08-26 10:07:38 957277 +0200 - ovs-node1 - 2661/0x00007f52b6ffd700 - volumedriverfs/VolManager - 0000000000000769 -  notice - Destroy Volume, VolumeId: d19fa409-7041-4326-97fb-31dc5c3d30c0, delete local data: DeleteLocalData::T, remove volume completely RemoveVolumeCompletely::T, delete namespace DeleteVolumeNamespace::T, force deletion ForceVolumeDeletion::F, FINISHED

Also deleted from the GUI:

(screenshot: selection_050)

And from the vpool:

root@ovs-node1:~# ls /mnt/vm-vpool/
vm-vdisk.raw

Test result

Test passed. Could not reproduce the issue. Closing.