Closed: kinvaris closed this issue 8 years ago
@kinvaris first, you should also address this in your healthcheck: only delete the test volume once the disk is in the model.
This also needs to be fixed on the product side, since a user might create and delete volumes in quick succession.
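A minimal sketch of that guard, assuming a hypothetical `vdisk_in_model` callable that checks the DAL for the test volume (the real healthcheck would query the OVS model here; the timeout and interval values are illustrative only):

```python
import os
import time

def remove_test_volume(path, vdisk_in_model, timeout=30, interval=1):
    """Delete the healthcheck test volume only once the corresponding
    vdisk has appeared in the DAL model.

    vdisk_in_model: hypothetical callable (path -> bool) that asks the DAL
    whether a vdisk object exists for this device.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if vdisk_in_model(path):
            os.remove(path)   # safe: the model knows about the disk, so the
            return True       # delete event can be matched to a DAL object
        time.sleep(interval)  # give the voldrv create event time to be processed
    return False              # never modelled within the timeout; report failure
```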
2016-07-15 14:06:44 85300 +0200 - ovs100 - 27770/140474725435200 - celery/celery.worker.job - 621 - ERROR - Task ovs.vdisk.resize_from_voldrv[974aab38-3bab-44e2-8e83-89deae08e3b7] raised unexpected: ObjectNotFoundException('Throw location unknown (consider using BOOST_THROW_EXCEPTION)\nDynamic exception type: volumedriverfs::ObjectNotRegisteredException\nstd::exception::what: std::exception\n[volumedriverfs::tag_volume_id*] = 1046deab-7c10-42cb-9cfa-30a39e3b7988\n ',)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
    return self.run(*args, **kwargs)
  File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 69, in new_function
    return function(*args, **kwargs)
  File "/opt/OpenvStorage/ovs/lib/vdisk.py", line 172, in resize_from_voldrv
    VDiskController._set_vdisk_metadata_pagecache_size(vdisk)
  File "/opt/OpenvStorage/ovs/lib/vdisk.py", line 1145, in _set_vdisk_metadata_pagecache_size
    vdisk.storagedriver_client.set_metadata_cache_capacity(str(vdisk.volume_id), num_pages)
ObjectNotFoundException: Throw location unknown (consider using BOOST_THROW_EXCEPTION)
Dynamic exception type: volumedriverfs::ObjectNotRegisteredException
std::exception::what: std::exception
[volumedriverfs::tag_volume_id*] = 1046deab-7c10-42cb-9cfa-30a39e3b7988
Fixed by #728:
Info for QA: reproduced and validated this with the following snippet: for i in {1..100}; do truncate -s 10G /mnt/vpool/test-$i.raw; rm -rf /mnt/vpool/test-$i.raw; done
PASSED:
for i in {1..1000}; do truncate -s 10G /mnt/myvpool/test-$i.raw; rm -rf /mnt/myvpool/test-$i.raw; done (PASSED)
3263cfc6-847e-4f07-9c57-abb32ae83345 (aka test-608.raw) (PASSED)
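For completeness, a hypothetical post-run check QA could use to confirm no broken DAL objects are left behind; the DAL accessor and attribute names follow the OVS layout hinted at by the traceback, but are assumptions here:

```python
# Hypothetical post-run check: after the create/delete loop the model should
# contain no leftover or nameless vdisk objects for the test volumes.
# Import path and attribute names are assumptions based on the OVS DAL.
from ovs.dal.lists.vdisklist import VDiskList

leftovers = [vd for vd in VDiskList.get_vdisks()
             if vd.name is None or (vd.devicename or '').startswith('/test-')]
assert not leftovers, 'Dangling vdisk objects left in the model: {0}'.format(
    [vd.guid for vd in leftovers])
```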
Package version:
ii openvstorage 2.7.2-rev.3867.ec9d46d-1 amd64 openvStorage
ii openvstorage-backend 1.7.2-rev.675.37ca5b8-1 amd64 openvStorage Backend plugin
ii openvstorage-backend-core 1.7.2-rev.675.37ca5b8-1 amd64 openvStorage Backend plugin core
ii openvstorage-backend-webapps 1.7.2-rev.675.37ca5b8-1 amd64 openvStorage Backend plugin Web Applications
ii openvstorage-cinder-plugin 1.2.2-rev.32.948a8c1-1 amd64 OpenvStorage Cinder plugin for OpenStack
ii openvstorage-core 2.7.2-rev.3867.ec9d46d-1 amd64 openvStorage core
ii openvstorage-hc 1.7.2-rev.675.37ca5b8-1 amd64 openvStorage Backend plugin HyperConverged
ii openvstorage-health-check 2.0.0-rev.117.3212ea9-1 amd64 Open vStorage HealthCheck
ii openvstorage-sdm 1.6.2-rev.330.f06c8de-1 amd64 Open vStorage Backend ASD Manager
ii openvstorage-webapps 2.7.2-rev.3867.ec9d46d-1 amd64 openvStorage Web Applications
Branch: unstable
Setup: 3-node setup
The healthcheck truncates (creates) a volume on a vPool to test the flow and then deletes it again. Because the create and delete follow each other very quickly, the DAL objects are sometimes not cleaned up, which leaves us with DAL vdisk objects without a name, broken objects, and so on.
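For reference, a hedged sketch (not the actual #728 patch) of the kind of defensive handling this implies in the resize event path; the exception import path is an assumption based on the traceback above:

```python
import logging

# Import path is an assumption; the traceback only shows that the exception
# originates from the volumedriver storagedriver client.
from volumedriver.storagerouter.storagerouterclient import ObjectNotFoundException

logger = logging.getLogger('ovs.lib.vdisk')

def set_pagecache_size_if_volume_exists(vdisk, num_pages):
    """Apply the metadata cache capacity only if the volume still exists."""
    try:
        vdisk.storagedriver_client.set_metadata_cache_capacity(str(vdisk.volume_id), num_pages)
    except ObjectNotFoundException:
        # The volume was deleted between the voldrv event and its processing
        # (fast create/delete); skip instead of failing the celery task.
        logger.info('Volume %s is gone, skipping metadata cache resize', vdisk.volume_id)
```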
This was noticed on Fargo, but I will retest it on unstable to confirm.