Open dniasoff opened 7 months ago
Thanks for the report. We are aware of this issue with newer ZFS versions and are already working on a fix.
In order to get LINSTOR out of the resizing state, I'd suggest the following steps:

1. Find out the exact current size of the volume definition (VD). If you already know it (e.g. 100GiB), feel free to skip this step and use the very same size argument again in the next step. To find out the exact size of a VD, check `linstor -m vd l -r $rsc_name` and look for `volume_definitions[$your_vlm_nr].size_kib`. If you have jq installed and only have a single VD for the given resource, you could use something like this: `linstor -m vd l -r $rsc_name | jq '.[0][0].volume_definitions[0].size_kib'`. Since the reported size is in KiB, don't forget the `KiB` suffix for the size in the next step.
2. Re-issue the resize with the `linstor vd size ...` command.
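The jq lookup above can also be sketched in Python. The sample JSON below is only an illustration of the nested-list shape that the `.[0][0]` jq path implies; field names other than `volume_definitions` and `size_kib` (and the values themselves) are assumptions, not real LINSTOR output:

```python
import json

# Hypothetical sample of `linstor -m vd l -r $rsc_name` machine-readable
# output. The real output is a nested list, which is why the jq path
# above begins with .[0][0].
sample = json.loads("""
[
  [
    {
      "rsc_name": "my_rsc",
      "volume_definitions": [
        {"vlm_nr": 0, "size_kib": 104857600}
      ]
    }
  ]
]
""")

# Equivalent of: jq '.[0][0].volume_definitions[0].size_kib'
size_kib = sample[0][0]["volume_definitions"][0]["size_kib"]

# Append the KiB suffix so the value can be passed back to `linstor vd size`
print(f"{size_kib}KiB")
```

Here 104857600 KiB corresponds to the 100GiB example above.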
Thanks for a detailed and helpful response.
For now I have configured the storage pool to use a volblocksize of 4k, which seems to solve the problem. I am guessing this might have an impact on disk usage/performance, so I look forward to the fix. It also means any volumes created before the fix will be stuck with a 4k volblocksize.
Hi,
Using LINSTOR Server 1.27.0 on Ubuntu 22.04 with OpenZFS 2.2.2.
I have it integrated with OpenStack Cinder.
When creating a volume from an image with the volume image cache enabled, part of the workflow creates a small volume and then resizes it.
However, I am getting the following error inside LINSTOR.
The volblocksize is the default on my system; I haven't changed it anywhere.
May I suggest that the neatest approach would be for LINSTOR to check the volblocksize and then round up the requested size accordingly.
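That round-up could look like the sketch below. This is our illustration, not anything from the LINSTOR codebase; the premise is that ZFS rejects a zvol size that is not a multiple of the volblocksize, and newer OpenZFS releases default to a larger volblocksize than older ones did:

```python
def round_up_kib(size_kib: int, volblocksize_bytes: int) -> int:
    """Round a size in KiB up to the next multiple of the volblocksize."""
    vb_kib = volblocksize_bytes // 1024
    # Ceiling division without floats: -(-a // b) == ceil(a / b)
    return -(-size_kib // vb_kib) * vb_kib

print(round_up_kib(100, 16 * 1024))  # -> 112: 100 KiB is not 16k-aligned
print(round_up_kib(100, 4 * 1024))   # -> 100: already a multiple of 4 KiB,
                                     #    so this request would succeed as-is
```

The second call also illustrates why a 4k volblocksize avoided the error for this particular workload: the requested size was already 4k-aligned.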
Thanks
Daniel