Closed theoratkin closed 2 years ago
sounds a lot like this one: https://github.com/LINBIT/linstor-proxmox/commit/90c638dfd95cfecf4bc926e7e662c6a47d1f0990
Can you please enable the cache and make sure it is set on all of the drbd entries in your storage.cfg (otherwise it does not work)? You should then see /var/cache/linstor-proxmox/pools being created; otherwise you have to reload some Proxmox services or reboot.
This is it, thank you so much! Setting statuscache fixed the problem (I set it to 60 seconds). Closing the issue.
Maybe it makes sense to enable this setting by default? I don't see any downsides in doing so, not unless Proxmox is going to change its behavior any time soon.
This was a "quick fix" for a customer back then, and as it changes behavior, I hid it behind an extra option.
My hope in the - hm - medium run is that maybe LINSTOR gets more efficient and implements such a cache itself. Then the plugins would not need to implement one.
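To illustrate the idea, here is a rough Python sketch of such a time-based status cache (the file path, function names, and payload here are made up for illustration; the actual plugin is written in Perl and caches under /var/cache/linstor-proxmox/pools):

```python
import json
import os
import time

CACHE_FILE = "/tmp/linstor-pools-cache.json"  # plugin uses /var/cache/linstor-proxmox/pools
CACHE_SECONDS = 60  # corresponds to the statuscache setting


def fetch_pool_status():
    # Placeholder for the expensive LINSTOR API query.
    return {"pool_fast": "ok"}


def get_pool_status():
    """Return the cached pool status if it is still fresh, otherwise refresh it."""
    try:
        age = time.time() - os.path.getmtime(CACHE_FILE)
        if age < CACHE_SECONDS:
            with open(CACHE_FILE) as f:
                return json.load(f)
    except OSError:
        pass  # no cache file yet
    status = fetch_pool_status()
    with open(CACHE_FILE, "w") as f:
        json.dump(status, f)
    return status
```

With a cache like this, repeated status queries within the lifetime window hit the file instead of the controller, which is why Proxmox's frequent polling stops piling up slow LINSTOR calls.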
We have a 3-node Linstor cluster. Currently it has a single storage pool, `pool_fast` (it's on SSDs). We want to add 2 new storage pools, `pool_raid` and `pool_big`, which are RAID0 HDDs with larger capacity. All pools are of type `lvmthin`.

For some reason, when creating those 2 extra storage pools, all volume creation with Linstor slows down, regardless of which storage pool the volume is being created on; even creating on `pool_fast` becomes slower. When creating a 10G volume directly using the Linstor CLI, it goes from approximately 5 seconds to 15. Which is fine by itself, 10 extra seconds is not such a big deal. But when trying to do the same with Proxmox, it slows down significantly, from ~20 seconds to 50-60 seconds. And even worse: most of the time it errors out with the following text:
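One low-tech way to narrow down where the time goes is to time each step separately and compare runs before and after adding the new pools. A small Python sketch (the linstor invocation in the comment is only an example; the resource name is made up):

```python
import shlex
import subprocess
import time


def time_command(cmd: str) -> float:
    """Run a command and return its wall-clock duration in seconds."""
    start = time.monotonic()
    subprocess.run(shlex.split(cmd), check=True, capture_output=True)
    return time.monotonic() - start


# In practice you would time each LINSTOR CLI step, for example:
#   time_command("linstor resource create testvol --auto-place 3")
# Here we time a trivial command just to show usage:
print(f"sleep took {time_command('sleep 0.2'):.2f}s")
```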
Is there any way to track down these slowdowns? Or at least increase the timeout? Currently it seems to be 1 minute, which is apparently too short.