Closed AndersTrier closed 1 year ago
What is going on?
/dev/nvme0n1p3
should have enough free space to handle a 100GB allocation. Am I missing something?
Wait, LXD is using a file as backing storage? I never agreed to that.
# pvs
PV             VG         Fmt  Attr PSize   PFree
/dev/loop6     default    lvm2 a--    4.65g       0
/dev/md127     MyVolGroup lvm2 a--  <10.92t       0
/dev/nvme0n1p3 vg0        lvm2 a--   <1.82t <311.59g
# losetup --list
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                                  DIO LOG-SEC
/dev/loop6         0      0         0  0 /var/snap/lxd/common/lxd/disks/default.img   1     512
Never mind. I'll read the documentation and figure out how to make LXD use an existing volume group. It would have been nice to have been asked about that during lxd init.
This page on the LVM tab shows you how to create a pool from an existing volume group:
https://linuxcontainers.org/lxd/docs/master/howto/storage_pools/
I think you could also do this during lxd init, by specifying the volume group as the source when it asks.
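As a sketch of what that looks like outside of lxd init, an LVM-backed pool can be created directly from an existing volume group; the pool name "mypool" and VG name "vg0" below are illustrative:

```shell
# Create an LVM-backed storage pool on top of an existing volume group.
# "mypool" is a hypothetical pool name; "vg0" stands in for your VG.
lxc storage create mypool lvm source=vg0
```

This requires a running LXD daemon, so it is shown here only as a sketch of the command shape.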
Hi @tomponline Thank you for all your work on LXC/LXD!
This is what I ended up doing:
lvcreate --thin --size 150GB vg0 -n lxd-thin-pool
lxc storage create nvmepool lvm source=vg0 lvm.vg.force_reuse=true lvm.thinpool_name=lxd-thin-pool
LXD assumes that it has full control over the volume group. Therefore, you should not maintain any file system entities that are not owned by LXD in an LVM volume group, because LXD might delete them. However, if you need to reuse an existing volume group (for example, because your setup has only one volume group), you can do so by setting the lvm.vg.force_reuse configuration.
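Assuming force_reuse is set as above, one way to see which LVs LXD has created alongside the pre-existing ones is to list the volume group's logical volumes (the VG and pool names below match the earlier commands, but are otherwise illustrative):

```shell
# List all logical volumes in the shared VG; LXD-managed instance
# volumes are distinguishable by their naming scheme.
lvs vg0

# Show the LXD-side view of the same pool.
lxc storage show nvmepool
```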
https://linuxcontainers.org/lxd/docs/master/reference/storage_lvm/
I think using an existing volume group with existing LVs is useful. I get that you worry about accidentally deleting existing LVs, but how about naming LVs managed by LXD something like LXD_Managed_Do_Not_Touch_<container name>_<UUID>?
Thank you for all your work on LXC/LXD!
Thanks! :)
I think using an existing volume group with existing LVs is useful.
Indeed, that is why we have lvm.vg.force_reuse=true, and only for LVM pools, because it is recognised that some users have systems that can only have one volume group (perhaps pre-provisioned by an ISP).
I get that you worry about accidentally deleting existing LVs, but how about naming LVs managed by LXD something like:
LXD_Managed_Do_Not_Touch_<container name>_<UUID>
We do use names that are somewhat unlikely to occur, such as containers_<instance_name>.
However, in theory at least, whichever naming scheme we use runs the risk of overlapping with an existing user's volumes.
It is somewhat academic, though: changing the naming scheme now would be rather complex and disruptive for existing users (potentially breaking existing workflows), and is something we would be unlikely to do.
Required information
Issue description
Resize root partition of container using LVM as storage backend.
# lxc --debug config device override testcontainer root size=100GB
Output: https://pastebin.com/mhtKu3dp
All seems fine:
If I start using the new container, I'll soon start to experience problems.
Backing storage is a 2TB nvme disk.
Various other dmesg outputs:
Another attempt