Closed: jsnjack closed this issue 4 years ago
Do you have that source path in /dev/local/?
@tomponline giving you this one to take a look at
No, /dev/local/ does not exist and /var/snap/lxd/common/lxd/storage-pools/local/containers/c1 directory is empty
@jsnjack can you post the output of lvs, please?
Also @stgraber, I notice the source property is missing from the storage pool config in this case, so this is likely the issue. Is this expected with a cluster config?
Output of the lvs command:

  LV                                                                      VG    Attr       LSize   Pool        Origin                                                                  Data% Meta% Move Log Cpy%Sync Convert
  LXDThinPool                                                             local twi---tz-- <11.97g
  containers_c1                                                           local Vwi---tz--  <9.32g LXDThinPool images_793ac61f572b7512805b91795936e5dfbd4608b312a399b95b74ecf42f35c402
  containers_c3                                                           local Vwi---tz--  <9.32g LXDThinPool images_793ac61f572b7512805b91795936e5dfbd4608b312a399b95b74ecf42f35c402
  images_793ac61f572b7512805b91795936e5dfbd4608b312a399b95b74ecf42f35c402 local Vwi---tz--  <9.32g LXDThinPool
OK, so the container volumes are there, that's good. How did you create the storage pool local? I believe the issue is that the container volumes are not marked as active (they should have an 'a' in the Attr column), which in turn is why they are not in /dev/local. Perhaps the volume group itself is not active; can you show the output of vgs, please?
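For reference, the activation state can be read straight out of that attr string: its 5th character is the activation bit. A minimal sketch, using a sample attr value copied from the lvs output above (the parsing is my illustration, not an LVM tool):

```shell
# The 5th character of an LVM lv_attr string is the activation bit:
# 'a' means the volume is active, '-' means it is not.
attr="Vwi---tz--"   # sample value for containers_c1 from the lvs output above

case "${attr:4:1}" in
  a) state="active" ;;
  *) state="inactive" ;;
esac
echo "$state"   # → inactive
```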
I notice that this is an old driver behaviour we should probably also handle in the new driver: if lvm.vg_name is missing from the config, assume it is the storage pool name.
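A minimal sketch of that fallback, with hypothetical variable names of my own (LXD itself implements this in Go, not shell):

```shell
# Hypothetical sketch: use lvm.vg_name from the pool config when set,
# otherwise fall back to the storage pool's own name.
pool_name="local"
lvm_vg_name=""   # empty, as in this report's pool config

vg_name="${lvm_vg_name:-$pool_name}"
echo "$vg_name"   # → local
```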
Anyway, apparently running vgchange -ay with no volume group argument activates all of the volume groups, so this shouldn't be preventing the volumes from being accessible.
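To illustrate the argument semantics being discussed, here is a mock function standing in for vgchange (the real command needs root and an actual LVM setup, so this only mimics the behaviour described above):

```shell
# Mock of vgchange -ay's argument handling: with no VG name it applies
# to every volume group, otherwise only to the named group(s).
activate() {
  if [ "$#" -eq 0 ]; then
    echo "activating all volume groups"
  else
    echo "activating volume group(s): $*"
  fi
}

activate          # → activating all volume groups
activate local    # → activating volume group(s): local
```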
client@node2:~$ sudo vgs
WARNING: PV /dev/loop2 in VG local is using an old PV header, modify the VG to update.
  VG    #PV #LV #SN Attr   VSize   VFree
  local   1   4   0 wz--n- <13.97g     0
I created the storage pool by following the instructions from the lxd init command.
@jsnjack if you run vgchange -ay local and then repeat vgs, please can you show the output? Also, does /dev/local then contain the volume files?
Thanks @tomponline! That solved the issue:
client@node2:~$ sudo vgchange -ay local
WARNING: PV /dev/loop2 in VG local is using an old PV header, modify the VG to update.
4 logical volume(s) in volume group "local" now active
client@node2:~$ sudo vgs
WARNING: PV /dev/loop2 in VG local is using an old PV header, modify the VG to update.
  VG    #PV #LV #SN Attr   VSize   VFree
  local   1   4   0 wz--n- <13.97g     0
client@node2:~$ cd /dev/local/
client@node2:/dev/local$ ls
containers_c1 containers_c3 images_793ac61f572b7512805b91795936e5dfbd4608b312a399b95b74ecf42f35c402
client@node2:/dev/local$ lxc list
+------+---------+-----------------------+------+-----------+-----------+----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+------+---------+-----------------------+------+-----------+-----------+----------+
| c1 | RUNNING | 10.192.128.206 (eth0) | | CONTAINER | 0 | node2 |
+------+---------+-----------------------+------+-----------+-----------+----------+
| c2 | STOPPED | | | CONTAINER | 0 | node3 |
+------+---------+-----------------------+------+-----------+-----------+----------+
| c3 | RUNNING | 10.192.128.216 (eth0) | | CONTAINER | 0 | node2 |
+------+---------+-----------------------+------+-----------+-----------+----------+
@jsnjack great! The question is why LXD didn't do that for you on start. Can you reboot the machine and try starting LXD again to see whether it activates them this time? If not, can you try running just vgchange -ay and then repeat the vgs command? I'm wondering if it needs the specific volume group name added to it (and I've already identified an issue with that not occurring).
After running the sudo vgchange -ay local command, containers now keep starting automatically after a reboot. :)
Many thanks for such a fast response @tomponline and @stgraber!
@jsnjack you're welcome. I'm going to keep this open a little longer to put up a patch that I think will avoid this occurring in the future.
Required information
Issue description
Container with LVM storage doesn't survive a server reboot.
Steps to reproduce
Information to attach
lxc start output:
The container log is empty.