Closed by basak 1 month ago
Hi @basak
What does cat /var/snap/lxd/common/lxd/logs/lxd.log show? Does it show something like this?
time="2024-08-02T12:54:23Z" level=error msg="Failed to start the daemon" err="Failed applying patch \"storage_prefix_bucket_names_with_project\": Failed applying patch to pool \"default\": Failed to list directory \"/var/snap/lxd/common/lxd/storage-pools/default/buckets\" for volume type \"buckets\": open /var/snap/lxd/common/lxd/storage-pools/default/buckets: no such file or directory"
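If so, one possible manual workaround (an assumption on my part — the patch appears to fail only because it cannot list a directory that was never created on pre-buckets installs) would be to create the empty buckets directory for the pool and restart the daemon:

```shell
# Assumption: the failing patch only needs to list this directory,
# so creating it empty may let the daemon start. The pool name
# "default" matches the error message above.
sudo mkdir -p /var/snap/lxd/common/lxd/storage-pools/default/buckets
sudo systemctl restart snap.lxd.daemon
```

This only papers over the missing directory; the underlying backport gap still needs a proper fix.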
@basak FWIW I removed LXD using snap remove lxd --purge and reinstalled it from both 5.21/stable and latest/stable. The sid container started, but the systemd inside it did not start properly:
root@v1:~# lxc ls
+------+---------+------+-----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+---------+------+-----------------------------------------------+-----------+-----------+
| foo | RUNNING | | fd42:b778:3c14:c97a:216:3eff:fe47:9588 (eth0) | CONTAINER | 0 |
+------+---------+------+-----------------------------------------------+-----------+-----------+
root@v1:~# lxc shell foo
root@foo:~# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.7 19324 7648 ? Ss 12:58 0:00 /sbin/init
root 30 0.0 0.3 9660 3668 pts/1 Ss 12:58 0:00 su -l
root 31 0.0 0.4 7260 4104 pts/1 S 12:58 0:00 -bash
root 35 0.0 0.3 8180 3624 pts/1 R+ 12:58 0:00 ps aux
I suspect this is due to the sid container wanting cgroupv2 while Focal only provides cgroupv1. Do you know if there is a way to enable cgroupv2 on Focal?
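For reference, a sketch of how one could switch Focal to the unified cgroup v2 hierarchy, assuming a GRUB-booted system. This edits boot configuration, so treat it as untested on this particular setup:

```shell
# Focal's systemd (v245) supports cgroup v2 but boots in hybrid mode
# by default; this kernel parameter forces the unified hierarchy.
# Append it to GRUB_CMDLINE_LINUX in /etc/default/grub:
#   systemd.unified_cgroup_hierarchy=1
sudo update-grub   # regenerate grub.cfg with the new command line
sudo reboot
```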
BTW, steps 3 and 4 aren't needed anymore, as 4.0.10 includes the new remote by default.
@basak re Debian Sid on Focal, I suspect the issue is the same as for Ubuntu Oracular, the use of systemd v256 which removes cgroupv1 support, see https://github.com/canonical/lxd/issues/13844#issuecomment-2268632337 for more info.
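A quick way to check which hierarchy a host exposes: the filesystem type of /sys/fs/cgroup is cgroup2fs only under the unified v2 layout. A minimal sketch:

```shell
# Report which cgroup layout the host boots with. systemd v256 guests
# (e.g. current Debian sid) require "cgroup2fs" (unified v2) here;
# "tmpfs" indicates the legacy/hybrid v1 layout that Focal defaults to.
fstype=$(stat -fc %T /sys/fs/cgroup)
if [ "$fstype" = "cgroup2fs" ]; then
    echo "unified cgroup v2 - OK for systemd v256 containers"
else
    echo "legacy/hybrid cgroups ($fstype) - systemd v256 containers will not boot"
fi
```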
It looks like this commit, which adds the buckets directory for storage pools, was backported to 5.0 (introduced in 5.0.2) but not to 4.0. I would expect that upgrades from pre-5.0.2 are affected by this as well.
Still meandering through the dir storage driver for a solution here; more tomorrow.
Fixed by https://github.com/canonical/lxd/pull/13957; will backport into 5.21.
Required information
Linux rbasak-lxd 5.4.0-189-generic #209-Ubuntu SMP Fri Jun 7 14:05:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
5.21.2 LTS
5.21.2 LTS
Issue description
Following on from #13806 I tried updating the snap to 5.21/stable. This caused lxc to fail entirely.
Steps to reproduce
1. uvt-kvm create --memory=1024 rbasak-lxd release=focal arch=amd64 with this image: release=focal arch=amd64 label=release (20240710), then uvt-kvm wait rbasak-lxd, then uvt-kvm ssh rbasak-lxd.
2. sudo lxd init, specifying all defaults except that I used the "dir" storage type.
3. lxc remote rm images
4. lxc remote add images https://images.lxd.canonical.com --protocol=simplestreams
5. lxc launch images:debian/sid/amd64 foo
6. sudo snap refresh --channel=5.21/stable lxd
7. lxc list
Expected results: listing of the one container I created previously.
Actual results:
Error: LXD unix socket not accessible: Get "http://unix.socket/1.0": EOF
Logging out of the VM and back in again, and trying lxc list again, I get:
Error: LXD unix socket "/var/snap/lxd/common/lxd/unix.socket" not accessible: Get "http://unix.socket/1.0": dial unix /var/snap/lxd/common/lxd/unix.socket: connect: connection refused
Rebooting the VM doesn't help. After that, I get:
Error: LXD unix socket not accessible: Get "http://unix.socket/1.0": EOF
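When the client reports the socket as unreachable like this, the daemon itself is failing to start, so the client output is a dead end; the daemon-side logs (such as the lxd.log patch failure quoted earlier in this thread) are where the real error shows up. A small sketch to distinguish the two socket states:

```shell
# The client error means nothing is answering on LXD's unix socket.
# Distinguish "socket file missing" (daemon never started) from
# "socket present but refusing" (daemon crashed during startup).
sock=/var/snap/lxd/common/lxd/unix.socket
if [ -S "$sock" ]; then
    echo "socket exists but daemon is not answering - check lxd.log"
else
    echo "socket missing - daemon failed to start - check 'snap logs lxd'"
fi
```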