If it is related to leftover LXD issues, I would like to know the proper way to rename all the lxdbr0 interfaces on the machine and in the Incus configs to incusbr0 for uniformity (a sketch of one possible approach follows below).
But I am at a loss on debugging this one.
I tried without the custom rc.local commands as well; same issue.
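On the rename question, a hedged sketch of one possible approach. This assumes the bridge is an Incus-managed network with no running instances attached to it, and the profile and device names (default, eth0) are illustrative, not taken from this system:

```sh
# rename the managed bridge (refused if instances are currently using it)
incus network rename lxdbr0 incusbr0

# then point any profile devices that still reference the old name at the
# new one, e.g. for a NIC device "eth0" in the "default" profile:
incus profile device set default eth0 network=incusbr0
```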
Log suggests another cgroup issue.
Please show: cat /proc/self/mountinfo
I already did in my previous message:
08:53:21 brandon@r320-1 ~ : cat /proc/self/mountinfo
21 28 0:20 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw
22 28 0:21 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw
23 28 0:5 / /dev rw,nosuid,noexec - devtmpfs devtmpfs rw,size=32838940k,nr_inodes=8209735,mode=755,inode64
24 23 0:22 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
25 23 0:23 / /dev/shm rw,nosuid,nodev,noexec - tmpfs tmpfs rw,inode64
26 28 0:24 / /run rw,nosuid,nodev,noexec - tmpfs tmpfs rw,mode=755,inode64
28 1 0:26 / / rw,relatime - zfs zroot/ROOT/void rw,xattr,posixacl,casesensitive
29 22 0:6 / /sys/kernel/security rw,relatime - securityfs securityfs rw
30 22 0:27 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime - efivarfs efivarfs rw
31 22 0:28 / /sys/fs/cgroup rw,relatime - cgroup2 cgroup2 rw,nsdelegate
32 28 0:29 / /home rw,relatime - zfs zroot/home rw,xattr,posixacl,casesensitive
33 28 8:65 / /boot/efi rw,relatime - vfat /dev/sde1 rw,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro
34 31 0:30 / /sys/fs/cgroup/systemd rw,relatime - cgroup systemd rw,name=systemd
35 28 0:31 / /var/lib/incus/shmounts rw,relatime shared:1 - tmpfs tmpfs rw,size=100k,mode=711,inode64
36 28 0:32 / /var/lib/incus/guestapi rw,relatime - tmpfs tmpfs rw,size=100k,mode=755,inode64
Here are the 2 lines of interest:
31 22 0:28 / /sys/fs/cgroup rw,relatime - cgroup2 cgroup2 rw,nsdelegate
34 31 0:30 / /sys/fs/cgroup/systemd rw,relatime - cgroup systemd rw,name=systemd
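To make the problem easier to see, a quick check (a hedged sketch; findmnt ships with util-linux, and the unmount is only a runtime fix, not persistent across reboots):

```sh
# list everything mounted under the cgroup root; a clean unified setup
# shows a single cgroup2 entry with nothing stacked below it
findmnt -R /sys/fs/cgroup

# drop the stray v1 name=systemd hierarchy for the current boot
umount /sys/fs/cgroup/systemd
```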
I have CGROUP_MODE=unified in rc.conf (not rc.local!) and no mounting of systemd in rc.local.
This is also what's currently recommended in the Void docs, which I'm working on updating, so if this doesn't work for you, I'd like to know.
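For reference, a minimal sketch of that setup on a Void/runit box, assuming the stock core-services handling of CGROUP_MODE:

```sh
# /etc/rc.conf -- read by Void's runit boot scripts;
# "unified" mounts a single cgroup2 hierarchy at /sys/fs/cgroup
CGROUP_MODE=unified
```

With that in place, any manual cgroup mount in /etc/rc.local should be removed so nothing gets stacked on top of the cgroup2 hierarchy.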
Ok, so when I do that it is working again, I guess. The only issue I am still seeing is that containers do not have an IP when they start at boot. I have to incus restart the container and then it will get an IP. Again, this is only affecting the servers I ran lxd-to-incus on in the past.
Right, the over-mounting of the cgroup2 hierarchy is an invalid cgroup setup, so it's not unexpected that things get confused and fail. No idea why that would only affect a system that went through lxd-to-incus, given that this is a system-wide setting completely outside of Incus.
The network issue may be a similar problem. I don't see how the fact that the container was created under LXD matters for that; if all containers fail to get an IP, it suggests an issue with the system's network management tooling, either firewalling things off or interacting badly with the veth devices that get created for the containers on startup (the latter sounds most likely given that an instance restart fixes it).
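A few things worth checking along those lines (a hedged sketch; the bridge name incusbr0 and the use of nftables are assumptions about this particular setup):

```sh
# look for firewall rules that might drop the containers' DHCP traffic
nft list ruleset | grep -iE 'incus|lxd'

# confirm the bridge came up at boot and that its dnsmasq is running
ip -br link show incusbr0
ps ax | grep '[d]nsmasq.*incusbr0'
```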
It could also be some kind of race within the instance, in which case incus console --show-log NAME may be interesting.
Closing as there's so far no indication of an Incus bug here.
Currently running Void Linux. There was an issue a few versions back with cgroups on Void Linux expecting systemd, for which a solution was given to me by @stgraber: run the following:
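(The original snippet didn't survive here; judging from the name=systemd mount visible in the mountinfo output later in this thread, it was presumably something along these lines, reconstructed as an assumption:)

```sh
# assumed reconstruction: mount a v1 "named" systemd hierarchy so the
# systemd-expecting cgroup code finds one; the exact original may differ
mkdir -p /sys/fs/cgroup/systemd
mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup/systemd
```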
This was placed in my /etc/rc.local file to run on startup, along with CGROUP='unified', which fixed the issues I was previously having with Incus. But now, after updating my system, all of my Incus containers on all of my servers are failing to start, leaving all of my services completely down and inaccessible once again.
Anytime I try to start a container I get:
I should mention that the only machines this is affecting are ones I previously ran lxd-to-incus on. I have a server that was set up with Incus from the start and never went through the LXD-to-Incus migration, and that one starts my containers perfectly fine.
So this is most likely related to leftover state from LXD, maybe the interface name?
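One way to check for LXD leftovers (a hedged sketch; these commands only inspect state and assume a standard Incus install):

```sh
# does the default profile still reference the old LXD bridge?
incus profile show default

# list managed networks; a migrated host may still show lxdbr0
incus network list

# any leftover lxdbr0 interface on the host itself?
ip -br link show | grep -i lxd
```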
Here are some commands whose output I've included to try and help debug:
[REDACTED other interfaces here; eno1 is my Ethernet interface]