acidvegas closed this issue 5 months ago.
What version of Incus is that?
I certainly remember implementing and testing void support in lxd-to-incus but maybe that change isn't in your version yet.
0.5.1, the latest version in the void repos. I did see this PR but made this issue because it seems the issue persists.
If there are any logs or information you need, lmk.
Thank you for the timely response.
It would seem as though the void repo is out of date.
https://github.com/lxc/incus/actions/runs/8304297262/artifacts/1331261213 should get you the static binaries for the current main branch, which includes the latest lxd-to-incus.
The world needs more developers like you bredda. Cheers
Looks like it is in for a PR. My apologies for not investigating this more: https://github.com/void-linux/void-packages/pull/49265
No worries, glad it's working with the current version!
03:46:24brandon@paloalto-dev-34 ~ : lxd-to-incus
=> Looking for source server
==> Detected: xbps
=> Looking for target server
==> Detected: xbps
=> Connecting to source server
=> Connecting to the target server
=> Checking server versions
==> Source version: 5.20
==> Target version: 0.6
=> Validating version compatibility
=> Checking that the source server isn't empty
=> Checking that the target server is empty
Error: Target server isn't empty (storage pools found), can't proceed with migration.
Even after deleting the default zfs storage pool that was on incus, it then says:
Error: Target server isn't empty (networks found), can't proceed with migration.
This is on incus 0.6
With the current version of the code, the only way this would happen is if incus storage list does contain a storage pool.
Can you show:
incus project list
incus storage list
incus network list
incus profile show default
05:52:15 root@blackhole ~ : incus project list
+-------------------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+
| NAME | IMAGES | PROFILES | STORAGE VOLUMES | STORAGE BUCKETS | NETWORKS | NETWORK ZONES | DESCRIPTION | USED BY |
+-------------------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+
| default (current) | YES | YES | YES | YES | YES | YES | Default Incus project | 2 |
+-------------------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+
05:52:29 root@r620 ~ : incus storage list
+---------+--------+----------------------------------+-------------+---------+---------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE |
+---------+--------+----------------------------------+-------------+---------+---------+
| default | zfs | /var/lib/incus/disks/default.img | | 1 | CREATED |
+---------+--------+----------------------------------+-------------+---------+---------+
05:52:34 root@r620 ~ : incus network list
+----------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| NAME | TYPE | MANAGED | IPV4 | IPV6 | DESCRIPTION | USED BY | STATE |
+----------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| eno1 | physical | NO | | | | 0 | |
+----------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| eno2 | physical | NO | | | | 0 | |
+----------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| eno3 | physical | NO | | | | 0 | |
+----------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| eno4 | physical | NO | | | | 0 | |
+----------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| incusbr0 | bridge | YES | 10.25.101.1/24 | fd42:8419:29de:d411::1/64 | | 1 | CREATED |
+----------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| lxdbr0 | bridge | NO | | | | 0 | |
+----------+----------+---------+----------------+---------------------------+-------------+---------+---------+
05:52:38 root@r620 ~ : incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []
Right, so that's indeed not a clean Incus server. Do:
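(The cleanup commands themselves weren't preserved in this thread; based on the listings above, they would presumably have looked roughly like this, removing the profile devices first and then the pool and the network:)

incus profile device remove default root
incus profile device remove default eth0
incus storage delete default
incus network delete incusbr0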
And then run lxd-to-incus again.
Thank you, bredda. I just needed clarification on whether that was how I was supposed to proceed.
Last issue I am encountering on void: after doing that, when I try to start the container I get:
[services@blackhole ~]$ incus start elasticsearch-container
Error: Error occurred when starting proxy device: Error: No such file or directory - Failed to safely open namespace file descriptor based on pidfd 3
Try `incus info --show-log elasticsearch-container` for more info
Can you show incus config show elasticsearch-container and uname -a?
architecture: x86_64
config:
  boot.autostart: "true"
  image.architecture: amd64
  image.description: Debian bookworm amd64 (20240228_05:24)
  image.os: Debian
  image.release: bookworm
  image.serial: "20240228_05:24"
  image.type: squashfs
  image.variant: default
  limits.kernel.memlock: "9223372036854775807"
  limits.kernel.nofile: "65535"
  volatile.base_image: b9a12bf99efdac578271b4a3e616e8cd3dec33faa2baff7923d2d6ca79ed8993
  volatile.cloud-init.instance-id: c6a9f533-a1de-4f56-a66f-a62336684579
  volatile.eth0.hwaddr: 00:16:3e:8e:df:93
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: STOPPED
  volatile.uuid: 3095c9e3-3c33-4291-bf4e-1bbab4156e22
  volatile.uuid.generation: 3095c9e3-3c33-4291-bf4e-1bbab4156e22
devices:
  elasticsearch-http-port:
    connect: tcp:10.109.174.63:9200
    listen: tcp:0.0.0.0:1338
    type: proxy
  elasticsearch-trans-port:
    connect: tcp:10.109.174.63:9300
    listen: tcp:0.0.0.0:1337
    type: proxy
  eth0:
    ipv4.address: 10.109.174.63
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: elasticsearch-pool
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
Linux r320-2 6.6.32_1 #1 SMP PREEMPT_DYNAMIC Tue May 28 23:00:20 UTC 2024 x86_64 GNU/Linux
There seems to be something going on with the Incus build on void, either because of the C library used or because of the kernel, which is breaking pidfds. That's effectively out of scope for us as it's a distro-specific issue, so it's something you may need to report to the void packager for Incus.
That said, given your config above, I'd recommend doing:
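(The exact commands weren't captured in this thread; judging from the command run below, the suggestion was presumably along these lines, one per proxy device:)

incus config device set elasticsearch-container elasticsearch-http-port nat=true
incus config device set elasticsearch-container elasticsearch-trans-port nat=true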
That should take you away from using forkproxy (the bit that's using pidfds) and instead onto kernel-based firewalling (nftables or xtables), which will be faster and should work just fine in your case.
09:57:29 root@r320-2 /home/acidvegas : incus config device set elasticsearch-container elasticsearch-http-port nat=true
Error: Invalid devices: Device validation failed for "elasticsearch-http-port": Cannot listen on wildcard address "0.0.0.0" when in nat mode
If this is something left over from a bad lxd-to-incus build, can I just remove these devices and re-add them maybe?
Luckily I only have to do the LXD to Incus transition one time haha.
Ah, that's interesting, I thought we did support wildcard listen addresses for NAT mode. Do you have multiple IP addresses that you need those two proxy devices to listen on, on the host side?
If not, changing the 0.0.0.0 to the address you want on the host should do the trick.
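(A hedged illustration of that suggestion, using a made-up host address 203.0.113.10 in place of the real one:)

incus config device set elasticsearch-container elasticsearch-http-port listen=tcp:203.0.113.10:1338 nat=true
incus config device set elasticsearch-container elasticsearch-trans-port listen=tcp:203.0.113.10:1337 nat=true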
When using LXD I was just making it so incoming traffic on port 1337 would forward to port 9200 inside the container.
My IP may change at times, so that's why I was using 0.0.0.0.
Okay, so yeah, you'd definitely benefit from forkproxy working properly.
I don't know much about void, but all our tests for setups like yours are passing fine, so it's got to be something going on with void. The kernel is unlikely as that's not a kernel build option and your kernel is pretty recent, so something related to the C library would be my guess.
Do you know if your system is using musl or glibc?
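(Not part of the original exchange: a quick way to check which C library a Void install uses is to look at the loader a binary links against, for example:)

ldd /bin/ls    # glibc systems list libc.so.6 and ld-linux*; musl systems list ld-musl-*.so.1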
glibc, yes.
I'm not sure if this helps:
Log:
lxc elasticsearch-container 20240605011615.690 INFO lxccontainer - ../src/lxc/lxccontainer.c:do_lxcapi_start:997 - Set process title to [lxc monitor] /var/lib/incus/containers elasticsearch-container
lxc elasticsearch-container 20240605011615.691 INFO start - ../src/lxc/start.c:lxc_check_inherited:325 - Closed inherited fd 4
lxc elasticsearch-container 20240605011615.691 INFO start - ../src/lxc/start.c:lxc_check_inherited:325 - Closed inherited fd 5
lxc elasticsearch-container 20240605011615.691 INFO start - ../src/lxc/start.c:lxc_check_inherited:325 - Closed inherited fd 6
lxc elasticsearch-container 20240605011615.691 INFO start - ../src/lxc/start.c:lxc_check_inherited:325 - Closed inherited fd 16
lxc elasticsearch-container 20240605011615.691 INFO lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver nop
lxc elasticsearch-container 20240605011615.691 INFO conf - ../src/lxc/conf.c:run_script_argv:340 - Executing script "/proc/1057/exe callhook /var/lib/incus "default" "elasticsearch-container" start" for container "elasticsearch-container"
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "[all]"
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "reject_force_umount # comment this to allow umount -f; not recommended"
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "[all]"
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "kexec_load errno 38"
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[246:kexec_load] action[327718:errno] arch[0]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327718:errno] arch[1073741827]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327718:errno] arch[1073741886]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "open_by_handle_at errno 38"
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[304:open_by_handle_at] action[327718:errno] arch[0]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327718:errno] arch[1073741827]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327718:errno] arch[1073741886]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "init_module errno 38"
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[175:init_module] action[327718:errno] arch[0]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327718:errno] arch[1073741827]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327718:errno] arch[1073741886]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "finit_module errno 38"
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[313:finit_module] action[327718:errno] arch[0]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327718:errno] arch[1073741827]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327718:errno] arch[1073741886]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "delete_module errno 38"
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[176:delete_module] action[327718:errno] arch[0]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327718:errno] arch[1073741827]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327718:errno] arch[1073741886]
lxc elasticsearch-container 20240605011615.731 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:1017 - Merging compat seccomp contexts into main context
lxc elasticsearch-container 20240605011615.731 INFO start - ../src/lxc/start.c:lxc_init:881 - Container "elasticsearch-container" is initialized
lxc elasticsearch-container 20240605011615.732 INFO cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_monitor_create:1383 - The monitor process uses "lxc.monitor.elasticsearch-container" as cgroup
lxc elasticsearch-container 20240605011615.756 INFO cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_payload_create:1491 - The container process uses "lxc.payload.elasticsearch-container" as inner and "lxc.payload.elasticsearch-container" as limit cgroup
lxc elasticsearch-container 20240605011615.764 INFO start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWUSER
lxc elasticsearch-container 20240605011615.765 INFO start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWNS
lxc elasticsearch-container 20240605011615.765 INFO start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWPID
lxc elasticsearch-container 20240605011615.765 INFO start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWUTS
lxc elasticsearch-container 20240605011615.765 INFO start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWIPC
lxc elasticsearch-container 20240605011615.771 INFO conf - ../src/lxc/conf.c:lxc_map_ids:3603 - Caller maps host root. Writing mapping directly
lxc elasticsearch-container 20240605011615.771 NOTICE utils - ../src/lxc/utils.c:lxc_drop_groups:1368 - Dropped supplimentary groups
lxc elasticsearch-container 20240605011615.772 WARN cgfsng - ../src/lxc/cgroups/cgfsng.c:fchowmodat:1611 - No such file or directory - Failed to fchownat(44, memory.oom.group, 65536, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc elasticsearch-container 20240605011615.772 WARN cgfsng - ../src/lxc/cgroups/cgfsng.c:fchowmodat:1611 - No such file or directory - Failed to fchownat(44, memory.reclaim, 65536, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc elasticsearch-container 20240605011615.773 INFO start - ../src/lxc/start.c:do_start:1104 - Unshared CLONE_NEWNET
lxc elasticsearch-container 20240605011615.773 NOTICE utils - ../src/lxc/utils.c:lxc_drop_groups:1368 - Dropped supplimentary groups
lxc elasticsearch-container 20240605011615.773 NOTICE utils - ../src/lxc/utils.c:lxc_switch_uid_gid:1344 - Switched to gid 0
lxc elasticsearch-container 20240605011615.773 NOTICE utils - ../src/lxc/utils.c:lxc_switch_uid_gid:1353 - Switched to uid 0
lxc elasticsearch-container 20240605011615.773 INFO start - ../src/lxc/start.c:do_start:1204 - Unshared CLONE_NEWCGROUP
lxc elasticsearch-container 20240605011615.806 INFO conf - ../src/lxc/conf.c:setup_utsname:875 - Set hostname to "elasticsearch-container"
lxc elasticsearch-container 20240605011615.815 INFO network - ../src/lxc/network.c:lxc_setup_network_in_child_namespaces:4019 - Finished setting up network devices with caller assigned names
lxc elasticsearch-container 20240605011615.815 INFO conf - ../src/lxc/conf.c:mount_autodev:1219 - Preparing "/dev"
lxc elasticsearch-container 20240605011615.815 INFO conf - ../src/lxc/conf.c:mount_autodev:1280 - Prepared "/dev"
lxc elasticsearch-container 20240605011615.816 INFO conf - ../src/lxc/conf.c:lxc_fill_autodev:1317 - Populating "/dev"
lxc elasticsearch-container 20240605011615.816 INFO conf - ../src/lxc/conf.c:lxc_fill_autodev:1405 - Populated "/dev"
lxc elasticsearch-container 20240605011615.816 INFO conf - ../src/lxc/conf.c:lxc_transient_proc:3775 - Caller's PID is 1; /proc/self points to 1
lxc elasticsearch-container 20240605011615.816 INFO conf - ../src/lxc/conf.c:lxc_setup_ttys:1072 - Finished setting up 0 /dev/tty<N> device(s)
lxc elasticsearch-container 20240605011615.817 INFO conf - ../src/lxc/conf.c:setup_personality:1917 - Set personality to "0lx0"
lxc elasticsearch-container 20240605011615.817 NOTICE conf - ../src/lxc/conf.c:lxc_setup:4469 - The container "elasticsearch-container" is set up
lxc elasticsearch-container 20240605011615.817 NOTICE start - ../src/lxc/start.c:start:2194 - Exec'ing "/sbin/init"
lxc elasticsearch-container 20240605011615.818 NOTICE start - ../src/lxc/start.c:post_start:2205 - Started "/sbin/init" with pid "2019"
lxc elasticsearch-container 20240605011615.818 NOTICE start - ../src/lxc/start.c:signal_handler:446 - Received 17 from pid 2020 instead of container init 2019
lxc elasticsearch-container 20240605011615.859 INFO error - ../src/lxc/error.c:lxc_error_set_and_log:31 - Child <2019> ended on error (255)
lxc elasticsearch-container 20240605011615.883 INFO conf - ../src/lxc/conf.c:run_script_argv:340 - Executing script "/usr/libexec/incus/incusd callhook /var/lib/incus "default" "elasticsearch-container" stopns" for container "elasticsearch-container"
lxc elasticsearch-container 20240605011615.974 INFO conf - ../src/lxc/conf.c:lxc_map_ids:3603 - Caller maps host root. Writing mapping directly
lxc elasticsearch-container 20240605011615.974 NOTICE utils - ../src/lxc/utils.c:lxc_drop_groups:1368 - Dropped supplimentary groups
lxc elasticsearch-container 20240605011615.993 INFO conf - ../src/lxc/conf.c:run_script_argv:340 - Executing script "/usr/libexec/incus/incusd callhook /var/lib/incus "default" "elasticsearch-container" stop" for container "elasticsearch-container"
Kind of boned right now. All my containers have been converted with lxd-to-incus, I just can't get them to start right now, so everything is halted.
Very unfamiliar with this territory @stgraber, any other logs or debug information that might help?
All my infrastructure is kind of stuck right now since it removed LXD already, so I am at a standstill on this one :(
Can you do:
incus create images:alpine/edge a1
incus config device add a1 proxy1 proxy connect=tcp:0.0.0.0:9200 listen=tcp:0.0.0.0:1338
incus config device add a1 proxy2 proxy connect=tcp:0.0.0.0:9300 listen=tcp:0.0.0.0:1337
incus start a1
I've tested that here inside of a void container running on my Debian 12 system and that's working just fine, so if that's failing for you, then that would point towards a kernel issue.
[brandon@blackhole ~]$ incus create images:alpine/edge a1
Creating a1
[brandon@blackhole ~]$ incus config device add a1 proxy1 proxy connect=tcp:0.0.0.0:9200 listen=tcp:0.0.0.0:1338
Device proxy1 added to a1
[brandon@blackhole ~]$ incus config device add a1 proxy2 proxy connect=tcp:0.0.0.0:9300 listen=tcp:0.0.0.0:1337
Device proxy2 added to a1
[brandon@blackhole ~]$ incus start a1
[brandon@blackhole ~]$ incus list
+-------------------------+---------+-----------------------+------+-----------+-----------+
|          NAME           |  STATE  |         IPV4          | IPV6 |   TYPE    | SNAPSHOTS |
+-------------------------+---------+-----------------------+------+-----------+-----------+
| a1                      | RUNNING | 10.109.174.173 (eth0) |      | CONTAINER | 0         |
+-------------------------+---------+-----------------------+------+-----------+-----------+
| elasticsearch-container | STOPPED |                       |      | CONTAINER | 0         |
+-------------------------+---------+-----------------------+------+-----------+-----------+
It looks like that ran with no problem.
Side note: I still have my storage pool for the elastic container... so my data is OK, I hope.
+--------------------+--------+-------------------------------------------------+-------------+---------+---------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE |
+--------------------+--------+-------------------------------------------------+-------------+---------+---------+
| default | dir | /var/lib/incus/storage-pools/default | | 2 | CREATED |
+--------------------+--------+-------------------------------------------------+-------------+---------+---------+
| elasticsearch-pool | dir | /var/lib/incus/storage-pools/elasticsearch-pool | | 1 | CREATED |
+--------------------+--------+-------------------------------------------------+-------------+---------+---------+
| test-pool | dir | /var/lib/incus/storage-pools/test-pool | | 0 | CREATED |
+--------------------+--------+-------------------------------------------------+-------------+---------+---------+
That's so odd though. So what do you think, is there a better solution for the elastic container?
Is there any way I could maybe clone the container and add the port forwards to the new cloned container?
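(A hedged sketch of that clone-and-re-add idea, which is not what ended up being done in the thread; the copy name is made up and the connect/listen values are taken from the earlier config:)

incus copy elasticsearch-container elasticsearch-copy
incus config device add elasticsearch-copy elasticsearch-http-port proxy connect=tcp:10.109.174.63:9200 listen=tcp:0.0.0.0:1338
incus config device add elasticsearch-copy elasticsearch-trans-port proxy connect=tcp:10.109.174.63:9300 listen=tcp:0.0.0.0:1337
incus start elasticsearch-copy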
Can you try starting your container without those two devices, see if it starts up fine then or if it hits another problem?
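(For reference, a hedged sketch of how those two devices would be detached before starting, assuming they were still attached at this point:)

incus config device remove elasticsearch-container elasticsearch-http-port
incus config device remove elasticsearch-container elasticsearch-trans-port
incus start elasticsearch-container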
It ran without an error, but the container is still showing as STOPPED:
[brandon@blackhole ~]$ incus start elasticsearch-container
[brandon@blackhole ~]$ incus list
+-------------------------+---------+------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------------+---------+------+------+-----------+-----------+
| a1 | STOPPED | | | CONTAINER | 0 |
+-------------------------+---------+------+------+-----------+-----------+
| elasticsearch-container | STOPPED | | | CONTAINER | 0 |
+-------------------------+---------+------+------+-----------+-----------+
[brandon@blackhole root]$ incus config show elasticsearch-container
architecture: x86_64
config:
  boot.autostart: "true"
  image.architecture: amd64
  image.description: Debian bookworm amd64 (20240228_05:24)
  image.os: Debian
  image.release: bookworm
  image.serial: "20240228_05:24"
  image.type: squashfs
  image.variant: default
  limits.kernel.memlock: "9223372036854775807"
  limits.kernel.nofile: "65535"
  volatile.base_image: b9a12bf99efdac578271b4a3e616e8cd3dec33faa2baff7923d2d6ca79ed8993
  volatile.cloud-init.instance-id: c6a9f533-a1de-4f56-a66f-a62336684579
  volatile.eth0.hwaddr: 00:16:3e:8e:df:93
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: STOPPED
  volatile.last_state.ready: "false"
  volatile.uuid: 3095c9e3-3c33-4291-bf4e-1bbab4156e22
  volatile.uuid.generation: 3095c9e3-3c33-4291-bf4e-1bbab4156e22
devices:
  eth0:
    ipv4.address: 10.109.174.61
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: elasticsearch-pool
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
[brandon@blackhole root]$ incus info --show-log elasticsearch-container
Name: elasticsearch-container
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2024/02/29 15:07 EST
Last Used: 2024/06/06 10:48 EDT
Log:
lxc elasticsearch-container 20240606144807.980 INFO lxccontainer - ../src/lxc/lxccontainer.c:do_lxcapi_start:997 - Set process title to [lxc monitor] /var/lib/incus/containers elasticsearch-container
lxc elasticsearch-container 20240606144807.981 INFO start - ../src/lxc/start.c:lxc_check_inherited:325 - Closed inherited fd 4
lxc elasticsearch-container 20240606144807.981 INFO start - ../src/lxc/start.c:lxc_check_inherited:325 - Closed inherited fd 5
lxc elasticsearch-container 20240606144807.981 INFO start - ../src/lxc/start.c:lxc_check_inherited:325 - Closed inherited fd 6
lxc elasticsearch-container 20240606144807.981 INFO start - ../src/lxc/start.c:lxc_check_inherited:325 - Closed inherited fd 16
lxc elasticsearch-container 20240606144807.981 INFO lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver nop
lxc elasticsearch-container 20240606144807.981 INFO conf - ../src/lxc/conf.c:run_script_argv:340 - Executing script "/proc/1021/exe callhook /var/lib/incus "default" "elasticsearch-container" start" for container "elasticsearch-container"
lxc elasticsearch-container 20240606144808.220 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "[all]"
lxc elasticsearch-container 20240606144808.221 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "reject_force_umount # comment this to allow umount -f; not recommended"
lxc elasticsearch-container 20240606144808.221 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
lxc elasticsearch-container 20240606144808.221 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
lxc elasticsearch-container 20240606144808.221 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
lxc elasticsearch-container 20240606144808.221 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "[all]"
lxc elasticsearch-container 20240606144808.222 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "kexec_load errno 38"
lxc elasticsearch-container 20240606144808.222 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[246:kexec_load] action[327718:errno] arch[0]
lxc elasticsearch-container 20240606144808.222 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327718:errno] arch[1073741827]
lxc elasticsearch-container 20240606144808.222 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327718:errno] arch[1073741886]
lxc elasticsearch-container 20240606144808.222 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "open_by_handle_at errno 38"
lxc elasticsearch-container 20240606144808.222 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[304:open_by_handle_at] action[327718:errno] arch[0]
lxc elasticsearch-container 20240606144808.222 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327718:errno] arch[1073741827]
lxc elasticsearch-container 20240606144808.222 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327718:errno] arch[1073741886]
lxc elasticsearch-container 20240606144808.223 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "init_module errno 38"
lxc elasticsearch-container 20240606144808.223 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[175:init_module] action[327718:errno] arch[0]
lxc elasticsearch-container 20240606144808.223 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327718:errno] arch[1073741827]
lxc elasticsearch-container 20240606144808.223 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327718:errno] arch[1073741886]
lxc elasticsearch-container 20240606144808.223 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "finit_module errno 38"
lxc elasticsearch-container 20240606144808.223 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[313:finit_module] action[327718:errno] arch[0]
lxc elasticsearch-container 20240606144808.223 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327718:errno] arch[1073741827]
lxc elasticsearch-container 20240606144808.224 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327718:errno] arch[1073741886]
lxc elasticsearch-container 20240606144808.224 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "delete_module errno 38"
lxc elasticsearch-container 20240606144808.224 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[176:delete_module] action[327718:errno] arch[0]
lxc elasticsearch-container 20240606144808.224 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327718:errno] arch[1073741827]
lxc elasticsearch-container 20240606144808.224 INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327718:errno] arch[1073741886]
lxc elasticsearch-container 20240606144808.224 INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:1017 - Merging compat seccomp contexts into main context
lxc elasticsearch-container 20240606144808.224 INFO start - ../src/lxc/start.c:lxc_init:881 - Container "elasticsearch-container" is initialized
lxc elasticsearch-container 20240606144808.231 INFO cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_monitor_create:1383 - The monitor process uses "lxc.monitor.elasticsearch-container" as cgroup
lxc elasticsearch-container 20240606144808.345 INFO cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_payload_create:1491 - The container process uses "lxc.payload.elasticsearch-container" as inner and "lxc.payload.elasticsearch-container" as limit cgroup
lxc elasticsearch-container 20240606144808.352 INFO start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWUSER
lxc elasticsearch-container 20240606144808.352 INFO start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWNS
lxc elasticsearch-container 20240606144808.352 INFO start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWPID
lxc elasticsearch-container 20240606144808.352 INFO start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWUTS
lxc elasticsearch-container 20240606144808.353 INFO start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWIPC
lxc elasticsearch-container 20240606144808.415 INFO conf - ../src/lxc/conf.c:lxc_map_ids:3603 - Caller maps host root. Writing mapping directly
lxc elasticsearch-container 20240606144808.416 NOTICE utils - ../src/lxc/utils.c:lxc_drop_groups:1368 - Dropped supplimentary groups
lxc elasticsearch-container 20240606144808.420 WARN cgfsng - ../src/lxc/cgroups/cgfsng.c:fchowmodat:1611 - No such file or directory - Failed to fchownat(44, memory.oom.group, 65536, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc elasticsearch-container 20240606144808.420 WARN cgfsng - ../src/lxc/cgroups/cgfsng.c:fchowmodat:1611 - No such file or directory - Failed to fchownat(44, memory.reclaim, 65536, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc elasticsearch-container 20240606144808.433 INFO start - ../src/lxc/start.c:do_start:1104 - Unshared CLONE_NEWNET
lxc elasticsearch-container 20240606144808.434 NOTICE utils - ../src/lxc/utils.c:lxc_drop_groups:1368 - Dropped supplimentary groups
lxc elasticsearch-container 20240606144808.434 NOTICE utils - ../src/lxc/utils.c:lxc_switch_uid_gid:1344 - Switched to gid 0
lxc elasticsearch-container 20240606144808.434 NOTICE utils - ../src/lxc/utils.c:lxc_switch_uid_gid:1353 - Switched to uid 0
lxc elasticsearch-container 20240606144808.435 INFO start - ../src/lxc/start.c:do_start:1204 - Unshared CLONE_NEWCGROUP
lxc elasticsearch-container 20240606144808.666 INFO conf - ../src/lxc/conf.c:setup_utsname:875 - Set hostname to "elasticsearch-container"
lxc elasticsearch-container 20240606144808.795 INFO network - ../src/lxc/network.c:lxc_setup_network_in_child_namespaces:4019 - Finished setting up network devices with caller assigned names
lxc elasticsearch-container 20240606144808.795 INFO conf - ../src/lxc/conf.c:mount_autodev:1219 - Preparing "/dev"
lxc elasticsearch-container 20240606144808.796 INFO conf - ../src/lxc/conf.c:mount_autodev:1280 - Prepared "/dev"
lxc elasticsearch-container 20240606144808.807 INFO conf - ../src/lxc/conf.c:lxc_fill_autodev:1317 - Populating "/dev"
lxc elasticsearch-container 20240606144808.809 INFO conf - ../src/lxc/conf.c:lxc_fill_autodev:1405 - Populated "/dev"
lxc elasticsearch-container 20240606144808.809 INFO conf - ../src/lxc/conf.c:lxc_transient_proc:3775 - Caller's PID is 1; /proc/self points to 1
lxc elasticsearch-container 20240606144808.811 INFO conf - ../src/lxc/conf.c:lxc_setup_ttys:1072 - Finished setting up 0 /dev/tty<N> device(s)
lxc elasticsearch-container 20240606144808.815 INFO conf - ../src/lxc/conf.c:setup_personality:1917 - Set personality to "0lx0"
lxc elasticsearch-container 20240606144808.816 NOTICE conf - ../src/lxc/conf.c:lxc_setup:4469 - The container "elasticsearch-container" is set up
lxc elasticsearch-container 20240606144808.821 NOTICE start - ../src/lxc/start.c:start:2194 - Exec'ing "/sbin/init"
lxc elasticsearch-container 20240606144808.827 NOTICE start - ../src/lxc/start.c:post_start:2205 - Started "/sbin/init" with pid "2506"
lxc elasticsearch-container 20240606144808.828 NOTICE start - ../src/lxc/start.c:signal_handler:446 - Received 17 from pid 2507 instead of container init 2506
lxc elasticsearch-container 20240606144808.115 INFO error - ../src/lxc/error.c:lxc_error_set_and_log:31 - Child <2506> ended on error (255)
lxc elasticsearch-container 20240606144808.136 INFO conf - ../src/lxc/conf.c:run_script_argv:340 - Executing script "/usr/libexec/incus/incusd callhook /var/lib/incus "default" "elasticsearch-container" stopns" for container "elasticsearch-container"
lxc elasticsearch-container 20240606144808.212 INFO conf - ../src/lxc/conf.c:lxc_map_ids:3603 - Caller maps host root. Writing mapping directly
lxc elasticsearch-container 20240606144808.212 NOTICE utils - ../src/lxc/utils.c:lxc_drop_groups:1368 - Dropped supplimentary groups
lxc elasticsearch-container 20240606144808.231 INFO conf - ../src/lxc/conf.c:run_script_argv:340 - Executing script "/usr/libexec/incus/incusd callhook /var/lib/incus "default" "elasticsearch-container" stop" for container "elasticsearch-container"
incus start elasticsearch-container --debug
DEBUG [2024-06-06T10:54:27-04:00] Got response struct from Incus
DEBUG [2024-06-06T10:54:27-04:00]
{
"config": {
"images.auto_update_interval": "0"
},
"api_extensions": [
"storage_zfs_remove_snapshots",
"container_host_shutdown_timeout",
"container_stop_priority",
"container_syscall_filtering",
"auth_pki",
"container_last_used_at",
"etag",
"patch",
"usb_devices",
"https_allowed_credentials",
"image_compression_algorithm",
"directory_manipulation",
"container_cpu_time",
"storage_zfs_use_refquota",
"storage_lvm_mount_options",
"network",
"profile_usedby",
"container_push",
"container_exec_recording",
"certificate_update",
"container_exec_signal_handling",
"gpu_devices",
"container_image_properties",
"migration_progress",
"id_map",
"network_firewall_filtering",
"network_routes",
"storage",
"file_delete",
"file_append",
"network_dhcp_expiry",
"storage_lvm_vg_rename",
"storage_lvm_thinpool_rename",
"network_vlan",
"image_create_aliases",
"container_stateless_copy",
"container_only_migration",
"storage_zfs_clone_copy",
"unix_device_rename",
"storage_lvm_use_thinpool",
"storage_rsync_bwlimit",
"network_vxlan_interface",
"storage_btrfs_mount_options",
"entity_description",
"image_force_refresh",
"storage_lvm_lv_resizing",
"id_map_base",
"file_symlinks",
"container_push_target",
"network_vlan_physical",
"storage_images_delete",
"container_edit_metadata",
"container_snapshot_stateful_migration",
"storage_driver_ceph",
"storage_ceph_user_name",
"resource_limits",
"storage_volatile_initial_source",
"storage_ceph_force_osd_reuse",
"storage_block_filesystem_btrfs",
"resources",
"kernel_limits",
"storage_api_volume_rename",
"network_sriov",
"console",
"restrict_dev_incus",
"migration_pre_copy",
"infiniband",
"dev_incus_events",
"proxy",
"network_dhcp_gateway",
"file_get_symlink",
"network_leases",
"unix_device_hotplug",
"storage_api_local_volume_handling",
"operation_description",
"clustering",
"event_lifecycle",
"storage_api_remote_volume_handling",
"nvidia_runtime",
"container_mount_propagation",
"container_backup",
"dev_incus_images",
"container_local_cross_pool_handling",
"proxy_unix",
"proxy_udp",
"clustering_join",
"proxy_tcp_udp_multi_port_handling",
"network_state",
"proxy_unix_dac_properties",
"container_protection_delete",
"unix_priv_drop",
"pprof_http",
"proxy_haproxy_protocol",
"network_hwaddr",
"proxy_nat",
"network_nat_order",
"container_full",
"backup_compression",
"nvidia_runtime_config",
"storage_api_volume_snapshots",
"storage_unmapped",
"projects",
"network_vxlan_ttl",
"container_incremental_copy",
"usb_optional_vendorid",
"snapshot_scheduling",
"snapshot_schedule_aliases",
"container_copy_project",
"clustering_server_address",
"clustering_image_replication",
"container_protection_shift",
"snapshot_expiry",
"container_backup_override_pool",
"snapshot_expiry_creation",
"network_leases_location",
"resources_cpu_socket",
"resources_gpu",
"resources_numa",
"kernel_features",
"id_map_current",
"event_location",
"storage_api_remote_volume_snapshots",
"network_nat_address",
"container_nic_routes",
"cluster_internal_copy",
"seccomp_notify",
"lxc_features",
"container_nic_ipvlan",
"network_vlan_sriov",
"storage_cephfs",
"container_nic_ipfilter",
"resources_v2",
"container_exec_user_group_cwd",
"container_syscall_intercept",
"container_disk_shift",
"storage_shifted",
"resources_infiniband",
"daemon_storage",
"instances",
"image_types",
"resources_disk_sata",
"clustering_roles",
"images_expiry",
"resources_network_firmware",
"backup_compression_algorithm",
"ceph_data_pool_name",
"container_syscall_intercept_mount",
"compression_squashfs",
"container_raw_mount",
"container_nic_routed",
"container_syscall_intercept_mount_fuse",
"container_disk_ceph",
"virtual-machines",
"image_profiles",
"clustering_architecture",
"resources_disk_id",
"storage_lvm_stripes",
"vm_boot_priority",
"unix_hotplug_devices",
"api_filtering",
"instance_nic_network",
"clustering_sizing",
"firewall_driver",
"projects_limits",
"container_syscall_intercept_hugetlbfs",
"limits_hugepages",
"container_nic_routed_gateway",
"projects_restrictions",
"custom_volume_snapshot_expiry",
"volume_snapshot_scheduling",
"trust_ca_certificates",
"snapshot_disk_usage",
"clustering_edit_roles",
"container_nic_routed_host_address",
"container_nic_ipvlan_gateway",
"resources_usb_pci",
"resources_cpu_threads_numa",
"resources_cpu_core_die",
"api_os",
"container_nic_routed_host_table",
"container_nic_ipvlan_host_table",
"container_nic_ipvlan_mode",
"resources_system",
"images_push_relay",
"network_dns_search",
"container_nic_routed_limits",
"instance_nic_bridged_vlan",
"network_state_bond_bridge",
"usedby_consistency",
"custom_block_volumes",
"clustering_failure_domains",
"resources_gpu_mdev",
"console_vga_type",
"projects_limits_disk",
"network_type_macvlan",
"network_type_sriov",
"container_syscall_intercept_bpf_devices",
"network_type_ovn",
"projects_networks",
"projects_networks_restricted_uplinks",
"custom_volume_backup",
"backup_override_name",
"storage_rsync_compression",
"network_type_physical",
"network_ovn_external_subnets",
"network_ovn_nat",
"network_ovn_external_routes_remove",
"tpm_device_type",
"storage_zfs_clone_copy_rebase",
"gpu_mdev",
"resources_pci_iommu",
"resources_network_usb",
"resources_disk_address",
"network_physical_ovn_ingress_mode",
"network_ovn_dhcp",
"network_physical_routes_anycast",
"projects_limits_instances",
"network_state_vlan",
"instance_nic_bridged_port_isolation",
"instance_bulk_state_change",
"network_gvrp",
"instance_pool_move",
"gpu_sriov",
"pci_device_type",
"storage_volume_state",
"network_acl",
"migration_stateful",
"disk_state_quota",
"storage_ceph_features",
"projects_compression",
"projects_images_remote_cache_expiry",
"certificate_project",
"network_ovn_acl",
"projects_images_auto_update",
"projects_restricted_cluster_target",
"images_default_architecture",
"network_ovn_acl_defaults",
"gpu_mig",
"project_usage",
"network_bridge_acl",
"warnings",
"projects_restricted_backups_and_snapshots",
"clustering_join_token",
"clustering_description",
"server_trusted_proxy",
"clustering_update_cert",
"storage_api_project",
"server_instance_driver_operational",
"server_supported_storage_drivers",
"event_lifecycle_requestor_address",
"resources_gpu_usb",
"clustering_evacuation",
"network_ovn_nat_address",
"network_bgp",
"network_forward",
"custom_volume_refresh",
"network_counters_errors_dropped",
"metrics",
"image_source_project",
"clustering_config",
"network_peer",
"linux_sysctl",
"network_dns",
"ovn_nic_acceleration",
"certificate_self_renewal",
"instance_project_move",
"storage_volume_project_move",
"cloud_init",
"network_dns_nat",
"database_leader",
"instance_all_projects",
"clustering_groups",
"ceph_rbd_du",
"instance_get_full",
"qemu_metrics",
"gpu_mig_uuid",
"event_project",
"clustering_evacuation_live",
"instance_allow_inconsistent_copy",
"network_state_ovn",
"storage_volume_api_filtering",
"image_restrictions",
"storage_zfs_export",
"network_dns_records",
"storage_zfs_reserve_space",
"network_acl_log",
"storage_zfs_blocksize",
"metrics_cpu_seconds",
"instance_snapshot_never",
"certificate_token",
"instance_nic_routed_neighbor_probe",
"event_hub",
"agent_nic_config",
"projects_restricted_intercept",
"metrics_authentication",
"images_target_project",
"images_all_projects",
"cluster_migration_inconsistent_copy",
"cluster_ovn_chassis",
"container_syscall_intercept_sched_setscheduler",
"storage_lvm_thinpool_metadata_size",
"storage_volume_state_total",
"instance_file_head",
"instances_nic_host_name",
"image_copy_profile",
"container_syscall_intercept_sysinfo",
"clustering_evacuation_mode",
"resources_pci_vpd",
"qemu_raw_conf",
"storage_cephfs_fscache",
"network_load_balancer",
"vsock_api",
"instance_ready_state",
"network_bgp_holdtime",
"storage_volumes_all_projects",
"metrics_memory_oom_total",
"storage_buckets",
"storage_buckets_create_credentials",
"metrics_cpu_effective_total",
"projects_networks_restricted_access",
"storage_buckets_local",
"loki",
"acme",
"internal_metrics",
"cluster_join_token_expiry",
"remote_token_expiry",
"init_preseed",
"storage_volumes_created_at",
"cpu_hotplug",
"projects_networks_zones",
"network_txqueuelen",
"cluster_member_state",
"instances_placement_scriptlet",
"storage_pool_source_wipe",
"zfs_block_mode",
"instance_generation_id",
"disk_io_cache",
"amd_sev",
"storage_pool_loop_resize",
"migration_vm_live",
"ovn_nic_nesting",
"oidc",
"network_ovn_l3only",
"ovn_nic_acceleration_vdpa",
"cluster_healing",
"instances_state_total",
"auth_user",
"security_csm",
"instances_rebuild",
"numa_cpu_placement",
"custom_volume_iso",
"network_allocations",
"zfs_delegate",
"storage_api_remote_volume_snapshot_copy",
"operations_get_query_all_projects",
"metadata_configuration",
"syslog_socket",
"event_lifecycle_name_and_project",
"instances_nic_limits_priority",
"disk_initial_volume_configuration",
"operation_wait",
"image_restriction_privileged",
"cluster_internal_custom_volume_copy",
"disk_io_bus",
"storage_cephfs_create_missing",
"instance_move_config",
"ovn_ssl_config",
"certificate_description",
"disk_io_bus_virtio_blk",
"loki_config_instance",
"instance_create_start",
"clustering_evacuation_stop_options",
"boot_host_shutdown_action",
"agent_config_drive",
"network_state_ovn_lr",
"image_template_permissions",
"storage_bucket_backup",
"storage_lvm_cluster",
"shared_custom_block_volumes"
],
"api_status": "stable",
"api_version": "1.0",
"auth": "trusted",
"public": false,
"auth_methods": [
"tls"
],
"auth_user_name": "services",
"auth_user_method": "unix",
"environment": {
"addresses": [],
"architectures": [
"x86_64",
"i686"
],
"certificate": "-----BEGIN CERTIFICATE-----\nMIIB4zCCAWqgAwIBAgIRAK5JtzQwkJqFFBSGu4RaoYwwCgYIKoZIzj0EAwMwJDEM\nMAoGA1UEChMDTFhEMRQwEgYDVQQDDAtyb290QHIzMjAtMjAeFw0yNDAyMjkxMzIx\nMTBaFw0zNDAyMjYxMzIxMTBaMCQxDDAKBgNVBAoTA0xYRDEUMBIGA1UEAwwLcm9v\ndEByMzIwLTIwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATQIbAfb6T1lur4ypYJOh0i\n3A2XK9re/UawL91HDaSry3ADdF+jE/JyV+N6UX2oAI6AQ+fTVF9/bEkanj/xWUNn\nkXDHrITQBxdJkkwe8krO+DanuSI8x1jfAcFqSyzu1SujYDBeMA4GA1UdDwEB/wQE\nAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMCkGA1UdEQQi\nMCCCBnIzMjAtMocEfwAAAYcQAAAAAAAAAAAAAAAAAAAAATAKBggqhkjOPQQDAwNn\nADBkAjA66SMctgTccF0DOo4xMUATOEFyJ6Exfy02QnzK3LX7gpBs2KABJGGS4qnm\n+cGAfAgCMBOxagTD/XNDsnMfTeCW3XWJsNGhg3VzfXBls+Jd03BxOqcvNcwmOEFs\n85AG5IVAyA==\n-----END CERTIFICATE-----\n",
"certificate_fingerprint": "7e12950a22971a5acc3ce2808c76ed6102ca89fa7a084338aaaeb3c159f64955",
"driver": "lxc",
"driver_version": "5.0.1",
"firewall": "xtables",
"kernel": "Linux",
"kernel_architecture": "x86_64",
"kernel_features": {
"idmapped_mounts": "true",
"netnsid_getifaddrs": "true",
"seccomp_listener": "true",
"seccomp_listener_continue": "true",
"uevent_injection": "true",
"unpriv_binfmt": "false",
"unpriv_fscaps": "true"
},
"kernel_version": "6.6.32_1",
"lxc_features": {
"cgroup2": "true",
"core_scheduling": "true",
"devpts_fd": "true",
"idmapped_mounts_v2": "true",
"mount_injection_file": "true",
"network_gateway_device_route": "true",
"network_ipvlan": "true",
"network_l2proxy": "true",
"network_phys_macvlan_mtu": "true",
"network_veth_router": "true",
"pidfd": "true",
"seccomp_allow_deny_syntax": "true",
"seccomp_notify": "true",
"seccomp_proxy_send_notify_fd": "true"
},
"os_name": "Void",
"os_version": "",
"project": "default",
"server": "incus",
"server_clustered": false,
"server_event_mode": "full-mesh",
"server_name": "r320-2",
"server_pid": 1021,
"server_version": "0.6",
"storage": "dir",
"storage_version": "1",
"storage_supported_drivers": [
{
"Name": "dir",
"Version": "1",
"Remote": false
},
{
"Name": "zfs",
"Version": "2.2.4-1",
"Remote": false
},
{
"Name": "btrfs",
"Version": "6.5.1",
"Remote": false
}
]
}
}
DEBUG [2024-06-06T10:54:27-04:00] Sending request to Incus etag= method=GET url="http://unix.socket/1.0/instances/elasticsearch-container"
DEBUG [2024-06-06T10:54:27-04:00] Got response struct from Incus
DEBUG [2024-06-06T10:54:27-04:00]
{
"architecture": "x86_64",
"config": {
"boot.autostart": "true",
"image.architecture": "amd64",
"image.description": "Debian bookworm amd64 (20240228_05:24)",
"image.os": "Debian",
"image.release": "bookworm",
"image.serial": "20240228_05:24",
"image.type": "squashfs",
"image.variant": "default",
"limits.kernel.memlock": "9223372036854775807",
"limits.kernel.nofile": "65535",
"volatile.base_image": "b9a12bf99efdac578271b4a3e616e8cd3dec33faa2baff7923d2d6ca79ed8993",
"volatile.cloud-init.instance-id": "c6a9f533-a1de-4f56-a66f-a62336684579",
"volatile.eth0.hwaddr": "00:16:3e:8e:df:93",
"volatile.idmap.base": "0",
"volatile.idmap.current": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":65536}]",
"volatile.idmap.next": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":65536}]",
"volatile.last_state.idmap": "[]",
"volatile.last_state.power": "STOPPED",
"volatile.last_state.ready": "false",
"volatile.uuid": "3095c9e3-3c33-4291-bf4e-1bbab4156e22",
"volatile.uuid.generation": "3095c9e3-3c33-4291-bf4e-1bbab4156e22"
},
"devices": {
"eth0": {
"ipv4.address": "10.109.174.61",
"name": "eth0",
"network": "lxdbr0",
"type": "nic"
},
"root": {
"path": "/",
"pool": "elasticsearch-pool",
"type": "disk"
}
},
"ephemeral": false,
"profiles": [
"default"
],
"stateful": false,
"description": "",
"created_at": "2024-02-29T20:07:55.819322521Z",
"expanded_config": {
"boot.autostart": "true",
"image.architecture": "amd64",
"image.description": "Debian bookworm amd64 (20240228_05:24)",
"image.os": "Debian",
"image.release": "bookworm",
"image.serial": "20240228_05:24",
"image.type": "squashfs",
"image.variant": "default",
"limits.kernel.memlock": "9223372036854775807",
"limits.kernel.nofile": "65535",
"volatile.base_image": "b9a12bf99efdac578271b4a3e616e8cd3dec33faa2baff7923d2d6ca79ed8993",
"volatile.cloud-init.instance-id": "c6a9f533-a1de-4f56-a66f-a62336684579",
"volatile.eth0.hwaddr": "00:16:3e:8e:df:93",
"volatile.idmap.base": "0",
"volatile.idmap.current": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":65536}]",
"volatile.idmap.next": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":65536}]",
"volatile.last_state.idmap": "[]",
"volatile.last_state.power": "STOPPED",
"volatile.last_state.ready": "false",
"volatile.uuid": "3095c9e3-3c33-4291-bf4e-1bbab4156e22",
"volatile.uuid.generation": "3095c9e3-3c33-4291-bf4e-1bbab4156e22"
},
"expanded_devices": {
"eth0": {
"ipv4.address": "10.109.174.61",
"name": "eth0",
"network": "lxdbr0",
"type": "nic"
},
"root": {
"path": "/",
"pool": "elasticsearch-pool",
"type": "disk"
}
},
"name": "elasticsearch-container",
"status": "Stopped",
"status_code": 102,
"last_used_at": "2024-06-06T14:54:15.556737039Z",
"location": "none",
"type": "container",
"project": "default"
}
DEBUG [2024-06-06T10:54:27-04:00] Connected to the websocket: ws://unix.socket/1.0/events
DEBUG [2024-06-06T10:54:27-04:00] Sending request to Incus etag= method=PUT url="http://unix.socket/1.0/instances/elasticsearch-container/state"
DEBUG [2024-06-06T10:54:27-04:00]
{
"action": "start",
"timeout": 0,
"force": false,
"stateful": false
}
DEBUG [2024-06-06T10:54:27-04:00] Got operation from Incus
DEBUG [2024-06-06T10:54:27-04:00]
{
"id": "bb8b1364-a12c-4368-a912-e9d349c8511e",
"class": "task",
"description": "Starting instance",
"created_at": "2024-06-06T10:54:27.142806963-04:00",
"updated_at": "2024-06-06T10:54:27.142806963-04:00",
"status": "Running",
"status_code": 103,
"resources": {
"instances": [
"/1.0/instances/elasticsearch-container"
]
},
"metadata": null,
"may_cancel": false,
"err": "",
"location": "none"
}
DEBUG [2024-06-06T10:54:27-04:00] Sending request to Incus etag= method=GET url="http://unix.socket/1.0/operations/bb8b1364-a12c-4368-a912-e9d349c8511e"
DEBUG [2024-06-06T10:54:27-04:00] Got response struct from Incus
DEBUG [2024-06-06T10:54:27-04:00]
{
"id": "bb8b1364-a12c-4368-a912-e9d349c8511e",
"class": "task",
"description": "Starting instance",
"created_at": "2024-06-06T10:54:27.142806963-04:00",
"updated_at": "2024-06-06T10:54:27.142806963-04:00",
"status": "Running",
"status_code": 103,
"resources": {
"instances": [
"/1.0/instances/elasticsearch-container"
]
},
"metadata": null,
"may_cancel": false,
"err": "",
"location": "none"
}
incus console elasticsearch-container --show-log may be useful.
I suspect the container instantly dying on startup is actually the root cause of your problems, as that would cause forkproxy (proxy devices) to attempt to connect to a container that died a few milliseconds beforehand, resulting in that error.
[brandon@blackhole ~]$ incus console elasticsearch-container --show-log
Console log:
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems.
Exiting PID 1...
Weird, it's trying to look for a systemd mount... Void does not use systemd.
Cheers though... seems like we are getting somewhere on this, though this error message seems oddly familiar: https://github.com/lxc/lxc/issues/4072
I am wondering, do you think it's a cgroups issue with a difference between the host and container cgroups? I do see some solutions, but they seem to be for systemd-related systems.
Yeah, void not using systemd is likely to be the issue because that means that the required systemd cgroup wouldn't exist.
Can you show:
grep cgroup /proc/self/mounts
cat /proc/self/cgroup
There are different ways around this one but it depends on what void may already have set up.
[brandon@blackhole ~]$ grep cgroup /proc/self/mounts
cgroup /sys/fs/cgroup tmpfs rw,relatime,mode=755,inode64 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu cgroup rw,relatime,cpu 0 0
cgroup /sys/fs/cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,relatime,blkio 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /sys/fs/cgroup/net_cls cgroup rw,relatime,net_cls 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/net_prio cgroup rw,relatime,net_prio 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,relatime,pids 0 0
cgroup /sys/fs/cgroup/rdma cgroup rw,relatime,rdma 0 0
cgroup /sys/fs/cgroup/misc cgroup rw,relatime,misc 0 0
cgroup2 /sys/fs/cgroup/unified cgroup2 rw,relatime,nsdelegate 0 0
[brandon@blackhole ~]$ cat /proc/self/cgroup
14:misc:/
13:rdma:/
12:pids:/
11:hugetlb:/
10:net_prio:/
9:perf_event:/
8:net_cls:/
7:freezer:/
6:devices:/
5:memory:/
4:blkio:/
3:cpuacct:/
2:cpu:/
1:cpuset:/
0::/
Ah, so hybrid v1 and v2, that's getting pretty unusual these days... Try:
mkdir /sys/fs/cgroup/systemd
mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup/systemd
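(Not from the thread: if that mount fixes it, a hedged way to make the workaround persist across reboots on Void, assuming /etc/rc.local is run by runit's core-services on this setup, would be something like:)

# /etc/rc.local -- recreate the named systemd cgroup hierarchy at boot
mkdir -p /sys/fs/cgroup/systemd
mountpoint -q /sys/fs/cgroup/systemd || mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup/systemd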
CHEERS. Holy crap that was a nightmare lol.
@stgraber I will say it again, you are one of the most helpful & reactive developers I know of. Thank you bredda.
In case it helps, Void-specific stuff is usually in README.voidlinux: https://raw.githubusercontent.com/void-linux/void-packages/master/srcpkgs/incus/files/README.voidlinux. In particular, see:
Some container configurations may require that the CGROUP_MODE variable in /etc/rc.conf be set to unified.
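(A hedged illustration of that README note, showing the line as it would appear in /etc/rc.conf on Void:)

# /etc/rc.conf
CGROUP_MODE=unified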
incus info and lxc info are both working; I didn't initialize incus though (as per the documentation).