Closed Piotr1215 closed 2 months ago
That's normal. This adds a network interface, but OCI containers, unlike system containers, don't have a network management daemon that can react to a new network interface and configure it.
I thought that it actually works, it worked on a video here:
https://youtu.be/HiJlS7QHrYI?t=658
Maybe this is due to some additional configuration.
It didn't work above because you did "launch, add, list", so the network interface was hot-plugged into a running instance and therefore didn't get configured. If you launch with the network interface already configured, it will work fine.
Similarly, just running incus restart on your other instance would have fixed it.
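To make that concrete, here's a sketch of the two approaches, using the instance and bridge names from this thread (not a verbatim reproduction of the original commands). It's guarded so it's a no-op on a machine without incus installed:

```shell
# Sketch only: "green" and "incusbr0" are the names used in this thread.
if command -v incus >/dev/null 2>&1; then
    # NIC configured at launch time: the guest sees eth0 at boot
    # and its init/DHCP client configures it normally.
    incus launch docker:piotrzan/nginx-demo:green green --network incusbr0

    # If the NIC was instead hot-plugged into a running instance
    # (via incus config device add), a restart lets the guest pick it up:
    incus restart green
    status="ran"
else
    status="skipped"
    echo "incus not installed; commands shown for illustration only"
fi
```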
Thank you for taking the time to help me. I must be doing something wrong: no matter how I approach it, neither system containers nor OCI containers show an IPv4 address in incus list, only IPv6. Inside the instances, however, IPv4 is assigned correctly to both the system container and the OCI app:
➜ incus exec dev-container -- ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:21:b5:6b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.206.212.100/24 metric 1024 brd 10.206.212.255 scope global dynamic eth0
valid_lft 3183sec preferred_lft 3183sec
inet6 fd42:2624:e2f1:4a4a:216:3eff:fe21:b56b/64 scope global mngtmpaddr noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:fe21:b56b/64 scope link
valid_lft forever preferred_lft forever
➜ incus exec green -- ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 00:16:3e:ef:9b:e2 brd ff:ff:ff:ff:ff:ff
inet 10.206.212.174/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fd42:2624:e2f1:4a4a:216:3eff:feef:9be2/64 scope global dynamic flags 100
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:feef:9be2/64 scope link
valid_lft forever preferred_lft forever
Only the VM has IPv4 addresses shown, and only when I attach a console to it.
I've stumbled upon an issue where a specific kernel version affected how IPv4 addresses are displayed; maybe that's what is happening here?
➜ incus list
+---------------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
| dev-container | RUNNING | | fd42:2624:e2f1:4a4a:216:3eff:fe21:b56b (eth0) | CONTAINER | 0 |
+---------------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
| green | RUNNING | | fd42:2624:e2f1:4a4a:216:3eff:feef:9be2 (eth0) | CONTAINER (APP) | 0 |
+---------------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
| ubuntu | RUNNING | 172.17.0.1 (docker0) | | VIRTUAL-MACHINE | 5 |
| | | 10.206.212.178 (enp5s0) | | | |
+---------------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
When only IPv6 shows up, the culprit is almost always a firewall, whether it's your distribution's use of firewalld/ufw or Docker on your system blocking every other platform from accessing the network.
https://linuxcontainers.org/incus/docs/main/howto/network_bridge_firewalld/
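The linked how-to covers ufw rules for the bridge; a sketch of those rules follows, assuming the bridge is named incusbr0 as in this thread and that the commands run as root. It's guarded so it's a no-op where ufw isn't installed:

```shell
# Allow traffic on the Incus bridge under ufw (bridge name from this thread).
if command -v ufw >/dev/null 2>&1; then
    ufw allow in on incusbr0          # DHCP/DNS requests from instances to the host
    ufw route allow in on incusbr0    # forwarded traffic coming from instances
    ufw route allow out on incusbr0   # forwarded traffic going to instances
    status="applied"
else
    status="skipped"
    echo "ufw not installed; rules shown for illustration only"
fi
```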
Thank you! After modifying the ufw rules, the IPv4 gets assigned to the virtual machine on start and shows in the output of incus list.
However, the system container and the OCI container still only get IPv6 addresses, not IPv4.
┌Every───┐┌Command──────────────────────────────────────────────────────────────────────────────────────────────────────────┐┌Time───────────────┐
│2s ││incus list ││2024-08-18 17:43:50│
└────────┘└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘└───────────────────┘
+---------------+---------+------------------------------+-------------------------------------------------+-----------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------------+---------+------------------------------+-------------------------------------------------+-----------------+-----------+
| dev-container | RUNNING | | fd42:2624:e2f1:4a4a:216:3eff:fe21:b56b (eth0) | CONTAINER | 0 |
+---------------+---------+------------------------------+-------------------------------------------------+-----------------+-----------+
| green | RUNNING | | fd42:2624:e2f1:4a4a:216:3eff:fe4b:2bcf (eth0) | CONTAINER (APP) | 0 |
+---------------+---------+------------------------------+-------------------------------------------------+-----------------+-----------+
| ubuntu | RUNNING | 172.17.0.1 (docker0) | | VIRTUAL-MACHINE | 6 |
| | | 100.119.118.117 (tailscale0) | | | |
| | | 10.206.212.178 (enp5s0) | | | |
+---------------+---------+------------------------------+-------------------------------------------------+-----------------+-----------+
| ubuntu-vm | RUNNING | 10.206.212.195 (enp5s0) | fd42:2624:e2f1:4a4a:216:3eff:fe9d:ecda (enp5s0) | VIRTUAL-MACHINE | 0 |
+---------------+---------+------------------------------+-------------------------------------------------+-----------------+-----------+
I also enabled this setting:
echo "net.ipv4.conf.all.forwarding=1" > /etc/sysctl.d/99-forwarding.conf
systemctl restart systemd-sysctl
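A quick way to confirm the setting actually took effect (assuming a Linux host) is to read the live kernel value back from /proc rather than trusting the config file:

```shell
# Reads the kernel's current IPv4 forwarding state; prints 1 when enabled.
cat /proc/sys/net/ipv4/conf/all/forwarding
```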
Can you show incus config show --expanded on one of the containers?
Here is config for the system container:
architecture: x86_64
config:
image.architecture: amd64
image.description: Ubuntu mantic amd64 (20240817_07:42)
image.os: Ubuntu
image.release: mantic
image.serial: "20240817_07:42"
image.type: squashfs
image.variant: default
volatile.base_image: 1625ca8294c9b96e4cca6abf03c1b0fae99cabd0c8c980c1823a5928884d674d
volatile.cloud-init.instance-id: 659550e4-b5c4-4be9-a3a8-212500f040bb
volatile.eth0.host_name: veth40189b57
volatile.eth0.hwaddr: 00:16:3e:21:b5:6b
volatile.eth0.name: eth0
volatile.idmap.base: "0"
volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.last_state.idmap: '[]'
volatile.last_state.power: RUNNING
volatile.last_state.ready: "false"
volatile.uuid: 89ac6010-87fa-4648-bb0b-d8691ebda004
volatile.uuid.generation: 89ac6010-87fa-4648-bb0b-d8691ebda004
devices:
eth0:
network: incusbr0
type: nic
root:
path: /
pool: default
size: 24GB
type: disk
shared-folder:
path: /mnt/dev
source: /home/decoder/dev
type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
and here for the app container
architecture: x86_64
config:
environment.DYNPKG_RELEASE: "2"
environment.HOME: /root
environment.NGINX_VERSION: 1.27.1
environment.NJS_RELEASE: "1"
environment.NJS_VERSION: 0.8.5
environment.PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
environment.PKG_RELEASE: "1"
environment.TERM: xterm
image.architecture: x86_64
image.description: docker.io/piotrzan/nginx-demo (OCI)
image.type: oci
volatile.base_image: ebba1e13c005ad395dc9b498ffca03210ccd540aec28f2c2dac979c7b13c6459
volatile.cloud-init.instance-id: d8fbe5f7-a67b-437b-9946-1668f3cf6837
volatile.container.oci: "true"
volatile.eth0.host_name: veth0a5342a6
volatile.eth0.hwaddr: 00:16:3e:cb:c5:49
volatile.idmap.base: "0"
volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.last_state.idmap: '[]'
volatile.last_state.power: RUNNING
volatile.uuid: 1258e7b1-06f8-4b3c-ae80-d74640e1c55a
volatile.uuid.generation: 1258e7b1-06f8-4b3c-ae80-d74640e1c55a
devices:
eth0:
name: eth0
network: incusbr0
type: nic
root:
path: /
pool: default
size: 24GB
type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
Every time I boot up I have to run sudo umount /sys/fs/cgroup/net_cls because of Mullvad, see this issue.
@Piotr1215 please show:
On the host system.
There's nothing wrong looking in the container configuration, so it's most likely still some kind of firewalling getting in the way and applying to the container interfaces somehow.
Also, maybe run networkctl
and systemctl --failed
inside the system container to see if there's anything weird going on there.
@stgraber thank you for the pointers, I'm sure I have misconfigured something. Here are the command results:
and the commands from inside the container:
$ networkctl
IDX LINK TYPE OPERATIONAL SETUP
1 lo loopback carrier unmanaged
20 eth0 ether routable configured
2 links listed.
$ systemctl --failed
UNIT LOAD ACTIVE SUB DESCRIPTION
0 loaded units listed.
You have Docker stuff in there, which is well known for causing this kind of issue and is mentioned in our documentation for that very reason.
I have those settings already on:
➜ cat /etc/sysctl.d/99-forwarding.conf
net.ipv4.conf.all.forwarding=1
as well as port forwarding etc., but this doesn't solve the issue. I cannot uninstall docker.
However, at least this narrows the problem down, and it's good to know it's not a bug. I can always grab the IP with incus exec dev-container -- hostname -I | awk '{print $1}'
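For reference, the awk in that one-liner just takes the first whitespace-separated address printed by hostname -I. A self-contained sketch over mocked output (the real command needs a running instance; the addresses are the ones from the ip a output earlier in this thread):

```shell
# Mocked output of: incus exec dev-container -- hostname -I
out="10.206.212.100 fd42:2624:e2f1:4a4a:216:3eff:fe21:b56b"

# awk '{print $1}' picks the first field, i.e. the IPv4 address.
ip4=$(printf '%s\n' "$out" | awk '{print $1}')
echo "$ip4"   # -> 10.206.212.100
```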
Required information
Issue description
Running an OCI container doesn't show an IPv4 address in the incus list output. The IPv4 is otherwise generated.
Steps to reproduce
incus remote add docker https://docker.io --protocol=oci
incus launch docker:piotrzan/nginx-demo:green green
incus config device add green eth0 nic network=incusbr0
incus list
The output of incus list should contain the IPv4 address.
Information to attach
Any relevant kernel output (dmesg)
Container log (incus info NAME --show-log):
Resources:
  Processes: 6
  CPU usage:
    CPU usage (in seconds): 0
  Memory usage:
    Memory (current): 4.44MiB
  Network usage:
    eth0:
      Type: broadcast
      State: UP
      Host interface: veth26309907
      MAC address: 00:16:3e:7b:f0:90
      MTU: 1500
      Bytes received: 2.76kB
      Bytes sent: 2.10kB
      Packets received: 23
      Packets sent: 20
      IP addresses:
        inet6: fd42:2624:e2f1:4a4a:216:3eff:fe7b:f090/64 (global)
        inet6: fe80::216:3eff:fe7b:f090/64 (link)
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0
      IP addresses:
        inet: 127.0.0.1/8 (local)
        inet6: ::1/128 (local)
Log:
lxc green 20240817154248.694 ERROR attach - ../src/lxc/attach.c:lxc_attach_run_command:1841 - No such file or directory - Failed to exec "dhclient"
Output of incus monitor --pretty (while reproducing the issue)