lxc / incus

Powerful system container and virtual machine manager
https://linuxcontainers.org/incus
Apache License 2.0

IPv4 address missing from the `incus list` output #1133

Closed: Piotr1215 closed this issue 2 months ago

Piotr1215 commented 2 months ago

Required information

config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_dev_incus
- migration_pre_copy
- infiniband
- dev_incus_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- dev_incus_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- images_all_projects
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- zfs_delegate
- storage_api_remote_volume_snapshot_copy
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- image_restriction_privileged
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- certificate_description
- disk_io_bus_virtio_blk
- loki_config_instance
- instance_create_start
- clustering_evacuation_stop_options
- boot_host_shutdown_action
- agent_config_drive
- network_state_ovn_lr
- image_template_permissions
- storage_bucket_backup
- storage_lvm_cluster
- shared_custom_block_volumes
- auth_tls_jwt
- oidc_claim
- device_usb_serial
- numa_cpu_balanced
- image_restriction_nesting
- network_integrations
- instance_memory_swap_bytes
- network_bridge_external_create
- network_zones_all_projects
- storage_zfs_vdev
- container_migration_stateful
- profiles_all_projects
- instances_scriptlet_get_instances
- instances_scriptlet_get_cluster_members
- instances_scriptlet_get_project
- network_acl_stateless
- instance_state_started_at
- networks_all_projects
- network_acls_all_projects
- storage_buckets_all_projects
- resources_load
- instance_access
- project_access
- projects_force_delete
- resources_cpu_flags
- disk_io_bus_cache_filesystem
- instance_oci
- clustering_groups_config
- instances_lxcfs_per_instance
- clustering_groups_vm_cpu_definition
- disk_volume_subpath
- projects_limits_disk_pool
- network_ovn_isolated
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: decoder
auth_user_method: unix
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIB/DCCAYOgAwIBAgIQQ6AdfjNMsrBrEV4jS/OsMTAKBggqhkjOPQQDAzAxMRkw
    FwYDVQQKExBMaW51eCBDb250YWluZXJzMRQwEgYDVQQDDAtyb290QHBvcC1vczAe
    Fw0yNDA2MDQwODE1MjFaFw0zNDA2MDIwODE1MjFaMDExGTAXBgNVBAoTEExpbnV4
    IENvbnRhaW5lcnMxFDASBgNVBAMMC3Jvb3RAcG9wLW9zMHYwEAYHKoZIzj0CAQYF
    K4EEACIDYgAEQTXmCRwFyc1z13Y2EonmDz0z2qXSLBx1TOFsY+c+Rkb9NZ4+0Dk6
    KBuxwZ8biZ8+UbGFg1/aKh32pVvGPd+MU5Q3G3tHuNxPJyPAl2tOeC8nCcATY4FA
    DnHcUrkarCofo2AwXjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUH
    AwEwDAYDVR0TAQH/BAIwADApBgNVHREEIjAgggZwb3Atb3OHBH8AAAGHEAAAAAAA
    AAAAAAAAAAAAAAEwCgYIKoZIzj0EAwMDZwAwZAIwYXwCtAjExyRIIKmS7xPFmaoj
    HSrLZArxETBzVSpCnQ7FSTrprNoE2UdrVo2yGg2LAjB1KD4pbXUBA+juWnlkMPJE
    j5VvkpUEfGySSpTXgDDANOw1tz75Cw/LBCsGvvMZ10o=
    -----END CERTIFICATE-----
  certificate_fingerprint: 121aeb40215ea606bcd922944e4178ef2db9dff6420bdb8edb2b1967d95d6aa2
  driver: lxc | qemu
  driver_version: 6.0.1 | 9.0.2
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_binfmt: "true"
    unpriv_fscaps: "true"
  kernel_version: 6.9.3-76060903-generic
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Pop!_OS
  os_version: "22.04"
  project: default
  server: incus
  server_clustered: false
  server_event_mode: full-mesh
  server_name: pop-os
  server_pid: 2427
  server_version: "6.4"
  storage: dir
  storage_version: "1"
  storage_supported_drivers:
  - name: btrfs
    version: 5.16.2
    remote: false
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.11(2) (2021-01-08) / 1.02.175 (2021-01-08) / 4.48.0
    remote: false
  - name: lvmcluster
    version: 2.03.11(2) (2021-01-08) / 1.02.175 (2021-01-08) / 4.48.0
    remote: true

Issue description

Running an OCI container doesn't show an IPv4 address in the `incus list` output. The IPv4 address is otherwise assigned inside the container:

➜ incus exec green -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
40: eth0@if41: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 00:16:3e:7b:f0:90 brd ff:ff:ff:ff:ff:ff
    inet 10.206.212.212/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd42:2624:e2f1:4a4a:216:3eff:fe7b:f090/64 scope global dynamic flags 100
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe7b:f090/64 scope link
       valid_lft forever preferred_lft forever

Steps to reproduce

  1. Add Docker as an OCI remote: `incus remote add docker https://docker.io --protocol=oci`
  2. Create a sample container: `incus launch docker:piotrzan/nginx-demo:green green`
  3. Add it to a network: `incus config device add green eth0 nic network=incusbr0`
  4. Expected result: the output of `incus list` contains the IPv4 address
  5. Actual result of `incus list` (a consolidated script follows the table below):
+--------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
|  NAME  |  STATE  |          IPV4           |                     IPV6                      |      TYPE       | SNAPSHOTS |
+--------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
| green  | RUNNING |                         | fd42:2624:e2f1:4a4a:216:3eff:fe7b:f090 (eth0) | CONTAINER (APP) | 0         |
+--------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
| ubuntu | RUNNING | 172.17.0.1 (docker0)    |                                               | VIRTUAL-MACHINE | 5         |
|        |         | 10.206.212.178 (enp5s0) |                                               |                 |           |
+--------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
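For convenience, the numbered steps above as one script (commands taken verbatim from the list; `incus list green` just filters the listing to that instance):

```bash
# Reproduction, consolidated from the steps above
incus remote add docker https://docker.io --protocol=oci
incus launch docker:piotrzan/nginx-demo:green green
incus config device add green eth0 nic network=incusbr0
incus list green    # the IPV4 column stays empty
```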

Information to attach

Resources:
  Processes: 6
  CPU usage:
    CPU usage (in seconds): 0
  Memory usage:
    Memory (current): 4.44MiB
  Network usage:
    eth0:
      Type: broadcast
      State: UP
      Host interface: veth26309907
      MAC address: 00:16:3e:7b:f0:90
      MTU: 1500
      Bytes received: 2.76kB
      Bytes sent: 2.10kB
      Packets received: 23
      Packets sent: 20
      IP addresses:
        inet6: fd42:2624:e2f1:4a4a:216:3eff:fe7b:f090/64 (global)
        inet6: fe80::216:3eff:fe7b:f090/64 (link)
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0
      IP addresses:
        inet:  127.0.0.1/8 (local)
        inet6: ::1/128 (local)

Log:

lxc green 20240817154248.694 ERROR attach - ../src/lxc/attach.c:lxc_attach_run_command:1841 - No such file or directory - Failed to exec "dhclient"
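
The failed exec suggests Incus tried to configure the interface with a DHCP client that the image doesn't ship. A quick way to confirm, assuming the image has a POSIX shell (the client names are the usual suspects, not taken from this thread):

```bash
# List which common DHCP clients, if any, exist inside the container
incus exec green -- sh -c 'for c in dhclient udhcpc dhcpcd; do command -v "$c"; done'
```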

 - [x] Container configuration (`incus config show NAME --expanded`)
```yaml
architecture: x86_64
config:
  environment.HOME: /root
  environment.NGINX_VERSION: 1.27.0
  environment.NJS_RELEASE: "2"
  environment.NJS_VERSION: 0.8.4
  environment.PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  environment.PKG_RELEASE: "2"
  environment.TERM: xterm
  image.architecture: x86_64
  image.description: docker.io/piotrzan/nginx-demo (OCI)
  image.type: oci
  volatile.base_image: 89406edf6357bc933b1d33ec13ac24cb6405c79b4e336d137382b47591e437e2
  volatile.cloud-init.instance-id: e6996572-6b8b-4a20-8257-4afe9d5021b5
  volatile.container.oci: "true"
  volatile.eth0.host_name: veth26309907
  volatile.eth0.hwaddr: 00:16:3e:7b:f0:90
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: 320243ae-2b04-4664-97ed-e1f3ca29fa42
  volatile.uuid.generation: 320243ae-2b04-4664-97ed-e1f3ca29fa42
devices:
  eth0:
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    size: 24GB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
```
stgraber commented 2 months ago

That's normal. This adds a network interface, but OCI containers, unlike system containers, do not have a network management daemon that can react to a new network interface and configure it.

Piotr1215 commented 2 months ago

I thought that it actually works; it worked in a video here:

https://youtu.be/HiJlS7QHrYI?t=658


Maybe this is due to some additional configuration.

stgraber commented 2 months ago

It didn't work above because you did "launch, add, list", so the network interface was hot-plugged into a running instance and didn't get configured. If you launch with the network interface already configured, it will work fine.

Similarly, just running `incus restart` on your other instance would have fixed it.
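
In other words, the NIC has to exist when the instance starts so Incus itself can configure it. A minimal sketch of both remedies, reusing the names from this thread:

```bash
# Attach the network at creation time instead of hot-plugging it later
incus launch docker:piotrzan/nginx-demo:green green --network incusbr0

# Or restart an instance whose NIC was hot-plugged while it was running
incus restart green
```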

Piotr1215 commented 2 months ago

Thank you for taking the time to help me. I must be doing something wrong, because no matter how I approach it, neither system containers nor OCI containers get an IPv4 address shown, only IPv6. Inside the containers, the IPv4 is assigned correctly to both the system container and the OCI app:

➜ incus exec dev-container -- ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:21:b5:6b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.206.212.100/24 metric 1024 brd 10.206.212.255 scope global dynamic eth0
       valid_lft 3183sec preferred_lft 3183sec
    inet6 fd42:2624:e2f1:4a4a:216:3eff:fe21:b56b/64 scope global mngtmpaddr noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe21:b56b/64 scope link
       valid_lft forever preferred_lft forever
➜ incus exec green -- ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 00:16:3e:ef:9b:e2 brd ff:ff:ff:ff:ff:ff
    inet 10.206.212.174/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd42:2624:e2f1:4a4a:216:3eff:feef:9be2/64 scope global dynamic flags 100
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:feef:9be2/64 scope link
       valid_lft forever preferred_lft forever

Only the VM gets IPv4 addresses shown, and only when I attach a console to it.

I've stumbled upon an issue where a specific kernel version affected the display of IPv4 addresses; maybe that is what is happening here?

➜ incus list
+---------------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
|     NAME      |  STATE  |          IPV4           |                     IPV6                      |      TYPE       | SNAPSHOTS |
+---------------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
| dev-container | RUNNING |                         | fd42:2624:e2f1:4a4a:216:3eff:fe21:b56b (eth0) | CONTAINER       | 0         |
+---------------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
| green         | RUNNING |                         | fd42:2624:e2f1:4a4a:216:3eff:feef:9be2 (eth0) | CONTAINER (APP) | 0         |
+---------------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
| ubuntu        | RUNNING | 172.17.0.1 (docker0)    |                                               | VIRTUAL-MACHINE | 5         |
|               |         | 10.206.212.178 (enp5s0) |                                               |                 |           |
+---------------+---------+-------------------------+-----------------------------------------------+-----------------+-----------+
stgraber commented 2 months ago

When you only get IPv6, the culprit is almost always a firewall, whether that's your distribution's use of firewalld/ufw or Docker on your system blocking every other platform from accessing the network.

https://linuxcontainers.org/incus/docs/main/howto/network_bridge_firewalld/
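
For reference, the kind of ufw adjustment that page describes, assuming the bridge is named incusbr0 (a sketch; check the howto for the exact rules):

```bash
# Allow instance traffic in on the Incus bridge and routed through it
sudo ufw allow in on incusbr0
sudo ufw route allow in on incusbr0
sudo ufw route allow out on incusbr0
```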

Piotr1215 commented 2 months ago

Thank you! After modifying the ufw rules, the IPv4 address gets assigned to the virtual machine on start and shows up in the output of `incus list`.

However, the system container and the OCI container still only have IPv6 addresses, not IPv4.

┌Every───┐┌Command──────────────────────────────────────────────────────────────────────────────────────────────────────────┐┌Time───────────────┐
│2s      ││incus list                                                                                                       ││2024-08-18 17:43:50│
└────────┘└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘└───────────────────┘
+---------------+---------+------------------------------+-------------------------------------------------+-----------------+-----------+
|     NAME      |  STATE  |             IPV4             |                      IPV6                       |      TYPE       | SNAPSHOTS |
+---------------+---------+------------------------------+-------------------------------------------------+-----------------+-----------+
| dev-container | RUNNING |                              | fd42:2624:e2f1:4a4a:216:3eff:fe21:b56b (eth0)   | CONTAINER       | 0         |
+---------------+---------+------------------------------+-------------------------------------------------+-----------------+-----------+
| green         | RUNNING |                              | fd42:2624:e2f1:4a4a:216:3eff:fe4b:2bcf (eth0)   | CONTAINER (APP) | 0         |
+---------------+---------+------------------------------+-------------------------------------------------+-----------------+-----------+
| ubuntu        | RUNNING | 172.17.0.1 (docker0)         |                                                 | VIRTUAL-MACHINE | 6         |
|               |         | 100.119.118.117 (tailscale0) |                                                 |                 |           |
|               |         | 10.206.212.178 (enp5s0)      |                                                 |                 |           |
+---------------+---------+------------------------------+-------------------------------------------------+-----------------+-----------+
| ubuntu-vm     | RUNNING | 10.206.212.195 (enp5s0)      | fd42:2624:e2f1:4a4a:216:3eff:fe9d:ecda (enp5s0) | VIRTUAL-MACHINE | 0         |
+---------------+---------+------------------------------+-------------------------------------------------+-----------------+-----------+

I also enabled this setting:

echo "net.ipv4.conf.all.forwarding=1" > /etc/sysctl.d/99-forwarding.conf
systemctl restart systemd-sysctl
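
A quick check that the setting actually took effect:

```bash
# Should print "net.ipv4.conf.all.forwarding = 1" after the restart
sysctl net.ipv4.conf.all.forwarding
```
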
stgraber commented 2 months ago

Can you show `incus config show --expanded` on one of the containers?

Piotr1215 commented 2 months ago

Here is the config for the system container:

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu mantic amd64 (20240817_07:42)
  image.os: Ubuntu
  image.release: mantic
  image.serial: "20240817_07:42"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 1625ca8294c9b96e4cca6abf03c1b0fae99cabd0c8c980c1823a5928884d674d
  volatile.cloud-init.instance-id: 659550e4-b5c4-4be9-a3a8-212500f040bb
  volatile.eth0.host_name: veth40189b57
  volatile.eth0.hwaddr: 00:16:3e:21:b5:6b
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: 89ac6010-87fa-4648-bb0b-d8691ebda004
  volatile.uuid.generation: 89ac6010-87fa-4648-bb0b-d8691ebda004
devices:
  eth0:
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    size: 24GB
    type: disk
  shared-folder:
    path: /mnt/dev
    source: /home/decoder/dev
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

and here is the one for the app container:

architecture: x86_64
config:
  environment.DYNPKG_RELEASE: "2"
  environment.HOME: /root
  environment.NGINX_VERSION: 1.27.1
  environment.NJS_RELEASE: "1"
  environment.NJS_VERSION: 0.8.5
  environment.PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  environment.PKG_RELEASE: "1"
  environment.TERM: xterm
  image.architecture: x86_64
  image.description: docker.io/piotrzan/nginx-demo (OCI)
  image.type: oci
  volatile.base_image: ebba1e13c005ad395dc9b498ffca03210ccd540aec28f2c2dac979c7b13c6459
  volatile.cloud-init.instance-id: d8fbe5f7-a67b-437b-9946-1668f3cf6837
  volatile.container.oci: "true"
  volatile.eth0.host_name: veth0a5342a6
  volatile.eth0.hwaddr: 00:16:3e:cb:c5:49
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 1258e7b1-06f8-4b3c-ae80-d74640e1c55a
  volatile.uuid.generation: 1258e7b1-06f8-4b3c-ae80-d74640e1c55a
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    size: 24GB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Every time I boot up I have to `sudo umount /sys/fs/cgroup/net_cls` because of Mullvad; see this issue.

stgraber commented 2 months ago

@Piotr1215 please show:

On the host system.

There's nothing wrong-looking in the container configuration, so it's most likely still some kind of firewalling getting in the way and applying to the container interfaces somehow.

Also, maybe run `networkctl` and `systemctl --failed` inside the system container to see if there's anything weird going on there.

Piotr1215 commented 2 months ago

@stgraber thank you for the pointers, I'm sure I have misconfigured something. Here are the command results:

and the results of the commands from inside the container:

$ networkctl
IDX LINK TYPE     OPERATIONAL SETUP
  1 lo   loopback carrier     unmanaged
 20 eth0 ether    routable    configured

2 links listed.

$ systemctl --failed
  UNIT LOAD ACTIVE SUB DESCRIPTION
0 loaded units listed.
stgraber commented 2 months ago

You have Docker stuff in there, which is very much known for causing this kind of issue and is mentioned in our documentation for that very reason.

https://linuxcontainers.org/incus/docs/main/howto/network_bridge_firewalld/#prevent-connectivity-issues-with-incus-and-docker
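
Beyond the forwarding sysctl, the usual Docker-specific workaround is to accept bridge traffic in Docker's DOCKER-USER chain, which Docker consults before its own FORWARD rules (a sketch, not from this thread; adjust the interface name to your bridge):

```bash
# Exempt the Incus bridge from Docker's FORWARD filtering
sudo iptables -I DOCKER-USER -i incusbr0 -j ACCEPT
sudo iptables -I DOCKER-USER -o incusbr0 -j ACCEPT
```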

Piotr1215 commented 2 months ago

I already have those settings enabled:

➜ cat /etc/sysctl.d/99-forwarding.conf
───────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
       │ File: /etc/sysctl.d/99-forwarding.conf
───────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
   1   │ net.ipv4.conf.all.forwarding=1
───────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

as well as port forwarding, etc., but this doesn't solve the issue. I cannot uninstall Docker.

However, at least this narrows the problem down, and it's good to know it's not a bug. I can always grab the IP with `incus exec dev-container -- hostname -I | awk '{print $1}'`.
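
Wrapped as a small shell helper for reuse (the `incus-ip` name is made up for illustration):

```bash
# Print the first IPv4 address an instance reports for itself
incus-ip() {
  incus exec "$1" -- hostname -I | awk '{print $1}'
}

incus-ip dev-container
```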