Closed: ddanon closed this issue 7 months ago
Adding this in a comment so the length doesn't get in the way of the post.
incus info
config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_dev_incus
- migration_pre_copy
- infiniband
- dev_incus_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- dev_incus_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- zfs_delegate
- storage_api_remote_volume_snapshot_copy
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- image_restriction_privileged
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- certificate_description
- disk_io_bus_virtio_blk
- loki_config_instance
- instance_create_start
- clustering_evacuation_stop_options
- boot_host_shutdown_action
- agent_config_drive
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: luc
auth_user_method: unix
environment:
addresses: []
architectures:
- x86_64
- i686
certificate: |
-----BEGIN CERTIFICATE-----
[redacted]
-----END CERTIFICATE-----
certificate_fingerprint: [redacted]
driver: qemu | lxc
driver_version: 8.2.1 | 5.0.3
firewall: nftables
kernel: Linux
kernel_architecture: x86_64
kernel_features:
idmapped_mounts: "true"
netnsid_getifaddrs: "true"
seccomp_listener: "true"
seccomp_listener_continue: "true"
uevent_injection: "true"
unpriv_fscaps: "true"
kernel_version: 6.7.5
lxc_features:
cgroup2: "true"
core_scheduling: "true"
devpts_fd: "true"
idmapped_mounts_v2: "true"
mount_injection_file: "true"
network_gateway_device_route: "true"
network_ipvlan: "true"
network_l2proxy: "true"
network_phys_macvlan_mtu: "true"
network_veth_router: "true"
pidfd: "true"
seccomp_allow_deny_syntax: "true"
seccomp_notify: "true"
seccomp_proxy_send_notify_fd: "true"
os_name: NixOS
os_version: "24.05"
project: default
server: incus
server_clustered: false
server_event_mode: full-mesh
server_name: processinator-nix
server_pid: 2342
server_version: 0.5.1
storage: btrfs
storage_version: 6.7.1
storage_supported_drivers:
- name: lvm
version: 2.03.23(2) (2023-11-21) / 1.02.197 (2023-11-21) / 4.48.0
remote: false
- name: btrfs
version: 6.7.1
remote: false
- name: dir
version: "1"
remote: false
Are you using Docker on the same host system?
That's the most common cause of what you're describing, as Docker blocks all Incus traffic in its firewall rules...
If not, then maybe check if something else put incompatible iptables or nft rules in place.
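If Docker is on the host, a commonly suggested workaround (a sketch based on the Incus documentation's notes on coexisting with Docker; the default bridge name incusbr0 is assumed, adjust to your setup) is to explicitly allow the Incus bridge in Docker's DOCKER-USER chain rather than removing Docker:

```shell
# Allow traffic from the Incus bridge through Docker's user chain,
# and allow return traffic back to it. Requires root.
iptables -I DOCKER-USER -i incusbr0 -j ACCEPT
iptables -I DOCKER-USER -o incusbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

Docker recreates its own chains on restart but leaves DOCKER-USER contents alone, which is why that chain is the sanctioned place for such rules.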
Oh wow!
I was actually affected by this and was asking for help on the Nix Discourse, since my initial guess was that this was mostly Nix-related. Thank you to the person who reported this; really amazing timing!
I tried disabling all my firewalls after reading your suggestion, but that did not help.
{ config, lib, pkgs, modulesPath, ... }:
{
# Configure network proxy if necessary
# networking.proxy.default = "http://user:password@proxy:port/";
# networking.proxy.noProxy = "127.0.0.1,localhost,internal.domain";
networking = {
firewall = {
# enable = true;
# allowedTCPPorts = [];
# allowedUDPPorts = [];
};
# Define your hostname.
hostName = "ymodt";
# Enable networking
networkmanager.enable = true;
# wireless.enable = true; # Enables wireless support via wpa_supplicant.
# https://github.com/NixOS/nixpkgs/issues/290427
# Configure the bridged interface (e.g., br0)
# interfaces.br0.useDHCP = lib.mkDefault true;
# bridges.br0.interfaces = [ "enp5s0" ]; # Adjust interface accordingly
};
}
I am indeed using Docker on this host. Here are my iptables after I made the change above and did a rebuild.
$ iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-N nixos-fw
-N nixos-fw-accept
-N nixos-fw-log-refuse
-N nixos-fw-refuse
-A INPUT -j nixos-fw
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A nixos-fw -i lo -j nixos-fw-accept
-A nixos-fw -m conntrack --ctstate RELATED,ESTABLISHED -j nixos-fw-accept
-A nixos-fw -p tcp -m tcp --dport 22 -j nixos-fw-accept
-A nixos-fw -p udp -m udp --dport 5353 -j nixos-fw-accept
-A nixos-fw -p icmp -m icmp --icmp-type 8 -j nixos-fw-accept
-A nixos-fw -j nixos-fw-log-refuse
-A nixos-fw-accept -j ACCEPT
-A nixos-fw-log-refuse -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -j LOG --log-prefix "refused connection: " --log-level 6
-A nixos-fw-log-refuse -m pkttype ! --pkt-type unicast -j nixos-fw-refuse
-A nixos-fw-log-refuse -j nixos-fw-refuse
-A nixos-fw-refuse -j DROP
$ ip6tables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N nixos-fw
-N nixos-fw-accept
-N nixos-fw-log-refuse
-N nixos-fw-refuse
-A INPUT -j nixos-fw
-A nixos-fw -i lo -j nixos-fw-accept
-A nixos-fw -m conntrack --ctstate RELATED,ESTABLISHED -j nixos-fw-accept
-A nixos-fw -p tcp -m tcp --dport 22 -j nixos-fw-accept
-A nixos-fw -p udp -m udp --dport 5353 -j nixos-fw-accept
-A nixos-fw -p ipv6-icmp -m icmp6 --icmpv6-type 137 -j DROP
-A nixos-fw -p ipv6-icmp -m icmp6 --icmpv6-type 139 -j DROP
-A nixos-fw -p ipv6-icmp -j nixos-fw-accept
-A nixos-fw -d fe80::/64 -p udp -m udp --dport 546 -j nixos-fw-accept
-A nixos-fw -j nixos-fw-log-refuse
-A nixos-fw-accept -j ACCEPT
-A nixos-fw-log-refuse -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -j LOG --log-prefix "refused connection: " --log-level 6
-A nixos-fw-log-refuse -m pkttype ! --pkt-type unicast -j nixos-fw-refuse
-A nixos-fw-log-refuse -j nixos-fw-refuse
-A nixos-fw-refuse -j DROP
Please let me know what I can do on this machine to give more context and potentially remove the incomplete tag. That is all I can think of for now.
I uninstalled Docker on this host as per the suggestion here. I do feel it's much easier to start/stop multiple Docker instances this way; of course, I still have to find out how to expose ports from apps running inside Incus/Docker to the host. Feels like inception!
However, I still do not have network connectivity inside the container, even after removing Docker.
Assuming I can resolve this, would you mind telling me how I would expose the Docker socket and the port of an app inside Docker automatically when I am starting the Incus container/VM? That would completely solve all my problems.
P.S. Thank you for making Incus! With NixOS it almost feels like magic.
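One possible approach for the port/socket question is Incus proxy devices, which forward a host-side listener into the instance. The sketch below is hedged: the instance name mycontainer, the port numbers, and the host-side socket path are all hypothetical. Devices added to an instance (or its profile) persist, so they take effect automatically every time the instance starts:

```shell
# Forward host port 8080 to port 80 of an app running inside the instance.
incus config device add mycontainer webport proxy \
  listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80

# Expose the Docker socket running inside the instance as a host-side socket.
incus config device add mycontainer dockersock proxy \
  listen=unix:/run/mycontainer-docker.sock connect=unix:/var/run/docker.sock
```

For VMs (as opposed to containers), proxy devices only support NAT mode, so the exact options may differ; check `incus config device` docs for your setup.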
iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N nixos-fw
-N nixos-fw-accept
-N nixos-fw-log-refuse
-N nixos-fw-refuse
-A INPUT -j nixos-fw
-A nixos-fw -i lo -j nixos-fw-accept
-A nixos-fw -m conntrack --ctstate RELATED,ESTABLISHED -j nixos-fw-accept
-A nixos-fw -p tcp -m tcp --dport 22 -j nixos-fw-accept
-A nixos-fw -p udp -m udp --dport 5353 -j nixos-fw-accept
-A nixos-fw -p icmp -m icmp --icmp-type 8 -j nixos-fw-accept
-A nixos-fw -j nixos-fw-log-refuse
-A nixos-fw-accept -j ACCEPT
-A nixos-fw-log-refuse -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -j LOG --log-prefix "refused connection: " --log-level 6
-A nixos-fw-log-refuse -m pkttype ! --pkt-type unicast -j nixos-fw-refuse
-A nixos-fw-log-refuse -j nixos-fw-refuse
-A nixos-fw-refuse -j DROP
Are you using Docker on the same host system?
That's the most common cause of what you're describing, as Docker blocks all Incus traffic in its firewall rules...
If not, then maybe check if something else put incompatible iptables or nft rules in place.
I never thought of that! I do not, but I have multiple other virtualization services on my machine. My virtualization.nix is below, which contains the only definitions of virtualization services on my machine.
{ config, pkgs, lib, ... }:
{
virtualisation = {
libvirtd.enable = true;
incus.enable = true;
waydroid.enable = true;
podman = {
enable = true;
dockerCompat = true; # alias docker="podman"
defaultNetwork.settings.dns_enabled = true; # necessary for container networking
};
};
}
...
networking.firewall.enable = true; # I rely on .openFirewall options to open ports, I don't do it manually
...
I'll spare the details of my full ip(6)tables, but I did set up some logging and immediately found that my pings are being dropped.
Feb 23 09:07:46 processinator-nix kernel: IPTables-Dropped: IN= OUT=incusbr0 SRC=fe80:0000:0000:0000:0216:3eff:feb0:02db DST=fe80:0000:0000:0000:0216:3eff:fe24:8155 LEN=64 TC=0 HOPLIMIT=255 FLOWLBL=0 PROTO=ICMPv6 TYPE=136 CODE=0
Feb 23 09:07:51 processinator-nix kernel: IPTables-Dropped: IN= OUT=incusbr0 SRC=fe80:0000:0000:0000:0216:3eff:feb0:02db DST=fe80:0000:0000:0000:0216:3eff:fe24:8155 LEN=72 TC=0 HOPLIMIT=255 FLOWLBL=0 PROTO=ICMPv6 TYPE=135 CODE=0
I'll update this if I find the rule that blocks this and see if I can find a solution. It's unclear to me at this time whether the best path forward is to add an option at Incus initialization or to the Nix packaging. If Nix packaging is the best path forward, I would suggest something like the option below.
virtualization.incus.podmanFirewallCompat = true; # modifies ip(6)tables to allow traffic to/from incus instances
The best path forward for continuity in nixpkgs in my mind would be to make a new linked issue over on that repo and keep the two somewhat in sync. I'm happy to contribute by going down that path, but would appreciate more feedback before I jump to that.
However, i still do not have network connectivity inside the container even after removing docker.
@ymolists -- it's possible the Docker rules didn't get flushed yet? Use with caution, but this may solve your issue.
iptables -F # flush all chains & rules
@ddanon I actually did remove Docker altogether and rebooted; I pasted my firewall rules after that. My assumption is that it's the default firewall from Nix that is causing the problem. It's been a long time since I played with iptables rules :-) .. if you can take a look, do you see anything that could be causing it?
1) Please share what you did to see the traffic being dropped; I will try to do the same thing here and see which rule is causing the drop.
2) Please tell me, in Nix (I am a newb in Nix), what command you would use to enable Incus traffic to flow. I have to figure out not only the iptables command but also the correct incantation in Nix speak. It would seriously shorten my Google search!
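For reference, one common way to see which traffic is being dropped (a hedged sketch, not necessarily how the logging quoted elsewhere in this thread was set up) is to temporarily insert LOG rules at the top of the suspect chains and then watch the kernel log:

```shell
# Temporarily log forwarded packets before any drop rule fires (requires root).
# The log prefix is arbitrary; remove these rules again after debugging.
iptables  -I FORWARD 1 -j LOG --log-prefix "IPTables-Dropped: " --log-level 4
ip6tables -I FORWARD 1 -j LOG --log-prefix "IPTables-Dropped: " --log-level 4

# Then inspect the kernel log for the prefix:
journalctl -k --grep "IPTables-Dropped"
```

Note this logs everything crossing FORWARD, not only drops; matching the IN=/OUT= interface fields against your bridge (e.g. incusbr0) narrows it down.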
FTR, iptables -F
does solve all the issues... but that just confirms the problem is the firewall rules, as you folks already pointed out! Thank you for that suggestion; at least we have confirmation.
My point was that simply removing Docker from your config and rebuilding may not have flushed all Docker-related rules from iptables. To be honest, I didn't read your iptables. You should rebuild after manually flushing iptables (testing an iptables problem against a freshly flushed rule set won't tell you much, as you've noted). This ensures that all Docker-related rules are actually gone.
Honestly, I didn't read your iptables dump :) Logging iptables rules is a Google search away; I found it at the first link before I walked away from my computer. If I still had it in front of me I would send you a link. I hope that with both of us working to debug our own configs we can log our results here and arrive at a good solution for the project.
I actually rebooted. Was that supposed to help? Also, there are now no more Docker rules in the iptables list. I still have the rules above from the default Nix firewall, which I think are the ones causing the problem?
@adamcstephens
Is the problem here that kernel ip forwarding isn't enabled?
boot.kernel.sysctl."net.ipv4.ip_forward" = 1;
If this is the problem, please let me know. I'll consider getting it added to the incus nixos module.
-→ cat /proc/sys/net/ipv4/ip_forward
1
-→ incus list
+-------+---------+------+-----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------+---------+------+-----------------------------------------------+-----------+-----------+
| nix00 | RUNNING | | fd42:b8d8:ec7c:c832:7dd4:a24:815f:3142 (eth0) | CONTAINER | 0 |
| | | | fd42:b8d8:ec7c:c832:216:3eff:fea3:bee8 (eth0) | | |
+-------+---------+------+-----------------------------------------------+-----------+-----------+
I did that and then rebooted. You can see that IP forwarding is enabled on the machine. Also (maybe a non-issue), I noticed my Incus container is not getting an IPv4 address assigned. Should I be able to use the IPv6 interface?
Regards
@adamcstephens -- thanks for the help! Unfortunately it did not solve my problem.
$ cat /proc/sys/net/ipv4/ip_forward
1
Steps I took:
incus exec nixos-stable-vm -- ping 1.1.1.1
-> ping: connect: Network is unreachable
incus exec nixos-stable-vm -- ping google.com
-> ping: google.com: Temporary failure in name resolution
incus exec fedora39-container -- ping 1.1.1.1
-> ping: connect: Network is unreachable
incus exec fedora39-container -- ping google.com
-> ping: google.com: Temporary failure in name resolution
I switched away from using a nixos-stable VM just in case there is a configuration issue in the image that might be getting me into double-firewall jeopardy here.
FTR, I am very confident this is firewall related, because I was able to get ping working inside the container without enabling IP forwarding. All I did was remove all iptables rules with iptables -F, as per the suggestion from @ddanon.
@adamcstephens
BTW, thank you so much for jumping on this.
So what I did was disable the iptables rules and enable nftables as per below. I also restarted the server, and I am able to ping inside the container now. Incidentally, I am also getting an IPv4 address in Incus now, which is weird to me. Why is the interface's addressing tied to the firewall rules?
networking = {
nftables.enable = true;
firewall = {
enable = false;
# allowedTCPPorts = [];
# allowedUDPPorts = [];
};
};
-→ incus list
+-------+---------+-----------------------+------------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------+---------+-----------------------+------------------------------------------------+-----------+-----------+
| nix00 | RUNNING | 10.252.113.106 (eth0) | fd42:b8d8:ec7c:c832:9383:b2a5:50ac:947c (eth0) | CONTAINER | 0 |
| | | | fd42:b8d8:ec7c:c832:216:3eff:fe81:b4c (eth0) | | |
+-------+---------+-----------------------+------------------------------------------------+-----------+-----------+
Did you add nftables.enable = true;?
Yes, that's all I had to do; I did not have nftables before.
I saw people mentioning that they have to add extra input rules in nftables. I am disabling the firewall for now until I can figure out the nftables incantations to allow those extra input/forward rules for Incus.
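For reference, a hedged sketch of keeping the NixOS firewall enabled while still allowing Incus traffic, rather than disabling it entirely: NixOS provides `networking.firewall.trustedInterfaces`, which accepts all traffic arriving on the listed interfaces. The default Incus bridge name incusbr0 is assumed here.

```nix
# Sketch: trust the Incus bridge instead of turning the firewall off.
networking = {
  nftables.enable = true;
  firewall = {
    enable = true;
    trustedInterfaces = [ "incusbr0" ];
  };
};
```

This covers the input side (DHCP/DNS from instances to the bridge); forwarded traffic may still need the kernel's ip_forward sysctl and any FORWARD-chain rules discussed above.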
@ddanon Want to try using nftables.enable if you aren't already?
Otherwise, how is incusbr0 getting created? I believe using parent: incusbr0 tells Incus not to manage the bridge itself.
Can you provide the following info:
ip addr show
incus network list
Indeed, incusbr0 was created by incus init, I think.
-→ incus network list
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| NAME | TYPE | MANAGED | IPV4 | IPV6 | DESCRIPTION | USED BY | STATE |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| enp5s0 | physical | NO | | | | 0 | |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| enp6s0 | physical | NO | | | | 0 | |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| incusbr0 | bridge | YES | 10.252.113.1/24 | fd42:b8d8:ec7c:c832::1/64 | | 3 | CREATED |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| wlp7s0 | physical | NO | | | | 0 | |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 04:42:1a:f1:ee:4f brd ff:ff:ff:ff:ff:ff
inet 192.168.2.10/24 brd 192.168.2.255 scope global dynamic noprefixroute enp5s0
valid_lft 171695sec preferred_lft 171695sec
inet6 fe80::7210:e505:ae22:ca8a/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: enp6s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 04:42:1a:f1:ee:4e brd ff:ff:ff:ff:ff:ff
4: wlp7s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether a6:0d:e2:00:fd:7c brd ff:ff:ff:ff:ff:ff permaddr 48:51:c5:7e:80:0b
5: incusbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:b8:9e:a9 brd ff:ff:ff:ff:ff:ff
inet 10.252.113.1/24 scope global incusbr0
valid_lft forever preferred_lft forever
inet6 fd42:b8d8:ec7c:c832::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:feb8:9ea9/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
9: veth6dc24884@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master incusbr0 state UP group default qlen 1000
link/ether 4a:15:6c:b6:dc:53 brd ff:ff:ff:ff:ff:ff link-netnsid 0
@ddanon Want to try using nftables.enable if you aren't already?
Otherwise, how is incusbr0 getting created? I believe using parent: incusbr0 tells Incus not to manage the bridge itself.
Can you provide the following info:
* `ip addr show`
* `incus network list`
Thanks for your help @adamcstephens. I did add networking.nftables.enable = true; to my config, with no immediate changes.
❯ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether c8:7f:54:d0:10:db brd ff:ff:ff:ff:ff:ff
altname enp10s0
inet 10.202.1.77/24 brd 10.202.1.255 scope global dynamic noprefixroute eno1
valid_lft 40637sec preferred_lft 40637sec
inet6 fe80::4c18:1470:7243:a343/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: enp7s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether a0:36:9f:40:76:d4 brd ff:ff:ff:ff:ff:ff
4: enp7s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether a0:36:9f:40:76:d6 brd ff:ff:ff:ff:ff:ff
5: wlp11s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 5a:b0:4c:45:7a:ec brd ff:ff:ff:ff:ff:ff permaddr c8:94:02:70:8f:0d
6: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq_codel state UNKNOWN group default qlen 500
link/none
inet 100.125.64.11/32 scope global tailscale0
valid_lft forever preferred_lft forever
inet6 fd7a:115c:a1e0::f43d:400b/128 scope global
valid_lft forever preferred_lft forever
inet6 fe80::fbaa:9c7f:fb98:d51d/64 scope link stable-privacy proto kernel_ll
valid_lft forever preferred_lft forever
7: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:99:1c:9d brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
19: podman0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 5e:40:8f:67:7a:69 brd ff:ff:ff:ff:ff:ff
inet 10.88.0.1/16 brd 10.88.255.255 scope global podman0
valid_lft forever preferred_lft forever
inet6 fe80::6c84:73ff:fe80:8847/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
20: veth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman0 state UP group default qlen 1000
link/ether ce:b6:dc:7a:29:02 brd ff:ff:ff:ff:ff:ff link-netns netns-59354b5f-3f44-c8af-cc82-bad95fbce1f8
inet6 fe80::d427:a1ff:fe77:854c/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
21: veth1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman0 state UP group default qlen 1000
link/ether 02:0b:b1:a4:e2:19 brd ff:ff:ff:ff:ff:ff link-netns netns-2cd6fa9e-972c-9171-7f6c-6d09a00f15ec
inet6 fe80::e450:5cff:fe41:461a/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
22: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:54:00:e4:a3:17 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fee4:a317/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
28: incusbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:b0:02:db brd ff:ff:ff:ff:ff:ff
inet 10.144.249.1/24 scope global incusbr0
valid_lft forever preferred_lft forever
inet6 fd42:ea3f:6012:3cd7::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:feb0:2db/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
34: tap7f1ab2bd: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master incusbr0 state UP group default qlen 1000
link/ether c6:8c:a8:b4:b8:99 brd ff:ff:ff:ff:ff:ff
36: vethe9f86ec4@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master incusbr0 state UP group default qlen 1000
link/ether c2:2e:2f:64:e5:a7 brd ff:ff:ff:ff:ff:ff link-netnsid 2
❯ incus network list
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| NAME | TYPE | MANAGED | IPV4 | IPV6 | DESCRIPTION | USED BY | STATE |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| eno1 | physical | NO | | | | 0 | |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| enp7s0f0 | physical | NO | | | | 0 | |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| enp7s0f1 | physical | NO | | | | 0 | |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| incusbr0 | bridge | YES | 10.144.249.1/24 | fd42:ea3f:6012:3cd7::1/64 | | 3 | CREATED |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| podman0 | bridge | NO | | | | 0 | |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| virbr0 | bridge | NO | | | | 0 | |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| wlp11s0 | physical | NO | | | | 0 | |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
I neglected to mention that I do create the bridge in my config. From /etc/nixos/configuration.nix:
networking.bridges = {
"br0" = {
interfaces = [ "eno1" "incusbr0" ];
};
};
@ddanon You probably want to use either nixos-managed (remove incusbr0 and set parent: br0) OR incus-managed (delete br0 and set the device/profile nic to network: incusbr0). Right now you're using both. Just one concern of mine would be multiple DHCP servers.
@ddanon did you remove the iptables rules by doing firewall.enable = false;? The only way I could get it to work was to disable all other firewalls. Maybe you can check what your iptables or nftables rules look like; in my case I only had the default firewall, so it was easy to test!
Also check which profile you are using with your Incus containers/VMs. Here is mine; it's the default one, which automatically gets applied to each container:
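A quick way to inspect what is actually installed (assuming the usual nftables/iptables tooling is available; the NixOS option name is spelled out in full here):

```shell
# Dump the full nftables ruleset (iptables-nft rules show up here too):
sudo nft list ruleset

# Or, if the legacy iptables backend is in use:
sudo iptables -L -v -n

# The NixOS firewall option referenced above is, in full:
#   networking.firewall.enable = false;
```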
❯ incus profile show default
config: {}
description: Default Incus profile
devices:
eth0:
name: eth0
network: incusbr0
type: nic
root:
path: /
pool: store00
type: disk
name: default
used_by:
- /1.0/instances/nix00
That makes perfect sense! I didn't realize I doubled it up and had that overlap. From memory I think I went down that path because of this same problem in the first place and just never undid it. For my context, is there a DHCP server within Incus? I assume so since I don't have an IPv6 DHCP server running on my network.
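(On the DHCP question: yes, an Incus-managed bridge runs its own dnsmasq instance, which provides DHCPv4 plus IPv6 router advertisements/DHCPv6 on that bridge. A sketch of how to inspect or toggle it, using the bridge name from this thread:)

```shell
# Show the bridge and its configuration keys:
incus network show incusbr0

# DHCP is controlled per address family:
incus network get incusbr0 ipv4.dhcp
incus network set incusbr0 ipv6.dhcp=false   # example toggle, not a recommendation
```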
I have changed my configuration to the following to remove the overlap:
networking.bridges = {
  "br0" = {
    interfaces = [ "eno1" ];
  };
};
For context, NIC eno1 is the physical NIC that is wired to my router. After a reboot, I tried to set the parent of incusbr0 to br0 like so:
❯ incus network set incusbr0 parent=br0
but I got this error: Error: Invalid option for network "incusbr0" option "parent"
Maybe I misunderstood? Unfortunately I'm seeing the same behavior with the new bridge configuration after the reboot.
Edit: according to the bridge network docs there is no parent option for a bridge network, so the behavior makes sense. Looking back, I was blending the two distinct approaches @adamcstephens described. I think this is what you were looking for (although unfortunately it yields a similarly unsatisfying result):
❯ incus network attach incusbr0 f39
Error: Failed add validation for device "incusbr0": Instance DNS name "f39" conflict between "incusbr0" and "eth0" because both are connected to same network
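The conflict above is presumably because the default profile already gives f39 an eth0 NIC on incusbr0, so attaching a second device to the same network trips the duplicate DNS name check. A sketch of how to inspect and modify the inherited device instead of attaching a new one (f39 and eth0 are the names from this thread):

```shell
# Confirm which NIC the instance already inherits from its profiles:
incus config show f39 --expanded

# To change that NIC for this instance only, copy it into the instance
# config and edit it there, rather than attaching a second device:
incus config device override f39 eth0
incus config edit f39
```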
I'm closing this as there doesn't appear to be an actual Incus bug, rather interactions with the firewalling and networking aspects of Nix. You're still definitely welcome to keep posting your findings here and I'll still be reading new comments, as I'm sure @adamcstephens will too!
In the future, such configuration/environment questions tend to be better handled on the community forum at https://discuss.linuxcontainers.org as that has a large user community who may be able to help with their own findings and is also in general better indexed by search engines after resolution, making it easier for anyone else affected to find it.
@stgraber would you mind directing me to how/where Incus sets its iptables/nftables rules in the code please? I can read Go code and try to figure out what is happening.
internal/server/firewall
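To browse that code locally (assuming the main Incus repository on GitHub):

```shell
git clone https://github.com/lxc/incus
ls incus/internal/server/firewall/
```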
Required information
Issue description
Following the documentation, I set up my first Incus VM with NixOS Stable (images:nixos/23.11 [name] --vm). I see that three IPv6 addresses are assigned to the single NIC that is in the VM by default. Unfortunately, I am unable to reach the internet (which prevents me from changing configurations in NixOS). I'm including some of the relevant logs in this initial post, but not going too far overboard into detail because I assume it's a simple fix. Maybe all that is needed is a little clarification in the docs?
Steps to reproduce
1. Add incus.enable = true; to nixos config
2. Add the "incus-admin" group to a user (if necessary)
3. Run sudo incus admin init
4. Run incus launch images:nixos/23.11 [name] --vm
5. Run incus config set [name] security.secureboot=false
6. Run incus start [name]
7. Run ping 1.1.1.1, see error Host Unreachable
Information to attach
- dmesg
- incus info NAME --show-log
- incus config show NAME --expanded
- incus monitor --pretty (while reproducing the issue)