canonical / lxd

Powerful system container and virtual machine manager
https://canonical.com/lxd
GNU Affero General Public License v3.0

Is LXD altering the /etc/resolv.conf file? #9610

Closed. randombenj closed this issue 2 years ago

randombenj commented 2 years ago

Required information

config:
  images.auto_update_interval: "1"
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    ...
  certificate_fingerprint: b742be0f073a13465cada3a4ab3092b43a2db8900c4bb6198ede906a21601c61
  driver: lxc | qemu
  driver_version: 4.0.11 | 6.1.0
  firewall: xtables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.10.0-1051-oem
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "20.04"
  project: default
  server: lxd
  server_clustered: false
  server_name: rrouwprlc0011
  server_pid: 5773
  server_version: "4.20"
  storage: zfs
  storage_version: 2.0.2-1ubuntu5
  storage_supported_drivers:
  - name: btrfs
    version: 5.4.1
    remote: false
  - name: cephfs
    version: 15.2.14
    remote: true
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.07(2) (2019-11-30) / 1.02.167 (2019-11-30) / 4.43.0
    remote: false
  - name: zfs
    version: 2.0.2-1ubuntu5
    remote: false
  - name: ceph
    version: 15.2.14
    remote: true

Issue description

When using pulsesecure (a VPN client, which I have to use), it alters the /etc/resolv.conf file. However, running lxc delete INSTANCE also seems to alter /etc/resolv.conf. Is this intended behaviour? Here is what happens:

# standard system config
$ cat /etc/resolv.conf            
# Generated by NetworkManager
nameserver 1.1.1.1
nameserver 9.9.9.9
# connecting to pulse endpoint ...
$ cat /etc/resolv.conf
search company.com ...
nameserver X.X.X.X
nameserver Y.Y.Y.Y
$ lxc delete -f stirring-cheetah
$ cat /etc/resolv.conf          
# Generated by NetworkManager
nameserver 1.1.1.1
nameserver 9.9.9.9

Steps to reproduce

see above ^

Information to attach

tomponline commented 2 years ago

What is X.X.X.X and Y.Y.Y.Y in your example?

But no, LXD does not modify /etc/resolv.conf to my knowledge.

randombenj commented 2 years ago

@tomponline That was fast :tada:

They're just company DNS addresses and don't really matter; they get generated by pulsesecure. What does matter is that the file is different after running lxc delete INSTANCE (this apparently doesn't happen with VMs).

tomponline commented 2 years ago

Most likely pulsesecure is modifying your DNS as it detects the container's host side interface being created. This sounds like a pulsesecure config/feature issue.
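
One way to check that theory is to watch host-side link events while a container is created and deleted; a minimal sketch, assuming the iproute2 tools are on the host (the instance name is just an example):

# terminal 1: print link add/remove events as they happen
$ ip monitor link

# terminal 2: trigger the interface churn
$ lxc launch ubuntu:20.04 nic-test
$ lxc delete -f nic-test

If the resolv.conf change lines up with the veth add/remove events, the culprit is whatever reacts to interface changes (the VPN client or NetworkManager) rather than LXD itself.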

randombenj commented 2 years ago

I don't know if this is the proper way to test it:

lxc info asdf

Name: asdf
Status: RUNNING
Type: container
Architecture: x86_64
PID: 46746
Created: 2021/12/02 09:44 CET
Last Used: 2021/12/02 09:44 CET

Resources:
  Processes: 51
  Disk usage:
    root: 8.24MiB
  CPU usage:
    CPU usage (in seconds): 15
  Memory usage:
    Memory (current): 214.61MiB
    Memory (peak): 256.79MiB
  Network usage:
    eth0:
      Type: broadcast
      State: UP
      Host interface: vetha5c1c6bb
      MAC address: 00:16:3e:f4:7e:b8
      MTU: 1500
      Bytes received: 23.54kB
      Bytes sent: 11.52kB
      Packets received: 65
      Packets sent: 72
      IP addresses:
        inet:  10.243.201.30/24 (global)
        inet6: fd42:74bf:7b0f:f323:216:3eff:fef4:7eb8/64 (global)
        inet6: fe80::216:3eff:fef4:7eb8/64 (link)
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 1.44kB
      Bytes sent: 1.44kB
      Packets received: 16
      Packets sent: 16
      IP addresses:
        inet:  127.0.0.1/8 (local)
        inet6: ::1/128 (local)
sudo ip link delete vetha5c1c6bb
lxc info asdf
Name: asdf
Status: RUNNING
Type: container
Architecture: x86_64
PID: 46746
Created: 2021/12/02 09:44 CET
Last Used: 2021/12/02 09:44 CET

Resources:
  Processes: 52
  Disk usage:
    root: 8.24MiB
  CPU usage:
    CPU usage (in seconds): 15
  Memory usage:
    Memory (current): 215.76MiB
    Memory (peak): 256.79MiB
  Network usage:
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 1.44kB
      Bytes sent: 1.44kB
      Packets received: 16
      Packets sent: 16
      IP addresses:
        inet:  127.0.0.1/8 (local)
        inet6: ::1/128 (local)

This does not change /etc/resolv.conf.

edit: But your assumption that it might be a pulsesecure issue is also very likely right

tomponline commented 2 years ago

Does the VPN normally modify the DNS when it starts/connects? What process set it to 8.8.8.8?

I'm thinking that when the container starts, it creates a veth pair between the host and the container, which the system will see as a new interface being added, and perhaps that's triggering the VPN client to re-apply its DNS settings.
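
For reference, the host-side end of that veth pair is recorded by LXD and easy to inspect; a quick sketch, using the asdf instance from above as an example:

# host-side interface name LXD recorded for the instance's eth0
$ lxc config get asdf volatile.eth0.host_name
vetha5c1c6bb

# all veth interfaces currently visible on the host
$ ip -o link show type veth

Every container start or stop adds or removes one of these host-side interfaces, which is exactly the kind of event a DNS-managing daemon may react to.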

randombenj commented 2 years ago

Does the VPN normally modify the DNS when it starts/connects? What process set it to 8.8.8.8?

Yes, the VPN changes the /etc/resolv.conf settings when it starts. I tried strace on the lxc client, which does not touch the file, but didn't check the daemon.

The strange thing is that creating a container does not change the DNS config.
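
strace on the client is unlikely to catch this, since the file is rewritten by a separate daemon; a more direct way to identify the writer is an audit watch. A rough sketch, assuming auditd is installed (if /etc/resolv.conf is a symlink, the watch may need to target the real file instead):

$ sudo auditctl -w /etc/resolv.conf -p wa -k resolvconf   # add the watch
# ... reproduce the issue, e.g. lxc delete -f INSTANCE ...
$ sudo ausearch -k resolvconf --start recent              # shows which executable wrote the file
$ sudo auditctl -W /etc/resolv.conf -p wa -k resolvconf   # remove the watch again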

I now disabled everything pulse related (client and service). Would you mind trying to reproduce this:

$ lxc launch ubuntu:20.04 hello-from-the-otter-slide
# change your /etc/resolv.conf file to a different DNS
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
$ lxc delete -f hello-from-the-otter-slide
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 1.1.1.1
nameserver 9.9.9.9
tomponline commented 2 years ago

LXD doesn't modify global DNS settings.

Please show output of lxc config show <instance> --expanded.

If you're using lxdbr0 with only one container, then when that container stops, the lxdbr0 bridge interface will go down, potentially triggering your VPN client to reconfigure the global settings.
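
One quick way to check that is whether lxdbr0 actually loses carrier when the instance stops; a small sketch, assuming a standard managed bridge:

$ ip -o link show lxdbr0        # note the state before
$ lxc stop -f <instance>
$ ip -o link show lxdbr0        # NO-CARRIER here would point at the bridge going down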

randombenj commented 2 years ago
$ lxc launch ubuntu:20.04 hello-from-the-otter-slide                                                    [1]
Creating hello-from-the-otter-slide
Starting hello-from-the-otter-slide
$ lxc config show hello-from-the-otter-slide --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20211129)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20211129"
  image.type: squashfs
  image.version: "20.04"
  volatile.base_image: a8402324842148ccfcbacbc69bf251baa9703916593089f0609e8d45e3185bff
  volatile.eth0.host_name: veth8cabfb1c
  volatile.eth0.hwaddr: 00:16:3e:21:51:09
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: a481246d-d7da-495c-b802-9d796491882a
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: "
tomponline commented 2 years ago

Do you have just a single container running/stopping connected to lxdbr0?

Does the issue still occur if you have two containers running, and then stop just one of them (thus leaving lxdbr0 up)?

tomponline commented 2 years ago

Also, to help narrow down the issue, does it also happen with lxc stop -f <instance> as opposed to lxc delete -f <instance>?

randombenj commented 2 years ago

I just tried to do the same thing in a nested ubuntu:20.04 container and got the same behaviour (with systemd-managed DNS):

root@blessed-puma:~# sudo vi /etc/resolv.conf 
root@blessed-puma:~# cat /etc/resolv.conf 
nameserver 8.8.8.8
root@blessed-puma:~# lxc delete -f asdf
root@blessed-puma:~# cat /etc/resolv.conf 
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 127.0.0.53
options edns0 trust-ad
search lxd
root@blessed-puma:~# 

Also, to help narrow down the issue, does it also happen with lxc stop -f <instance> as opposed to lxc delete -f <instance>?

Yes, the same also happens when only stopping the container.

tomponline commented 2 years ago

You didn't answer my question though. https://github.com/lxc/lxd/issues/9610#issuecomment-984430831

randombenj commented 2 years ago

Oh sorry missed that :)

Do you have just a single container running/stopping connected to lxdbr0?

No, there are a few connected to lxdbr0:

$ lxc network show lxdbr0
config:
  ipv4.address: 10.243.201.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:74bf:7b0f:f323::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/advanced-parakeet
- /1.0/instances/asdf
- /1.0/instances/blessed-puma
- /1.0/instances/buster
- /1.0/instances/docker
- /1.0/instances/hello-from-the-otter-slide
- /1.0/instances/lxd-00a7f780-f54f-4bfd-adcb-a6626bbd0b51
- /1.0/instances/lxd-27c10684-7d23-440a-86fc-0c87b5cd7a84
- /1.0/instances/lxd-8e3e7e26-dbf9-49b3-8046-daa508ad525d
- /1.0/instances/minikube
- /1.0/instances/podman
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

Does the issue still occur if you have two containers running, and then stop just one of them (thus leaving lxdbr0 up)?

Yes, this still happens (I deleted all the running containers/VMs from before):

$ lxc ls
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
$ lxc launch ubuntu:20.04 c1
Creating c1
Starting c1
$ lxc launch ubuntu:20.04 c2   
Creating c2
Starting c2
$ vi /etc/resolv.conf          
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
$ lxc delete -f c1
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 1.1.1.1
nameserver 9.9.9.9
randombenj commented 2 years ago

One strange thing is that it doesn't happen with VMs:

$ lxc ls
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
$ lxc launch ubuntu:20.04 --vm v1
Creating v1
Starting v1                                   
$ lxc launch ubuntu:20.04 --vm v2
Creating v2
Starting v2
$ vi /etc/resolv.conf
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
$ lxc ls
+------+---------+-------------------------+-------------------------------------------------+-----------------+-----------+
| NAME |  STATE  |          IPV4           |                      IPV6                       |      TYPE       | SNAPSHOTS |
+------+---------+-------------------------+-------------------------------------------------+-----------------+-----------+
| v1   | RUNNING | 10.243.201.214 (enp5s0) | fd42:74bf:7b0f:f323:216:3eff:fe5c:da1b (enp5s0) | VIRTUAL-MACHINE | 0         |
+------+---------+-------------------------+-------------------------------------------------+-----------------+-----------+
| v2   | RUNNING | 10.243.201.73 (enp5s0)  | fd42:74bf:7b0f:f323:216:3eff:fe46:6fbb (enp5s0) | VIRTUAL-MACHINE | 0         |
+------+---------+-------------------------+-------------------------------------------------+-----------------+-----------+
$ lxc rm -f v1
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
stgraber commented 2 years ago

Can you check /var/log/syslog? It should show what NetworkManager is doing.

I suspect it's NM noticing instances appearing and disappearing and just regenerating resolv.conf. Worth noting that NM will also do that in the background every so often (DHCP lease renewal).

In general the issue here is with your VPN client thinking that it can alter resolv.conf when another process is in charge of it...

Ideally you'd want an NM VPN plugin for Pulse so that the integration works properly, or at least have Pulse tell NM what DNS config changes it wants instead of making them itself.
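
As a stopgap until such a plugin exists, the VPN resolvers can be handed to NM per connection instead of editing resolv.conf directly; a hedged sketch using nmcli (the connection name and the X.X.X.X/Y.Y.Y.Y addresses are placeholders):

$ nmcli connection modify "Wired connection 1" \
      ipv4.dns "X.X.X.X Y.Y.Y.Y" \
      ipv4.dns-search "company.com" \
      ipv4.ignore-auto-dns yes
$ nmcli connection up "Wired connection 1"

That way NM keeps owning resolv.conf, but regenerates it with the VPN resolvers included instead of discarding them.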

randombenj commented 2 years ago

Yeah you are probably right! There is a lot of NM activity:

(this is for lxc delete -f INSTANCE)

Dec 02 15:03:28 rrouwprlc0011 systemd[10395]: Started snap.lxd.lxc.0c9b9a97-8b8f-4c0f-b0db-c8c64ce9b8af.scope.
Dec 02 15:03:28 rrouwprlc0011 kernel: phys9E52r4: renamed from eth0
Dec 02 15:03:28 rrouwprlc0011 systemd-networkd[1883]: eth0: Interface name change detected, eth0 has been renamed to phys9E52r4.
Dec 02 15:03:28 rrouwprlc0011 systemd-networkd[1883]: veth9e972a1b: Lost carrier
Dec 02 15:03:28 rrouwprlc0011 networkd-dispatcher[1915]: WARNING:Unknown index 89 seen, reloading interface list
Dec 02 15:03:28 rrouwprlc0011 kernel: lxdbr0: port 1(veth9e972a1b) entered disabled state
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.6777] manager: (eth0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.6821] device (eth0): interface index 89 renamed iface from 'eth0' to 'phys9E52r4'
Dec 02 15:03:28 rrouwprlc0011 kernel: vetha75f0ae6: renamed from phys9E52r4
Dec 02 15:03:28 rrouwprlc0011 systemd-networkd[1883]: phys9E52r4: Interface name change detected, phys9E52r4 has been renamed to vetha75f0ae6.
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.7158] device (phys9E52r4): interface index 89 renamed iface from 'phys9E52r4' to 'vetha75f0ae6'
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: could not get ethtool features for eth0
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: Could not set offload features of eth0: No such device
Dec 02 15:03:28 rrouwprlc0011 networkd-dispatcher[1915]: WARNING:Unknown index 89 seen, reloading interface list
Dec 02 15:03:28 rrouwprlc0011 networkd-dispatcher[1915]: WARNING:Unknown index 89 seen, reloading interface list
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.7505] device (vetha75f0ae6): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Dec 02 15:03:28 rrouwprlc0011 systemd-networkd[1883]: vetha75f0ae6: Link UP
Dec 02 15:03:28 rrouwprlc0011 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vetha75f0ae6: link becomes ready
Dec 02 15:03:28 rrouwprlc0011 kernel: lxdbr0: port 1(veth9e972a1b) entered blocking state
Dec 02 15:03:28 rrouwprlc0011 kernel: lxdbr0: port 1(veth9e972a1b) entered forwarding state
Dec 02 15:03:28 rrouwprlc0011 systemd-networkd[1883]: vetha75f0ae6: Gained carrier
Dec 02 15:03:28 rrouwprlc0011 systemd-networkd[1883]: veth9e972a1b: Gained carrier
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.7573] device (vetha75f0ae6): carrier: link connected
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.7648] settings: (vetha75f0ae6): created default wired connection 'Wired connection 3'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <warn>  [1638453808.7665] device (vetha75f0ae6): connectivity: "/proc/sys/net/ipv4/conf/vetha75f0ae6/rp_filter" is set to "1". This might break connectivity checking for IPv4 on this device
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.7704] device (veth9e972a1b): carrier: link connected
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.7708] device (vetha75f0ae6): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.7742] policy: auto-activating connection 'Wired connection 3' (bf995c19-401c-3cca-be85-e8cc39a61979)
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.7756] device (vetha75f0ae6): Activation: starting connection 'Wired connection 3' (bf995c19-401c-3cca-be85-e8cc39a61979)
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: could not get ethtool features for eth0
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: Could not set offload features of eth0: No such device
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.7893] device (vetha75f0ae6): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.7902] device (vetha75f0ae6): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.7907] device (vetha75f0ae6): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.7911] dhcp4 (vetha75f0ae6): activation: beginning transaction (timeout in 45 seconds)
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: could not get ethtool features for phys9E52r4
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: Could not set offload features of phys9E52r4: No such device
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: Using default interface naming scheme 'v245'.
Dec 02 15:03:28 rrouwprlc0011 kernel: IPv4: martian source 10.243.201.223 from 10.243.201.1, on dev vetha75f0ae6
Dec 02 15:03:28 rrouwprlc0011 kernel: ll header: 00000000: 00 16 3e 00 14 1f 00 16 3e 38 74 36 08 00
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8059] dhcp4 (vetha75f0ae6): option dhcp_lease_time      => '3600'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8060] dhcp4 (vetha75f0ae6): option domain_name          => 'lxd'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8060] dhcp4 (vetha75f0ae6): option domain_name_servers  => '10.243.201.1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8060] dhcp4 (vetha75f0ae6): option expiry               => '1638457408'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8060] dhcp4 (vetha75f0ae6): option host_name            => 'c1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8060] dhcp4 (vetha75f0ae6): option ip_address           => '10.243.201.223'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8060] dhcp4 (vetha75f0ae6): option next_server          => '10.243.201.1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8060] dhcp4 (vetha75f0ae6): option requested_broadcast_address => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8060] dhcp4 (vetha75f0ae6): option requested_domain_name => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8060] dhcp4 (vetha75f0ae6): option requested_domain_name_servers => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8060] dhcp4 (vetha75f0ae6): option requested_domain_search => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8060] dhcp4 (vetha75f0ae6): option requested_host_name  => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8060] dhcp4 (vetha75f0ae6): option requested_interface_mtu => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_ms_classless_static_routes => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_nis_domain => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_nis_servers => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_ntp_servers => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_rfc3442_classless_static_routes => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_root_path  => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_routers    => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_static_routes => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_subnet_mask => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_time_offset => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_wpad       => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): option routers              => '10.243.201.1'
Dec 02 15:03:28 rrouwprlc0011 kernel: IPv4: martian source 10.243.201.223 from 10.243.201.1, on dev vetha75f0ae6
Dec 02 15:03:28 rrouwprlc0011 kernel: ll header: 00000000: 00 16 3e 00 14 1f 00 16 3e 38 74 36 08 00
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): option subnet_mask          => '255.255.255.0'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8061] dhcp4 (vetha75f0ae6): state changed unknown -> bound
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8089] device (vetha75f0ae6): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 kernel: device veth9e972a1b left promiscuous mode
Dec 02 15:03:28 rrouwprlc0011 kernel: lxdbr0: port 1(veth9e972a1b) entered disabled state
Dec 02 15:03:28 rrouwprlc0011 dbus-daemon[1905]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.571' (uid=0 pid=114313 comm="/usr/sbin/NetworkManager --no-daemon " label="unconfined")
Dec 02 15:03:28 rrouwprlc0011 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 02 15:03:28 rrouwprlc0011 dbus-daemon[1905]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Dec 02 15:03:28 rrouwprlc0011 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8222] device (vetha75f0ae6): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8225] device (vetha75f0ae6): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9704]: reading /etc/resolv.conf
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9948]: reading /etc/resolv.conf
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info>  [1638453808.8301] device (vetha75f0ae6): Activation: successful, device activated.
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9948]: using local addresses only for domain lxd
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9704]: using local addresses only for domain lxd
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9948]: using nameserver 1.1.1.1#53
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9704]: using nameserver 1.1.1.1#53
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9704]: using nameserver 9.9.9.9#53
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9948]: using nameserver 9.9.9.9#53

1.1.1.1 and 9.9.9.9 are already the new servers. I will look into how to write an NM plugin then, thanks for your help!
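
Short of a full NM VPN plugin, an NM dispatcher script is one documented hook for this: NM runs executables from /etc/NetworkManager/dispatcher.d/ with the interface as $1 and the event as $2. A rough, hypothetical sketch that re-applies the VPN resolvers after interface events (the tun0 name and the resolver values are assumptions, not something Pulse ships):

#!/bin/sh
# /etc/NetworkManager/dispatcher.d/90-pulse-dns (must be root-owned and executable)
# $1 = interface, $2 = event
case "$2" in
  up|down|dhcp4-change)
    # only re-apply while the Pulse tunnel interface exists (name is an assumption)
    if ip link show tun0 >/dev/null 2>&1; then
      printf 'search company.com\nnameserver X.X.X.X\nnameserver Y.Y.Y.Y\n' > /etc/resolv.conf
    fi
    ;;
esac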

As always, high-quality help regarding lxd from you guys :heart: Highly appreciated! :tada: