What is X.X.X.X and Y.Y.Y.Y in your example?
But no, LXD does not modify /etc/resolv.conf to my knowledge.
@tomponline That was fast :tada:
They're just company DNS addresses and don't really matter; the file gets generated by pulsesecure. What does matter is that it's different after running lxc delete INSTANCE (doesn't happen with VMs, apparently).
Most likely pulsesecure is modifying your DNS as it detects the container's host side interface being created. This sounds like a pulsesecure config/feature issue.
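One way to confirm which process is actually rewriting the file (a rough sketch, assuming auditd is installed; the key name resolv-watch is arbitrary):
$ sudo auditctl -w /etc/resolv.conf -p wa -k resolv-watch
$ lxc delete INSTANCE   # reproduce the issue, then:
$ sudo ausearch -k resolv-watch | grep -E 'comm=|exe='
Whatever shows up in the comm=/exe= fields is the process doing the write.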
I don't know if this is the proper way to test it:
$ lxc info asdf
Name: asdf
Status: RUNNING
Type: container
Architecture: x86_64
PID: 46746
Created: 2021/12/02 09:44 CET
Last Used: 2021/12/02 09:44 CET

Resources:
  Processes: 51
  Disk usage:
    root: 8.24MiB
  CPU usage:
    CPU usage (in seconds): 15
  Memory usage:
    Memory (current): 214.61MiB
    Memory (peak): 256.79MiB
  Network usage:
    eth0:
      Type: broadcast
      State: UP
      Host interface: vetha5c1c6bb
      MAC address: 00:16:3e:f4:7e:b8
      MTU: 1500
      Bytes received: 23.54kB
      Bytes sent: 11.52kB
      Packets received: 65
      Packets sent: 72
      IP addresses:
        inet:  10.243.201.30/24 (global)
        inet6: fd42:74bf:7b0f:f323:216:3eff:fef4:7eb8/64 (global)
        inet6: fe80::216:3eff:fef4:7eb8/64 (link)
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 1.44kB
      Bytes sent: 1.44kB
      Packets received: 16
      Packets sent: 16
      IP addresses:
        inet:  127.0.0.1/8 (local)
        inet6: ::1/128 (local)
$ sudo ip link delete vetha5c1c6bb
$ lxc info asdf
Name: asdf
Status: RUNNING
Type: container
Architecture: x86_64
PID: 46746
Created: 2021/12/02 09:44 CET
Last Used: 2021/12/02 09:44 CET

Resources:
  Processes: 52
  Disk usage:
    root: 8.24MiB
  CPU usage:
    CPU usage (in seconds): 15
  Memory usage:
    Memory (current): 215.76MiB
    Memory (peak): 256.79MiB
  Network usage:
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 1.44kB
      Bytes sent: 1.44kB
      Packets received: 16
      Packets sent: 16
      IP addresses:
        inet:  127.0.0.1/8 (local)
        inet6: ::1/128 (local)
This does not change /etc/resolv.conf.
edit: But your assumption that it might be a pulsesecure issue also seems very likely.
Does the VPN normally modify the DNS when it starts/connects? What process set it to 8.8.8.8?
I'm thinking that when the container starts, it creates a veth pair between the host and the container, which the system will see as a new interface being added, and perhaps that's triggering the VPN client to re-apply its DNS settings.
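You could test that theory without LXD at all by creating a veth pair by hand and watching /etc/resolv.conf (the interface names here are made up):
$ sudo ip link add vethtest0 type veth peer name vethtest1
$ sudo ip link set vethtest0 up
$ cat /etc/resolv.conf
$ sudo ip link delete vethtest0
$ cat /etc/resolv.conf
If the DNS config changes on link creation/deletion alone, LXD is out of the picture.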
Does the VPN normally modify the DNS when it starts/connects? What process set it to 8.8.8.8?
Yes, the VPN changes the /etc/resolv.conf settings when it starts. I tried strace on the lxc client, which does not touch the file, but I didn't check the daemon.
The strange thing is that creating a container does not change the DNS config.
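To also check the daemon side, one could attach strace to it directly; a sketch, assuming the snap's daemon process is simply named lxd:
$ sudo strace -f -e trace=openat,rename,unlink -p $(pgrep -x lxd | head -n1) 2>&1 | grep resolv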
I have now disabled everything Pulse-related (client and service). Would you mind trying to reproduce this:
$ lxc launch ubuntu:20.04 hello-from-the-otter-slide
# change your /etc/resolv.conf file to a different DNS
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
$ lxc delete -f hello-from-the-otter-slide
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 1.1.1.1
nameserver 9.9.9.9
LXD doesn't modify global DNS settings.
Please show the output of lxc config show <instance> --expanded.
If you're using lxdbr0 with only one container, then when that container stops the lxdbr0 bridge interface will go down, potentially triggering your VPN client to reconfigure the global settings.
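A quick way to check that (a sketch, assuming nmcli is available) is to watch the bridge and veth devices while stopping the instance:
$ nmcli -f DEVICE,TYPE,STATE device status | grep -E 'lxdbr0|veth'
$ ip -o link show lxdbr0
Running nmcli monitor in a second terminal will also print NetworkManager's reaction to interfaces appearing/disappearing in real time.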
$ lxc launch ubuntu:20.04 hello-from-the-otter-slide
Creating hello-from-the-otter-slide
Starting hello-from-the-otter-slide
$ lxc config show hello-from-the-otter-slide --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20211129)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20211129"
  image.type: squashfs
  image.version: "20.04"
  volatile.base_image: a8402324842148ccfcbacbc69bf251baa9703916593089f0609e8d45e3185bff
  volatile.eth0.host_name: veth8cabfb1c
  volatile.eth0.hwaddr: 00:16:3e:21:51:09
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: a481246d-d7da-495c-b802-9d796491882a
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
Do you have just a single container running/stopping connected to lxdbr0?
Does the issue still occur if you have two containers running, and then stop just one of them (thus leaving lxdbr0 up)?
Also, to help narrow down the issue, does it also happen with lxc stop -f <instance> as opposed to lxc delete -f <instance>?
I just tried to do the same thing in a nested ubuntu:20.04 container and got the same behaviour (with systemd-managed DNS):
root@blessed-puma:~# sudo vi /etc/resolv.conf
root@blessed-puma:~# cat /etc/resolv.conf
nameserver 8.8.8.8
root@blessed-puma:~# lxc delete -f asdf
root@blessed-puma:~# cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0 trust-ad
search lxd
root@blessed-puma:~#
Also, to help narrow down the issue, does it also happen with lxc stop -f as opposed to lxc delete -f?
Yes, the same also happens when only stopping the container.
You didn't answer my question though. https://github.com/lxc/lxd/issues/9610#issuecomment-984430831
Oh sorry missed that :)
Do you have just a single container running/stopping connected to lxdbr0?
No there are a few connected to lxdbr0:
$ lxc network show lxdbr0
config:
  ipv4.address: 10.243.201.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:74bf:7b0f:f323::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/advanced-parakeet
- /1.0/instances/asdf
- /1.0/instances/blessed-puma
- /1.0/instances/buster
- /1.0/instances/docker
- /1.0/instances/hello-from-the-otter-slide
- /1.0/instances/lxd-00a7f780-f54f-4bfd-adcb-a6626bbd0b51
- /1.0/instances/lxd-27c10684-7d23-440a-86fc-0c87b5cd7a84
- /1.0/instances/lxd-8e3e7e26-dbf9-49b3-8046-daa508ad525d
- /1.0/instances/minikube
- /1.0/instances/podman
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
Does the issue still occur if you have two containers running, and then stop just one of them (thus leaving lxdbr0 up)?
Yes, this still happens (I deleted all the running containers/VMs from before):
$ lxc ls
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
$ lxc launch ubuntu:20.04 c1
Creating c1
Starting c1
$ lxc launch ubuntu:20.04 c2
Creating c2
Starting c2
$ vi /etc/resolv.conf
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
$ lxc delete -f c1
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 1.1.1.1
nameserver 9.9.9.9
One strange thing is that it doesn't happen with VMs (whose NICs attach to the bridge via tap devices rather than veth pairs):
$ lxc ls
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
$ lxc launch ubuntu:20.04 --vm v1
Creating v1
Starting v1
$ lxc launch ubuntu:20.04 --vm v2
Creating v2
Starting v2
$ vi /etc/resolv.conf
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
$ lxc ls
+------+---------+-------------------------+-------------------------------------------------+-----------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+---------+-------------------------+-------------------------------------------------+-----------------+-----------+
| v1 | RUNNING | 10.243.201.214 (enp5s0) | fd42:74bf:7b0f:f323:216:3eff:fe5c:da1b (enp5s0) | VIRTUAL-MACHINE | 0 |
+------+---------+-------------------------+-------------------------------------------------+-----------------+-----------+
| v2 | RUNNING | 10.243.201.73 (enp5s0) | fd42:74bf:7b0f:f323:216:3eff:fe46:6fbb (enp5s0) | VIRTUAL-MACHINE | 0 |
+------+---------+-------------------------+-------------------------------------------------+-----------------+-----------+
$ lxc rm -f v1
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
Can you check /var/log/syslog? It should show what NetworkManager is doing.
I suspect it's NM noticing instances appearing and disappearing and just regenerating resolv.conf. Worth noting that NM will also do that in the background every so often (DHCP lease renewal).
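For example (a sketch; journalctl works too, if the journal is persistent):
$ grep -E 'NetworkManager|dnsmasq' /var/log/syslog | tail -n 50
$ journalctl -u NetworkManager --since "5 min ago" | grep -iE 'dns|resolv'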
In general the issue here is with your VPN client thinking that it can alter resolv.conf when another process is in charge of it...
Ideally you'd want a NM VPN plugin for Pulse so that the integration works properly or at least have Pulse tell NM what DNS config changes it wants instead of doing them itself.
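As an interim workaround, here is a rough sketch of a NetworkManager dispatcher script that re-applies the VPN nameservers whenever NM brings a device up or down (everything in it is hypothetical: the script path, the tun0 device name, and the X.X.X.X/Y.Y.Y.Y placeholder addresses):
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/99-pulse-dns  (hypothetical name/location)
# NM invokes dispatcher scripts with $1 = interface and $2 = action.
case "$2" in
up|down)
    # If the Pulse tunnel (assumed to be tun0) is active, restore its DNS.
    if ip link show tun0 >/dev/null 2>&1; then
        printf 'nameserver X.X.X.X\nnameserver Y.Y.Y.Y\n' > /etc/resolv.conf
    fi
    ;;
esac
A cleaner variant would tell NM about the servers instead of overwriting the file, e.g. nmcli connection modify <vpn-connection> ipv4.dns "X.X.X.X Y.Y.Y.Y", so NM keeps them when it regenerates resolv.conf.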
Yeah, you are probably right! There is a lot of NM activity (this is for lxc delete -f INSTANCE):
Dec 02 15:03:28 rrouwprlc0011 systemd[10395]: Started snap.lxd.lxc.0c9b9a97-8b8f-4c0f-b0db-c8c64ce9b8af.scope.
Dec 02 15:03:28 rrouwprlc0011 kernel: phys9E52r4: renamed from eth0
Dec 02 15:03:28 rrouwprlc0011 systemd-networkd[1883]: eth0: Interface name change detected, eth0 has been renamed to phys9E52r4.
Dec 02 15:03:28 rrouwprlc0011 systemd-networkd[1883]: veth9e972a1b: Lost carrier
Dec 02 15:03:28 rrouwprlc0011 networkd-dispatcher[1915]: WARNING:Unknown index 89 seen, reloading interface list
Dec 02 15:03:28 rrouwprlc0011 kernel: lxdbr0: port 1(veth9e972a1b) entered disabled state
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.6777] manager: (eth0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.6821] device (eth0): interface index 89 renamed iface from 'eth0' to 'phys9E52r4'
Dec 02 15:03:28 rrouwprlc0011 kernel: vetha75f0ae6: renamed from phys9E52r4
Dec 02 15:03:28 rrouwprlc0011 systemd-networkd[1883]: phys9E52r4: Interface name change detected, phys9E52r4 has been renamed to vetha75f0ae6.
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.7158] device (phys9E52r4): interface index 89 renamed iface from 'phys9E52r4' to 'vetha75f0ae6'
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: could not get ethtool features for eth0
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: Could not set offload features of eth0: No such device
Dec 02 15:03:28 rrouwprlc0011 networkd-dispatcher[1915]: WARNING:Unknown index 89 seen, reloading interface list
Dec 02 15:03:28 rrouwprlc0011 networkd-dispatcher[1915]: WARNING:Unknown index 89 seen, reloading interface list
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.7505] device (vetha75f0ae6): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Dec 02 15:03:28 rrouwprlc0011 systemd-networkd[1883]: vetha75f0ae6: Link UP
Dec 02 15:03:28 rrouwprlc0011 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vetha75f0ae6: link becomes ready
Dec 02 15:03:28 rrouwprlc0011 kernel: lxdbr0: port 1(veth9e972a1b) entered blocking state
Dec 02 15:03:28 rrouwprlc0011 kernel: lxdbr0: port 1(veth9e972a1b) entered forwarding state
Dec 02 15:03:28 rrouwprlc0011 systemd-networkd[1883]: vetha75f0ae6: Gained carrier
Dec 02 15:03:28 rrouwprlc0011 systemd-networkd[1883]: veth9e972a1b: Gained carrier
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.7573] device (vetha75f0ae6): carrier: link connected
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.7648] settings: (vetha75f0ae6): created default wired connection 'Wired connection 3'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <warn> [1638453808.7665] device (vetha75f0ae6): connectivity: "/proc/sys/net/ipv4/conf/vetha75f0ae6/rp_filter" is set to "1". This might break connectivity checking for IPv4 on this device
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.7704] device (veth9e972a1b): carrier: link connected
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.7708] device (vetha75f0ae6): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.7742] policy: auto-activating connection 'Wired connection 3' (bf995c19-401c-3cca-be85-e8cc39a61979)
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.7756] device (vetha75f0ae6): Activation: starting connection 'Wired connection 3' (bf995c19-401c-3cca-be85-e8cc39a61979)
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: could not get ethtool features for eth0
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: Could not set offload features of eth0: No such device
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.7893] device (vetha75f0ae6): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.7902] device (vetha75f0ae6): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.7907] device (vetha75f0ae6): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.7911] dhcp4 (vetha75f0ae6): activation: beginning transaction (timeout in 45 seconds)
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: could not get ethtool features for phys9E52r4
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: Could not set offload features of phys9E52r4: No such device
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 15:03:28 rrouwprlc0011 systemd-udevd[201999]: Using default interface naming scheme 'v245'.
Dec 02 15:03:28 rrouwprlc0011 kernel: IPv4: martian source 10.243.201.223 from 10.243.201.1, on dev vetha75f0ae6
Dec 02 15:03:28 rrouwprlc0011 kernel: ll header: 00000000: 00 16 3e 00 14 1f 00 16 3e 38 74 36 08 00
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8059] dhcp4 (vetha75f0ae6): option dhcp_lease_time => '3600'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8060] dhcp4 (vetha75f0ae6): option domain_name => 'lxd'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8060] dhcp4 (vetha75f0ae6): option domain_name_servers => '10.243.201.1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8060] dhcp4 (vetha75f0ae6): option expiry => '1638457408'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8060] dhcp4 (vetha75f0ae6): option host_name => 'c1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8060] dhcp4 (vetha75f0ae6): option ip_address => '10.243.201.223'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8060] dhcp4 (vetha75f0ae6): option next_server => '10.243.201.1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8060] dhcp4 (vetha75f0ae6): option requested_broadcast_address => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8060] dhcp4 (vetha75f0ae6): option requested_domain_name => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8060] dhcp4 (vetha75f0ae6): option requested_domain_name_servers => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8060] dhcp4 (vetha75f0ae6): option requested_domain_search => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8060] dhcp4 (vetha75f0ae6): option requested_host_name => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8060] dhcp4 (vetha75f0ae6): option requested_interface_mtu => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_ms_classless_static_routes => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_nis_domain => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_nis_servers => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_ntp_servers => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_rfc3442_classless_static_routes => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_root_path => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_routers => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_static_routes => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_subnet_mask => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_time_offset => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): option requested_wpad => '1'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): option routers => '10.243.201.1'
Dec 02 15:03:28 rrouwprlc0011 kernel: IPv4: martian source 10.243.201.223 from 10.243.201.1, on dev vetha75f0ae6
Dec 02 15:03:28 rrouwprlc0011 kernel: ll header: 00000000: 00 16 3e 00 14 1f 00 16 3e 38 74 36 08 00
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): option subnet_mask => '255.255.255.0'
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8061] dhcp4 (vetha75f0ae6): state changed unknown -> bound
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8089] device (vetha75f0ae6): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 kernel: device veth9e972a1b left promiscuous mode
Dec 02 15:03:28 rrouwprlc0011 kernel: lxdbr0: port 1(veth9e972a1b) entered disabled state
Dec 02 15:03:28 rrouwprlc0011 dbus-daemon[1905]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.571' (uid=0 pid=114313 comm="/usr/sbin/NetworkManager --no-daemon " label="unconfined")
Dec 02 15:03:28 rrouwprlc0011 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 02 15:03:28 rrouwprlc0011 dbus-daemon[1905]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Dec 02 15:03:28 rrouwprlc0011 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8222] device (vetha75f0ae6): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8225] device (vetha75f0ae6): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9704]: reading /etc/resolv.conf
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9948]: reading /etc/resolv.conf
Dec 02 15:03:28 rrouwprlc0011 NetworkManager[114313]: <info> [1638453808.8301] device (vetha75f0ae6): Activation: successful, device activated.
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9948]: using local addresses only for domain lxd
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9704]: using local addresses only for domain lxd
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9948]: using nameserver 1.1.1.1#53
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9704]: using nameserver 1.1.1.1#53
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9704]: using nameserver 9.9.9.9#53
Dec 02 15:03:28 rrouwprlc0011 dnsmasq[9948]: using nameserver 9.9.9.9#53
1.1.1.1 and 9.9.9.9 are already the new servers. I will look into how to write a NM plugin then, thanks for your help!
As always, high quality help regarding lxd from you guys :heart: Highly appreciated! :tada:
Required information
Issue description
When (having to) use pulsesecure (VPN client), it alters the /etc/resolv.conf file. However, when running lxc delete INSTANCE, lxd also seems to alter the /etc/resolv.conf file. Is this intended behaviour? Here is what happens:

Steps to reproduce

See above ^
Information to attach

- Any relevant kernel output (dmesg)
- Container log (lxc info NAME --show-log)
- Container configuration (lxc config show NAME --expanded)
- Output of the daemon with --debug (alternatively, output of lxc monitor while reproducing the issue)