Closed jamesbeedy closed 2 years ago
Can you check that:
And then give us the output of:
This seems to be using LXD 4.0.9, which hasn't been updated in several weeks, so if this used to work some days ago, it's very unlikely to be caused by anything we did :)
ubuntu@keen-mayfly:~$ sudo snap refresh lxd --channel latest/stable
Download snap "lxd" (22710) from channel "latest/stable" 83% 7.78MB/s 1.34s
lxd 4.24 from Canonical✓ refreshed
Looks like the OP refreshed to LXD 4.24.
Most likely this is a firewall issue. These can occur suddenly if you have an ordering issue between your system firewall and the one LXD sets up.
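One quick way to check for this kind of ordering problem is to look for LXD's table in the active ruleset and, if it is missing, restart the daemon so it re-creates its rules. This is a sketch, not from the thread: the `snap.lxd.daemon` service name assumes the snap-packaged LXD.

```shell
# Check whether LXD's firewall rules survived the boot-time firewall load.
# Guarded so it exits cleanly on machines without nft or the lxd snap.
if command -v nft >/dev/null 2>&1 && command -v lxd >/dev/null 2>&1; then
    if sudo nft list ruleset | grep -q 'table inet lxd'; then
        echo "LXD firewall rules are present"
    else
        echo "LXD firewall rules missing; restarting the daemon"
        sudo systemctl restart snap.lxd.daemon   # re-creates the lxdbr0 rules
    fi
else
    echo "nft or lxd not available; nothing to check"
fi
```

If the rules reappear after a daemon restart, the host firewall (ufw, firewalld, a custom nftables unit) is most likely flushing them at boot.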
Please show the output of:
lxc config show <instance> --expanded
ip a
and ip r
on the LXD host and inside the container.
sudo nft list ruleset
and sudo iptables-save
(if you don't have nft installed, do sudo apt install nftables first).
Oh right, I missed the refresh.
$ lxc config show u5 --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20220321)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20220321"
  image.type: squashfs
  image.version: "20.04"
  volatile.base_image: 9a04aa57d48d12a3a82eb71587eeef726924c3088a84a3acc62d84f02c11f32e
  volatile.eth0.host_name: veth5b0d419b
  volatile.eth0.hwaddr: 00:16:3e:59:53:d4
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 56b21b5b-04ac-4269-a406-bbbc9fbb1133
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
ubuntu@keen-mayfly:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:12:d3:a3 brd ff:ff:ff:ff:ff:ff
inet 10.104.196.120/23 brd 10.104.197.255 scope global eth7
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe12:d3a3/64 scope link
valid_lft forever preferred_lft forever
3: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:27:7e:00 brd ff:ff:ff:ff:ff:ff
inet 10.14.99.1/24 scope global lxdbr0
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:fe27:7e00/64 scope link
valid_lft forever preferred_lft forever
5: veth66c0b3a0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether aa:71:95:7a:2d:3c brd ff:ff:ff:ff:ff:ff link-netnsid 0
8: veth5b0d419b@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether ba:bc:38:22:48:16 brd ff:ff:ff:ff:ff:ff link-netnsid 1
$ lxc shell u5
root@u5:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:59:53:d4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.14.99.164/24 brd 10.14.99.255 scope global dynamic eth0
valid_lft 2989sec preferred_lft 2989sec
inet6 fe80::216:3eff:fe59:53d4/64 scope link
valid_lft forever preferred_lft forever
ubuntu@keen-mayfly:~$ sudo nft list ruleset
table inet lxd {
  chain pstrt.lxdbr0 {
    type nat hook postrouting priority srcnat; policy accept;
    @nh,96,24 659043 @nh,128,24 != 659043 masquerade
  }
  chain fwd.lxdbr0 {
    type filter hook forward priority filter; policy accept;
    ip version 4 oifname "lxdbr0" accept
    ip version 4 iifname "lxdbr0" accept
  }
  chain in.lxdbr0 {
    type filter hook input priority filter; policy accept;
    iifname "lxdbr0" tcp dport 53 accept
    iifname "lxdbr0" udp dport 53 accept
    iifname "lxdbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
    iifname "lxdbr0" udp dport 67 accept
  }
  chain out.lxdbr0 {
    type filter hook output priority filter; policy accept;
    oifname "lxdbr0" tcp sport 53 accept
    oifname "lxdbr0" udp sport 53 accept
    oifname "lxdbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
    oifname "lxdbr0" udp sport 67 accept
  }
}
$ sudo iptables-save
# Generated by iptables-save v1.8.4 on Wed Mar 23 22:58:42 2022
*raw
:PREROUTING ACCEPT [325446:1196590277]
:OUTPUT ACCEPT [189286:10633456]
COMMIT
# Completed on Wed Mar 23 22:58:42 2022
# Generated by iptables-save v1.8.4 on Wed Mar 23 22:58:42 2022
*mangle
:PREROUTING ACCEPT [325446:1196590277]
:INPUT ACCEPT [301092:1149692953]
:FORWARD ACCEPT [21336:46097554]
:OUTPUT ACCEPT [189288:10633832]
:POSTROUTING ACCEPT [210624:56731386]
COMMIT
# Completed on Wed Mar 23 22:58:42 2022
# Generated by iptables-save v1.8.4 on Wed Mar 23 22:58:42 2022
*nat
:PREROUTING ACCEPT [3389:886619]
:INPUT ACCEPT [324:85006]
:OUTPUT ACCEPT [347:26057]
:POSTROUTING ACCEPT [346:26017]
COMMIT
# Completed on Wed Mar 23 22:58:42 2022
# Generated by iptables-save v1.8.4 on Wed Mar 23 22:58:42 2022
*filter
:INPUT ACCEPT [301092:1149692953]
:FORWARD ACCEPT [21336:46097554]
:OUTPUT ACCEPT [189293:10634784]
COMMIT
# Completed on Wed Mar 23 22:58:42 2022
$ sudo iptables -L -n -v
Chain INPUT (policy ACCEPT 301K packets, 1150M bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 21336 packets, 46M bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 189K packets, 11M bytes)
pkts bytes target prot opt in out source destination
Not seeing anything obviously wrong there. Can you do:
Can we see 'lxc network show lxdbr0' too, please?
Also, you didn't provide the requested output of 'ip r' from the host and the container.
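To avoid further back-and-forth, the requested commands can be collected in one pass. This is a sketch; the bridge name lxdbr0 and the instance name u5 come from earlier in the thread, and the output file name is arbitrary.

```shell
# Collect the requested diagnostics into one file; a failed command is
# recorded rather than aborting the whole run.
{
    for cmd in "lxc network show lxdbr0" "ip r" "lxc exec u5 -- ip r"; do
        echo "### $cmd"
        $cmd 2>&1 || echo "(command failed)"
    done
} > lxd-diag.txt
```

The resulting lxd-diag.txt can then be pasted into the issue in one go.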
@stgraber I have a similar problem. The instance randomly loses connectivity after rebooting the host. Whenever this happens, I need to manually run lxd shutdown and then sudo lxd to restart the LXD service; after that, the instance can access the WAN and the host's LAN again. The problem only affects the instance's access to the WAN and the host's LAN; it does not affect the host's access to the instance through the proxy device. It occurs in containers that are started automatically after the host reboots, with a probability of about 20%. The instance uses LXD's default network settings without special changes.
The system is Ubuntu 20.04. The LXD version is 4.23.
disconnected
Unfortunately "disconnected" comes in many forms, so without being able to narrow down what form of disconnection is occurring in both cases, it is not possible to resolve it.
Next time it happens, please gather the diagnostic output requested in this thread so we can see if we can narrow it down.
Thanks
All of a sudden, my lxd instances can't talk to the WAN. I initially thought this had something to do with modifications I had made to my local routing table, but after troubleshooting I realized that was not the case.
Required information
certificate_fingerprint: ***
driver: lxc
driver_version: 4.0.12
firewall: nftables
kernel: Linux
kernel_architecture: x86_64
kernel_features:
  idmapped_mounts: "false"
  netnsid_getifaddrs: "true"
  seccomp_listener: "true"
  seccomp_listener_continue: "true"
  shiftfs: "false"
  uevent_injection: "true"
  unpriv_fscaps: "true"
kernel_version: 5.4.0-105-generic
lxc_features:
  cgroup2: "true"
  core_scheduling: "true"
  devpts_fd: "true"
  idmapped_mounts_v2: "true"
  mount_injection_file: "true"
  network_gateway_device_route: "true"
  network_ipvlan: "true"
  network_l2proxy: "true"
  network_phys_macvlan_mtu: "true"
  network_veth_router: "true"
  pidfd: "true"
  seccomp_allow_deny_syntax: "true"
  seccomp_notify: "true"
  seccomp_proxy_send_notify_fd: "true"
os_name: Ubuntu
os_version: "20.04"
project: default
server: lxd
server_clustered: false
server_event_mode: full-mesh
server_name: keen-mayfly
server_pid: 5700
server_version: "4.24"
storage: zfs
storage_version: 0.8.3-1ubuntu12.13
storage_supported_drivers:
Issue description
LXD containers no longer have outbound networking.
Just the other day, my lxc instances communicated with the WAN just fine. Today, I try to launch an instance and it can't communicate with anything on the internet.
I have validated this on my local dev system, coworkers systems and fresh new 20.04 machines.
Steps to reproduce
1) Install a fresh Ubuntu 20.04 machine
2) Refresh the lxd snap to latest/stable
3) Run sudo lxd init
4) Launch an ubuntu:20.04 instance, exec in, and try to ping google - this will fail

dmesg output
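The steps above, collected into a single script for convenience. This is a sketch: the instance name repro1 is arbitrary, and the snap/sudo commands only make sense on a real Ubuntu 20.04 host, so the script is guarded to be a no-op elsewhere.

```shell
# Reproduction sketch: refresh LXD, init with defaults, and test egress.
if command -v snap >/dev/null 2>&1 && snap list lxd >/dev/null 2>&1; then
    sudo snap refresh lxd --channel latest/stable
    sudo lxd init --auto                       # accept all defaults
    lxc launch ubuntu:20.04 repro1
    lxc exec repro1 -- ping -c 3 google.com \
        || echo "no outbound connectivity (bug reproduced)"
else
    echo "lxd snap not installed; run this on a fresh Ubuntu 20.04 host"
fi
```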
https://paste.ubuntu.com/p/D7dbXDtPf6/
Please let me know if you need anything else.
Thanks