Closed: lfarkas closed this issue 1 year ago.
another strange log about dns:
2023-06-12T17:27:10+02:00 DEBG client/internal/dns/host_linux.go:34: discovered mode is: 1
2023-06-12T17:27:10+02:00 WARN client/internal/dns/server.go:174: binding dns on 100.76.114.54:53 is not available, error: listen udp 100.76.114.54:53: bind: cannot assign requested address
2023-06-12T17:27:10+02:00 DEBG client/internal/routemanager/firewall_linux.go:49: iptables is not supported, using nftables
Hi @lfarkas, thanks for reporting the issues.
About the second one, with the DNS binding logs: this is common and should probably use a better log message. The agent tried to listen on your NetBird IP address but couldn't, as the address took a bit longer to become ready.
Regarding the first issue, with the new capabilities required: since v0.20.0 we've introduced BPF and raw sockets in order to increase the ratio of direct connections. In our tests we didn't run into this requirement on our Docker hosts, but we will review them and update the docs with more precise requirements.
Regarding your routing issues, the agent tests for iptables and, if it is not available (the usual case for containers), it uses nftables. It also tries to enable IP forwarding by writing to /proc/sys/net/ipv4/ip_forward.
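The checks described above can be mirrored by hand. A rough sketch follows; these are illustrative commands, not NetBird's actual code, and probing the backend via `iptables -t nat -L` is my own assumption about a reasonable availability test:

```shell
#!/bin/sh
# Sketch: manually mirror the agent's environment checks (not NetBird's code).

# is_forwarding_on: interpret the value read from /proc/sys/net/ipv4/ip_forward.
is_forwarding_on() {
  [ "$1" = "1" ]
}

# 1. kernel IP forwarding (the agent writes 1 here itself; that needs root)
if is_forwarding_on "$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null)"; then
  echo "ip forwarding: on"
else
  echo "ip forwarding: off"
fi

# 2. firewall backend: try iptables first, fall back to nftables
if iptables -t nat -L -n >/dev/null 2>&1; then
  echo "backend: iptables"
else
  echo "backend: nftables (iptables not usable here)"
fi
```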
Can you share which container image you are using?
it's a homeassistant addon. you can simply install it from https://www.home-assistant.io/installation/ (maybe easiest in a virtual machine). the container is built by home assistant from an addon repo during addon installation. you can use my repo: https://github.com/lfarkas/addon-netbird. the Dockerfile is very simple: https://github.com/lfarkas/addon-netbird/blob/main/netbird/Dockerfile, while this file describes the docker runtime environment: https://github.com/lfarkas/addon-netbird/blob/main/netbird/config.yaml. but if you tell me how to test, what to look at, or even give me a debug version, i'm happy to help find the reason. the strange thing is that the same dockerfile runs on my fedora but has different debug logs, as seen above.
Hi @lfarkas, following up: we didn't check whether the connection with the routing peer was live. Can you confirm whether the issue is happening and whether the connection is there?
it seems i was not clear. so let's say we have:

- the routing peer's local ip: `10.4.4.2/24` (and its wt0 address: `100.76.114.54`)
- the other peer: `100.76.171.201`

what i already checked:

- `cat /proc/sys/net/ipv4/ip_forward` is 1
- `10.4.4.0/24` should route through the routing peer's wt0
- pinging `10.4.4.2` is working (of course `100.76.114.54` also)
- pinging `10.4.4.1` from the routing peer itself is working
- pinging `10.4.4.1` from the other peer is not working (that's the problem :-)

`tcpdump icmp -i wt0` shows the incoming ping as:
# tcpdump icmp -i wt0
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on wt0, link-type RAW (Raw IP), snapshot length 262144 bytes
23:17:24.660314 IP 100.76.171.201 > 10.4.4.1: ICMP echo request, id 2, seq 11, length 64
23:17:25.684383 IP 100.76.171.201 > 10.4.4.1: ICMP echo request, id 2, seq 12, length 64
while `tcpdump icmp -i enp1s0` shows nothing (the ping is never sent out on the ethernet interface).
and finally, the output of `nft list ruleset`:
# nft list ruleset
# Warning: table ip nat is managed by iptables-nft, do not touch!
table ip nat {
chain DOCKER {
iifname "docker0" counter packets 0 bytes 0 return
iifname "hassio" counter packets 3542 bytes 212520 return
iifname != "hassio" tcp dport 4357 counter packets 0 bytes 0 xt target "DNAT"
iifname != "hassio" tcp dport 8884 counter packets 0 bytes 0 xt target "DNAT"
iifname != "hassio" tcp dport 8883 counter packets 0 bytes 0 xt target "DNAT"
iifname != "hassio" tcp dport 1884 counter packets 0 bytes 0 xt target "DNAT"
iifname != "hassio" tcp dport 1883 counter packets 0 bytes 0 xt target "DNAT"
iifname != "hassio" tcp dport 10000 counter packets 0 bytes 0 xt target "DNAT"
}
chain POSTROUTING {
type nat hook postrouting priority srcnat; policy accept;
oifname != "docker0" ip saddr 172.30.232.0/23 counter packets 541 bytes 32758 xt target "MASQUERADE"
oifname != "hassio" ip saddr 172.30.32.0/23 counter packets 3265 bytes 240661 xt target "MASQUERADE"
ip saddr 172.30.32.6 ip daddr 172.30.32.6 tcp dport 80 counter packets 0 bytes 0 xt target "MASQUERADE"
ip saddr 172.30.33.0 ip daddr 172.30.33.0 tcp dport 8884 counter packets 0 bytes 0 xt target "MASQUERADE"
ip saddr 172.30.33.0 ip daddr 172.30.33.0 tcp dport 8883 counter packets 0 bytes 0 xt target "MASQUERADE"
ip saddr 172.30.33.0 ip daddr 172.30.33.0 tcp dport 1884 counter packets 0 bytes 0 xt target "MASQUERADE"
ip saddr 172.30.33.0 ip daddr 172.30.33.0 tcp dport 1883 counter packets 0 bytes 0 xt target "MASQUERADE"
ip saddr 172.30.33.2 ip daddr 172.30.33.2 tcp dport 10000 counter packets 0 bytes 0 xt target "MASQUERADE"
}
chain PREROUTING {
type nat hook prerouting priority dstnat; policy accept;
xt match "addrtype" counter packets 8643 bytes 1671169 jump DOCKER
}
chain OUTPUT {
type nat hook output priority -100; policy accept;
ip daddr != 127.0.0.0/8 xt match "addrtype" counter packets 3218 bytes 1143443 jump DOCKER
}
}
# Warning: table ip filter is managed by iptables-nft, do not touch!
table ip filter {
chain DOCKER {
iifname != "hassio" oifname "hassio" ip daddr 172.30.32.6 tcp dport 80 counter packets 0 bytes 0 accept
iifname != "hassio" oifname "hassio" ip daddr 172.30.33.0 tcp dport 8884 counter packets 0 bytes 0 accept
iifname != "hassio" oifname "hassio" ip daddr 172.30.33.0 tcp dport 8883 counter packets 0 bytes 0 accept
iifname != "hassio" oifname "hassio" ip daddr 172.30.33.0 tcp dport 1884 counter packets 0 bytes 0 accept
iifname != "hassio" oifname "hassio" ip daddr 172.30.33.0 tcp dport 1883 counter packets 0 bytes 0 accept
iifname != "hassio" oifname "hassio" ip daddr 172.30.33.2 tcp dport 10000 counter packets 0 bytes 0 accept
}
chain DOCKER-ISOLATION-STAGE-1 {
iifname "docker0" oifname != "docker0" counter packets 4656 bytes 588604 jump DOCKER-ISOLATION-STAGE-2
iifname "hassio" oifname != "hassio" counter packets 233708 bytes 21022055 jump DOCKER-ISOLATION-STAGE-2
counter packets 646717 bytes 76050134 return
}
chain DOCKER-ISOLATION-STAGE-2 {
oifname "docker0" counter packets 0 bytes 0 drop
oifname "hassio" counter packets 0 bytes 0 drop
counter packets 238364 bytes 21610659 return
}
chain FORWARD {
type filter hook forward priority filter; policy drop;
counter packets 646717 bytes 76050134 jump DOCKER-USER
counter packets 646717 bytes 76050134 jump DOCKER-ISOLATION-STAGE-1
oifname "docker0" xt match "conntrack" counter packets 4325 bytes 1695386 accept
oifname "docker0" counter packets 0 bytes 0 jump DOCKER
iifname "docker0" oifname != "docker0" counter packets 4656 bytes 588604 accept
iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
oifname "hassio" xt match "conntrack" counter packets 399647 bytes 52475750 accept
oifname "hassio" counter packets 4284 bytes 258567 jump DOCKER
iifname "hassio" oifname != "hassio" counter packets 233708 bytes 21022055 accept
iifname "hassio" oifname "hassio" counter packets 4284 bytes 258567 accept
}
chain DOCKER-USER {
counter packets 646717 bytes 76050134 return
}
}
table ip6 nat {
chain DOCKER {
}
}
table ip6 filter {
chain DOCKER {
}
chain DOCKER-ISOLATION-STAGE-1 {
iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-2
iifname "hassio" oifname != "hassio" counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-2
counter packets 0 bytes 0 return
}
chain DOCKER-ISOLATION-STAGE-2 {
oifname "docker0" counter packets 0 bytes 0 drop
oifname "hassio" counter packets 0 bytes 0 drop
counter packets 0 bytes 0 return
}
chain FORWARD {
type filter hook forward priority filter; policy drop;
counter packets 0 bytes 0 jump DOCKER-USER
}
chain DOCKER-USER {
counter packets 0 bytes 0 return
}
}
table ip netbird-rt {
chain netbird-rt-fwd {
type filter hook forward priority -99; policy accept;
ip saddr 10.4.4.0/24 ip daddr 100.76.0.0/16 counter packets 0 bytes 0 accept
ip saddr 100.76.0.0/16 ip daddr 10.4.4.0/24 counter packets 97 bytes 9772 accept
ct state ! established,related counter packets 8078 bytes 531152 accept
}
chain netbird-rt-nat {
type nat hook postrouting priority srcnat - 1; policy accept;
ip saddr 10.4.4.0/24 ip daddr 100.76.0.0/16 counter packets 0 bytes 0 masquerade
ip saddr 100.76.0.0/16 ip daddr 10.4.4.0/24 counter packets 0 bytes 0 masquerade
}
}
table ip6 netbird-rt {
chain netbird-rt-fwd {
type filter hook forward priority -99; policy accept;
ct state ! established,related counter packets 0 bytes 0 accept
}
chain netbird-rt-nat {
type nat hook postrouting priority srcnat - 1; policy accept;
}
}
table ip netbird-acl {
chain netbird-acl-input-filter {
type filter hook input priority filter; policy accept;
iifname "wt0" ip saddr 100.76.171.201 accept
iifname "wt0" ip saddr != 100.76.0.0/16 accept
iifname "wt0" drop
}
chain netbird-acl-output-filter {
type filter hook output priority filter; policy accept;
oifname "wt0" ip daddr 100.76.171.201 accept
oifname "wt0" ip daddr != 100.76.0.0/16 accept
oifname "wt0" drop
}
}
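To help readers follow the rule set above: the `netbird-rt` table can be approximated by hand roughly like this (a sketch for understanding only, not the agent's actual commands; `srcnat - 1` resolves to priority 99). One nftables detail that matters when debugging forwarding here: an `accept` verdict in one base chain does not skip the other chains hooked on `forward`, so a packet accepted in `netbird-rt-fwd` (priority -99) is still evaluated by the Docker-managed FORWARD chain (priority 0, policy drop).

```shell
# Hand-written approximation of the "table ip netbird-rt" shown above (sketch only;
# needs root and nftables support to actually run).
nft add table ip netbird-rt
nft 'add chain ip netbird-rt netbird-rt-fwd { type filter hook forward priority -99; policy accept; }'
nft add rule ip netbird-rt netbird-rt-fwd ip saddr 10.4.4.0/24 ip daddr 100.76.0.0/16 accept
nft add rule ip netbird-rt netbird-rt-fwd ip saddr 100.76.0.0/16 ip daddr 10.4.4.0/24 accept
nft 'add chain ip netbird-rt netbird-rt-nat { type nat hook postrouting priority 99; policy accept; }'
nft add rule ip netbird-rt netbird-rt-nat ip saddr 10.4.4.0/24 ip daddr 100.76.0.0/16 masquerade
nft add rule ip netbird-rt netbird-rt-nat ip saddr 100.76.0.0/16 ip daddr 10.4.4.0/24 masquerade
```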
can you tell me the algorithm? or what's wrong? or how else can i help?
We found an issue with v0.21.3 that causes route clients not to update the WireGuard endpoint on reconnections.
The PR #960 will solve that. In the meantime, can you check if the issue is related by running netbird down and netbird up on the route client machine?
The container seems to be working ok.
@lfarkas can you check the issue with the latest version?
still not working. but it seems to me you didn't read my description carefully and don't understand the problem. the remote peer knows exactly where the network is; what's more, it routes the given packet in the right direction. the problem is on the client, which should do the packet forwarding, masquerading and routing.
Hello @lfarkas, you are correct, I missed the tcpdump logs you've shared and focused only on the nftables rules. My apologies for missing it.
Some background: we are in the process of merging the route manager and the new ACL logic that manages the host firewalls. That is causing the different behavior you saw between fedora and the container, as the route manager tests first for iptables while the ACL logic tests for nftables.
We will align that in the next release.
Regarding the issue, I will run a local test on an OS similar to Home Assistant's with docker, and then I will get back to you.
If you don't mind, can you join our slack and reach out? It might also make troubleshooting faster.
I'm on the slack channel. Anyway, a short description/pseudocode of the iptables/nftables routing, masquerading, ACLs etc. would be useful. Maybe a README or a sort of operations manual for those who have linux networking knowledge.
working
I'm having the exact same issue on Homeassistant. This is what I did:
I can use netbird to connect to my homeassistant but not to the 192.168.1.0/24 range, for which I wanted to use the Home Assistant box as a routing peer. I keep seeing "2023-09-03T21:13:20+02:00 WARN client/internal/routemanager/client.go:119: the network 192.168.1.0/24 has not been assigned a routing peer as no peers from the list [rIPtd4/hST+ishmoHbL5Dumidt+TqpxCfvIcc5s07ik=] are currently connected" in my logs.
you can try my addon: https://github.com/lfarkas/addon-netbird
I unfortunately got "Can't create container from addon_a3656363_netbird: 400 Client Error for http+docker://localhost/v1.41/containers/create?name=addon_a3656363_netbird: Bad Request ("invalid CapAdd: unknown capability: "CAP_BPF"")" when starting... and in the process found out that the other repo I was using was an old version, see https://ibb.co/x3jrKXB. I vote for an official repo by @netbirddev :-)
do you use an up-to-date homeassistant? it runs a kernel which already supports these capabilities...
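For context: CAP_BPF was split out of CAP_SYS_ADMIN in Linux 5.8, and the "invalid CapAdd: unknown capability" error typically means the container runtime's capability list predates the name. A rough kernel-side check, as a sketch (assuming `uname -r` output starts with `major.minor`):

```shell
#!/bin/sh
# Sketch: check whether the running kernel is new enough for CAP_BPF (>= 5.8).
# The container runtime (docker) must also know the capability name.

# kernel_supports_cap_bpf: crude major.minor comparison against 5.8.
kernel_supports_cap_bpf() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  [ "$major" -gt 5 ] 2>/dev/null || { [ "$major" -eq 5 ] && [ "$minor" -ge 8 ]; } 2>/dev/null
}

if kernel_supports_cap_bpf "$(uname -r)"; then
  echo "kernel new enough for CAP_BPF"
else
  echo "kernel predates CAP_BPF (needs >= 5.8)"
fi
```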
it's not working again, with messages like these:
2023-09-11T18:16:30+02:00 WARN client/internal/wgproxy/factory_linux.go:15: failed to initialize ebpf proxy, fallback to user space proxy: field NbXdpProg: program nb_xdp_prog: load program: permission denied: 12: (2d) if r2 > r1 goto pc+148: R1 pointer comparison prohibited (21 line(s) omitted)
Hello @lfarkas Can you share the routes and firewall rules from the host?
# ip route
default via 10.3.3.1 dev enp1s0 proto static metric 100
10.3.3.0/24 dev enp1s0 proto kernel scope link src 10.3.3.40 metric 100
10.30.0.0/24 via 100.92.59.84 dev wt0
100.92.0.0/16 dev wt0 proto kernel scope link src 100.92.59.84
172.30.32.0/23 dev hassio proto kernel scope link src 172.30.32.1
172.30.232.0/23 dev docker0 proto kernel scope link src 172.30.232.1
192.168.0.0/16 via 100.92.59.84 dev wt0
# iptables -L -t nat -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER 0 -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER 0 -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE 0 -- 172.30.232.0/23 0.0.0.0/0
MASQUERADE 0 -- 172.30.32.0/23 0.0.0.0/0
MASQUERADE 6 -- 172.30.32.6 172.30.32.6 tcp dpt:80
MASQUERADE 6 -- 172.30.33.0 172.30.33.0 tcp dpt:8884
MASQUERADE 6 -- 172.30.33.0 172.30.33.0 tcp dpt:8883
MASQUERADE 6 -- 172.30.33.0 172.30.33.0 tcp dpt:1884
MASQUERADE 6 -- 172.30.33.0 172.30.33.0 tcp dpt:1883
MASQUERADE 6 -- 172.30.33.1 172.30.33.1 tcp dpt:9541
MASQUERADE 6 -- 172.30.33.2 172.30.33.2 tcp dpt:22
MASQUERADE 6 -- 172.30.33.6 172.30.33.6 tcp dpt:443
MASQUERADE 6 -- 172.30.33.7 172.30.33.7 tcp dpt:10000
Chain DOCKER (2 references)
target prot opt source destination
RETURN 0 -- 0.0.0.0/0 0.0.0.0/0
RETURN 0 -- 0.0.0.0/0 0.0.0.0/0
DNAT 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:4357 to:172.30.32.6:80
DNAT 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8884 to:172.30.33.0:8884
DNAT 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8883 to:172.30.33.0:8883
DNAT 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:1884 to:172.30.33.0:1884
DNAT 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:1883 to:172.30.33.0:1883
DNAT 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:9541 to:172.30.33.1:9541
DNAT 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 to:172.30.33.2:22
DNAT 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8234 to:172.30.33.6:443
DNAT 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:10000 to:172.30.33.7:10000
# iptables -L -t filter -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER 0 -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-1 0 -- 0.0.0.0/0 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER 0 -- 0.0.0.0/0 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER 0 -- 0.0.0.0/0 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT 6 -- 0.0.0.0/0 172.30.32.6 tcp dpt:80
ACCEPT 6 -- 0.0.0.0/0 172.30.33.0 tcp dpt:8884
ACCEPT 6 -- 0.0.0.0/0 172.30.33.0 tcp dpt:8883
ACCEPT 6 -- 0.0.0.0/0 172.30.33.0 tcp dpt:1884
ACCEPT 6 -- 0.0.0.0/0 172.30.33.0 tcp dpt:1883
ACCEPT 6 -- 0.0.0.0/0 172.30.33.1 tcp dpt:9541
ACCEPT 6 -- 0.0.0.0/0 172.30.33.2 tcp dpt:22
ACCEPT 6 -- 0.0.0.0/0 172.30.33.6 tcp dpt:443
ACCEPT 6 -- 0.0.0.0/0 172.30.33.7 tcp dpt:10000
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 0 -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-2 0 -- 0.0.0.0/0 0.0.0.0/0
RETURN 0 -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP 0 -- 0.0.0.0/0 0.0.0.0/0
DROP 0 -- 0.0.0.0/0 0.0.0.0/0
RETURN 0 -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN 0 -- 0.0.0.0/0 0.0.0.0/0
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service base-addon-banner: starting
-----------------------------------------------------------
Add-on: NetBird
Connect your devices into a single secure private WireGuard®-based mesh network.
-----------------------------------------------------------
Add-on version: v0.23.1
You are running the latest version of this add-on.
System: Home Assistant OS 10.5 (amd64 / generic-x86-64)
Home Assistant Core: 2023.9.2
Home Assistant Supervisor: 2023.08.3
-----------------------------------------------------------
Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.
-----------------------------------------------------------
s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service netbird: starting
s6-rc: info: service netbird successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
[08:05:15] INFO: Using Default Admin URL
[08:05:15] INFO: Using Default Management URL
[08:05:15] INFO: No Setup Key Set
[08:05:15] INFO: This client will only show up in dashboards it's already registered with.
[08:05:15] INFO: Using homeassistant as hostname
[08:05:15] INFO: No log level Set
[08:05:15] INFO: This client will use the default logging.
[08:05:15] INFO: Starting NetBird Client...
[08:05:15] INFO: netbird up --foreground-mode --config /config/netbird/config.json --log-file console --hostname homeassistant
2023-09-13T08:05:16+02:00 WARN client/internal/wgproxy/factory_linux.go:15: failed to initialize ebpf proxy, fallback to user space proxy: field NbXdpProg: program nb_xdp_prog: load program: permission denied: 12: (2d) if r2 > r1 goto pc+148: R1 pointer comparison prohibited (21 line(s) omitted)
2023-09-13T08:05:16+02:00 INFO client/internal/routemanager/firewall_linux.go:40: creating an nftables firewall manager for route rules
2023-09-13T08:05:16+02:00 INFO iface/tun_linux.go:15: create tun interface with kernel WireGuard support: wt0
2023-09-13T08:05:16+02:00 INFO signal/client/grpc.go:157: connected to the Signal Service stream
2023-09-13T08:05:16+02:00 INFO client/internal/connect.go:179: Netbird engine started, my IP is: 100.92.59.84/16
2023-09-13T08:05:17+02:00 INFO management/client/grpc.go:143: connected to the Management Service stream
2023-09-13T08:05:17+02:00 INFO client/ssh/server.go:248: starting SSH server on addr: 100.92.59.84:44338
2023-09-13T08:05:17+02:00 WARN client/internal/routemanager/client.go:119: the network 192.168.0.0/16 has not been assigned a routing peer as no peers from the list [GJvDLZutaKUqjmnmUUMf2blC56CsNuTm0Pv6oyA59io=] are currently connected
2023-09-13T08:05:17+02:00 WARN client/internal/routemanager/client.go:119: the network 10.30.0.0/24 has not been assigned a routing peer as no peers from the list [GJvDLZutaKUqjmnmUUMf2blC56CsNuTm0Pv6oyA59io=] are currently connected
2023-09-13T08:05:17+02:00 INFO client/internal/dns/systemd_linux.go:135: adding 1 search domains and 2 match domains. Search list: [netbird.cloud] , Match list: [example.com]
2023-09-13T08:05:17+02:00 INFO client/internal/acl/manager.go:67: ACL rules processed in: 11.446628ms, total rules count: 2
2023-09-13T08:05:18+02:00 INFO client/internal/peer/conn.go:337: connected to peer Xhtkc8TKKows1hGR+hvybTaWO1q1AYMtXct3KyoNf0o=, endpoint address: 10.3.3.5:51820
2023-09-13T08:05:20+02:00 INFO client/internal/peer/conn.go:337: connected to peer GJvDLZutaKUqjmnmUUMf2blC56CsNuTm0Pv6oyA59io=, endpoint address: 18.198.13.240:10443
2023-09-13T08:05:20+02:00 INFO client/internal/routemanager/client.go:122: new chosen route is cjbihoit2r9s73b86f2g with peer GJvDLZutaKUqjmnmUUMf2blC56CsNuTm0Pv6oyA59io= with score 1
2023-09-13T08:05:20+02:00 INFO client/internal/routemanager/client.go:122: new chosen route is cjbijhat2r9s73b86f3g with peer GJvDLZutaKUqjmnmUUMf2blC56CsNuTm0Pv6oyA59io= with score 1
2023-09-13T08:05:20+02:00 INFO client/internal/peer/conn.go:337: connected to peer N5ujg2P51y9lrp/0fbUWVmxDRIZjEkfp0ZTErg8jNVw=, endpoint address: 127.0.0.1:49681
still not working in the latest release
Curious when this will be fixed
@mlsmaycon I'm getting ENHANCE_YOUR_CALM and too_many_pings errors on the management server URL, while I'm using the cloud netbird.io solution. I haven't even tried to use routes yet.
So is Home Assistant actually supported?
The full debug info is:
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service base-addon-banner: starting
-----------------------------------------------------------
Add-on: NetBird
Connect your devices into a single secure private WireGuard®-based mesh network.
-----------------------------------------------------------
Add-on version: v0.23.4
You are running the latest version of this add-on.
System: Home Assistant OS 10.5 (aarch64 / yellow)
Home Assistant Core: 2023.9.0
Home Assistant Supervisor: 2023.09.2
-----------------------------------------------------------
Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.
-----------------------------------------------------------
s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
Log level is set to DEBUG
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service netbird: starting
s6-rc: info: service netbird successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
[17:12:44] INFO: Using Default Admin URL
[17:12:44] INFO: Using Default Management URL
[17:12:44] INFO: Using [omitted] as Setup Key
[17:12:44] INFO: Using homeassistant as hostname
[17:12:44] INFO: Using debug as logging level
[17:12:45] INFO: Starting NetBird Client...
[17:12:45] INFO: netbird up --foreground-mode --config /config/netbird/config.json --log-file console --setup-key [omitted] --hostname homeassistant --log-level debug
2023-09-30T17:12:45+02:00 DEBG client/internal/login.go:93: connecting to the Management service https://api.wiretrustee.com:443
2023-09-30T17:12:45+02:00 DEBG client/internal/login.go:63: connected to the Management service https://api.wiretrustee.com:443
2023-09-30T17:12:45+02:00 DEBG client/internal/login.go:93: connecting to the Management service https://api.wiretrustee.com:443
2023-09-30T17:12:45+02:00 DEBG client/internal/login.go:63: connected to the Management service https://api.wiretrustee.com:443
2023-09-30T17:12:45+02:00 DEBG client/internal/connect.go:99: conecting to the Management service api.wiretrustee.com:443
2023-09-30T17:12:45+02:00 DEBG client/internal/connect.go:107: connected to the Management service api.wiretrustee.com:443
2023-09-30T17:12:45+02:00 DEBG signal/client/grpc.go:91: connected to Signal Service: signal.netbird.io:443
2023-09-30T17:12:45+02:00 DEBG client/internal/wgproxy/proxy_ebpf.go:36: instantiate ebpf proxy
2023-09-30T17:12:45+02:00 DEBG client/internal/ebpf/ebpf/wg_proxy_linux.go:11: load ebpf WG proxy
2023-09-30T17:12:45+02:00 INFO client/internal/wgproxy/proxy_ebpf.go:79: local wg proxy listening on: 3128
2023-09-30T17:12:45+02:00 INFO client/internal/routemanager/firewall_linux.go:40: creating an nftables firewall manager for route rules
2023-09-30T17:12:45+02:00 INFO iface/tun_linux.go:15: create tun interface with kernel WireGuard support: wt0
2023-09-30T17:12:45+02:00 DEBG iface/tun_linux.go:58: adding device: wt0
2023-09-30T17:12:45+02:00 DEBG iface/tun_linux.go:109: adding address 100.86.18.18/16 to interface: wt0
2023-09-30T17:12:45+02:00 DEBG iface/tun_linux.go:74: setting MTU: 1280 interface: wt0
2023-09-30T17:12:45+02:00 DEBG iface/tun_linux.go:81: bringing up interface: wt0
2023-09-30T17:12:45+02:00 DEBG iface/iface.go:54: configuring Wireguard interface wt0
2023-09-30T17:12:45+02:00 DEBG iface/wg_configurer_nonandroid.go:26: adding Wireguard private key
2023-09-30T17:12:45+02:00 DEBG client/internal/acl/manager_create_linux.go:33: creating an nftables firewall manager for access control
2023-09-30T17:12:45+02:00 DEBG client/firewall/nftables/manager_linux.go:757: chain INPUT not found. Skiping add allow netbird rule
2023-09-30T17:12:45+02:00 DEBG client/internal/dns/host_linux.go:34: discovered mode is: 3
2023-09-30T17:12:45+02:00 DEBG client/internal/dns/systemd_linux.go:73: got dbus Link interface: /org/freedesktop/resolve1/link/_350 from net interface wt0 and index 50
2023-09-30T17:12:45+02:00 DEBG signal/client/grpc.go:136: signal connection state READY
2023-09-30T17:12:45+02:00 INFO signal/client/grpc.go:157: connected to the Signal Service stream
2023-09-30T17:12:45+02:00 DEBG client/internal/engine.go:549: connecting to Management Service updates stream
2023-09-30T17:12:45+02:00 INFO client/internal/connect.go:179: Netbird engine started, my IP is: 100.86.18.18/16
2023-09-30T17:12:45+02:00 DEBG management/client/grpc.go:116: management connection state READY
2023-09-30T17:12:45+02:00 INFO management/client/grpc.go:143: connected to the Management Service stream
2023-09-30T17:12:45+02:00 DEBG management/client/grpc.go:249: got an update message from Management Service
2023-09-30T17:12:46+02:00 DEBG client/internal/engine.go:575: got TURNs update from Management Service, updating
2023-09-30T17:12:46+02:00 DEBG client/internal/engine.go:557: got STUNs update from Management Service, updating
2023-09-30T17:12:46+02:00 DEBG client/internal/engine.go:606: got peers update from Management Service, total peers to connect to = 1
2023-09-30T17:12:46+02:00 INFO client/ssh/server.go:248: starting SSH server on addr: 100.86.18.18:44338
2023-09-30T17:12:46+02:00 DEBG client/internal/engine.go:825: creating peer connection OzJ2EpDDMURm3ggjxRbFg035NbEehmJr6lwejpJIzRE=
2023-09-30T17:12:46+02:00 DEBG client/internal/dns/service_listener.go:78: starting dns on 100.86.18.18:53
2023-09-30T17:12:46+02:00 INFO client/internal/dns/systemd_linux.go:121: configured 100.86.18.18:53 as main DNS forwarder for this peer
2023-09-30T17:12:46+02:00 INFO client/internal/dns/systemd_linux.go:135: adding 1 search domains and 0 match domains. Search list: [netbird.cloud] , Match list: []
2023-09-30T17:12:46+02:00 INFO client/internal/acl/manager.go:67: ACL rules processed in: 8.446698ms, total rules count: 2
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:235: trying to connect to peer OzJ2EpDDMURm3ggjxRbFg035NbEehmJr6lwejpJIzRE=
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:266: connection offer sent to peer OzJ2EpDDMURm3ggjxRbFg035NbEehmJr6lwejpJIzRE=, waiting for the confirmation
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:621: OnRemoteAnswer from peer OzJ2EpDDMURm3ggjxRbFg035NbEehmJr6lwejpJIzRE= on status Disconnected
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:287: received connection confirmation from peer OzJ2EpDDMURm3ggjxRbFg035NbEehmJr6lwejpJIzRE= running version 0.23.3 and with remote WireGuard listen port 51820
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:523: peer OzJ2EpDDMURm3ggjxRbFg035NbEehmJr6lwejpJIzRE= ICE ConnectionState has changed to Checking
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:506: discovered local candidate udp4 host 192.168.1.5:51820
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:506: discovered local candidate udp4 host 172.30.32.1:51820
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:506: discovered local candidate udp4 srflx 24.132.91.63:51820 related 0.0.0.0:51820
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:635: OnRemoteCandidate from peer OzJ2EpDDMURm3ggjxRbFg035NbEehmJr6lwejpJIzRE= -> udp4 host 192.168.1.11:51820
2023-09-30T17:12:47+02:00 DEBG iface/bind/udp_mux.go:351: ICE: registered 192.168.1.11:51820 for stMVVlfIBtMxxZWc
2023-09-30T17:12:47+02:00 DEBG iface/bind/udp_mux.go:351: ICE: registered 192.168.1.11:51820 for stMVVlfIBtMxxZWcturns:turn.netbird.io:443?transport=tcp
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:517: selected candidate pair [local <-> remote] -> [udp4 host 192.168.1.5:51820 <-> udp4 host 192.168.1.11:51820], peer OzJ2EpDDMURm3ggjxRbFg035NbEehmJr6lwejpJIzRE=
2023-09-30T17:12:47+02:00 DEBG iface/iface.go:77: updating interface wt0 peer OzJ2EpDDMURm3ggjxRbFg035NbEehmJr6lwejpJIzRE=, endpoint 192.168.1.11:51820
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:523: peer OzJ2EpDDMURm3ggjxRbFg035NbEehmJr6lwejpJIzRE= ICE ConnectionState has changed to Connected
2023-09-30T17:12:47+02:00 INFO client/internal/peer/conn.go:337: connected to peer OzJ2EpDDMURm3ggjxRbFg035NbEehmJr6lwejpJIzRE=, endpoint address: 192.168.1.11:51820
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:635: OnRemoteCandidate from peer OzJ2EpDDMURm3ggjxRbFg035NbEehmJr6lwejpJIzRE= -> udp4 srflx 24.132.91.63:1453 related 0.0.0.0:51820
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:506: discovered local candidate udp4 relay 18.198.13.240:25264 related 192.168.1.5:59204
2023-09-30T17:12:47+02:00 DEBG client/internal/peer/conn.go:635: OnRemoteCandidate from peer OzJ2EpDDMURm3ggjxRbFg035NbEehmJr6lwejpJIzRE= -> udp4 relay 18.198.13.240:42269 related 192.168.1.11:60626
2023/09/30 17:13:46 ERROR: [transport] Client received GoAway with error code ENHANCE_YOUR_CALM and debug data equal to ASCII "too_many_pings".
2023-09-30T17:13:46+02:00 DEBG management/client/grpc.go:245: disconnected from Management Service sync stream: rpc error: code = Unavailable desc = closing transport due to: connection error: desc = "error reading from server: EOF", received prior goaway: code: ENHANCE_YOUR_CALM, debug data: "too_many_pings"
2023-09-30T17:13:46+02:00 WARN management/client/grpc.go:158: disconnected from the Management service but will retry silently. Reason: rpc error: code = Unavailable desc = closing transport due to: connection error: desc = "error reading from server: EOF", received prior goaway: code: ENHANCE_YOUR_CALM, debug data: "too_many_pings"
@lfarkas FYI
Hello @twl9, thanks for reporting the issue. These errors don't cause any problems for the client, but we will adjust our keepalive configuration on the client and server to prevent them.
routing is still not working:
2023-10-01T21:01:14+02:00 WARN client/internal/routemanager/client.go:119: the network 192.168.0.0/16 has not been assigned a routing peer as no peers from the list [GJvDLZutaKUqjmnmUUMf2blC56CsNuTm0Pv6oyA59io=] are currently connected
2023-10-01T21:01:14+02:00 WARN client/internal/routemanager/client.go:119: the network 10.30.0.0/24 has not been assigned a routing peer as no peers from the list [GJvDLZutaKUqjmnmUUMf2blC56CsNuTm0Pv6oyA59io=] are currently connected
2023/10/01 21:02:14 ERROR: [transport] Client received GoAway with error code ENHANCE_YOUR_CALM and debug data equal to ASCII "too_many_pings".
2023-10-01T21:02:14+02:00 WARN management/client/grpc.go:158: disconnected from the Management service but will retry silently. Reason: rpc error: code = Unavailable desc = closing transport due to: connection error: desc = "error reading from server: EOF", received prior goaway: code: ENHANCE_YOUR_CALM, debug data: "too_many_pings"
@mlsmaycon any update on this one?
Hello, we've found the bug. The PR #1266 will fix it.
Describe the problem: Inside Home Assistant you can only run netbird in a docker container (with --net host and the NET_ADMIN and NET_RAW capabilities). The vpn itself is working, but the network routes are not. IMHO it has some connection with the iptables/nftables usage. In the kernel, ip_forwarding is set, yet e.g. an icmp (ping) packet arrives on the wt0 interface but is not sent out on the local ethernet interface. Here is the relevant part of the log:
the strange thing is that running netbird in a docker container (on my fedora) the network routes are working, so the problem is probably not in the docker part. Is there any technical description of how the routing and masquerading should work? I see in the code there are iptables and nftables versions, and sometimes both are used (e.g. on my fedora both iptables and nftables are modified by netbird). How can I help to find the problem? With the tailscale client the network routes are working, so I suppose the root of the problem is not home assistant's hassio os and not the docker container itself. When and how do you plan to use iptables, and when nftables? the routing tables are the same, so I assume the problem is not in the routing tables themselves.
i see this line in the log:
client/internal/routemanager/firewall_linux.go:49: iptables is not supported, using nftables
while on my fedora it is:
client/internal/routemanager/firewall_linux.go:36: iptables is supported
even though nftables is of course also supported on fedora! what's more, `iptables` does NOT contain any of the modifications on my fedora desktop, but `nft` does. so I don't really understand the log lines.
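One likely explanation for the fedora observation: on modern fedora the `iptables` command is usually the nftables-backed shim (iptables-nft), so rules added through it appear in `nft list ruleset` while the legacy tables stay empty. A quick way to check which variant a host has, as a sketch:

```shell
#!/bin/sh
# Sketch: distinguish legacy iptables from the nftables-backed shim.

# classify_iptables: decide from the `iptables --version` banner.
classify_iptables() {
  case "$1" in
    *"(nf_tables)"*) echo "nft-shim" ;;   # iptables-nft: rules land in nftables
    *"(legacy)"*)    echo "legacy"   ;;   # classic x_tables backend
    *)               echo "unknown"  ;;
  esac
}

classify_iptables "$(iptables --version 2>/dev/null)"
```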