siredmar opened 8 months ago
Hello @siredmar This change is probably what causes your issue. In this release we implemented a really important firewall modification, which required some refactoring. Could you send me an example of the iptables rules that cause your issue after the NetBird agent starts?
Hi @pappz thanks for responding to my issue!
Here is some information for you to understand the context and the use case.
We are talking about a small embedded Linux device that runs as a Kubernetes node. For a CNI plugin like flannel to start up properly, there must be an interface that is constantly up and running. So for the workload on the device to run properly even if the device reboots while staying offline, there must be some interface that meets flannel's requirements.
So, on boot-up a dummy interface called edge0 is created using this script.
After connecting to NetBird, once wt0 is created, this script is run via some udev rules.
The script reads the IP address from wt0 and stores it in a file, which flannel mounts and uses as its --public-ip argument. Flannel also interacts with edge0 (via the --iface parameter). Here you find the iptables rules that redirect all incoming and outgoing traffic between wt0 and edge0. When the kubelet is started, it also binds to 192.168.168.1 on edge0. This means that both flannel and the kubelet use the VPN.
These are the only firewall rules we set (kube-proxy excluded), and like I said, with NetBird 0.24.3 this worked like a charm.
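The boot-up and udev scripts themselves are not shown in this thread, so the following is only a hypothetical sketch of the pieces described above: creating the edge0 dummy interface and extracting wt0's address for flannel. The /24 mask is an assumption, and a captured sample line stands in for live `ip` output so the parsing step is runnable without the device.

```shell
# Hypothetical sketch (the real scripts are not shown in this issue).
# Step 1: commands the boot-up script would need to create the dummy
# interface. They require root, so they are only printed here.
cat <<'EOF'
ip link add edge0 type dummy
ip addr add 192.168.168.1/24 dev edge0
ip link set edge0 up
EOF

# Step 2: parse the IPv4 address assigned to wt0 for flannel's --public-ip.
# A sample line stands in for live `ip -4 -o addr show dev wt0` output.
sample='5: wt0    inet 100.127.181.129/16 scope global wt0'
wt0_ip=$(printf '%s\n' "$sample" | awk '{split($4, a, "/"); print a[1]}')
echo "$wt0_ip"
```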
Thank you for the detailed explanation. The key difference is that the older agent operated only on the INPUT and OUTPUT chains; since this version we extended the filtering to routed (forwarded) traffic as well. Maybe using insert instead of append could solve your problem.
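The insert-instead-of-append suggestion would look roughly like this for the FORWARD rules from this thread (interface names taken from the thread): -I places a rule at a given position at the head of the chain, so it is evaluated before anything appended later. Applying the rules needs root, so this sketch only collects and prints them.

```shell
# Sketch of the insert (-I) variant of the FORWARD rules from this thread.
# Applying them requires root, so they are printed instead of executed.
rules=$(cat <<'EOF'
iptables -I FORWARD 1 -i wt0 -o edge0 -p tcp --dport 1:65535 -j ACCEPT
iptables -I FORWARD 2 -i edge0 -o wt0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
EOF
)
printf '%s\n' "$rules"
```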
I tried
iptables -A FORWARD -i wt0 -o edge0 -p tcp --dport 1:65535 -j ACCEPT
iptables -A FORWARD -i edge0 -o wt0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A PREROUTING -i wt0 -p tcp --dport 1:65535 -j DNAT --to-destination 192.168.168.1
iptables -t nat -A POSTROUTING -o wt0 -j MASQUERADE
But the behavior is the same. Flannel is not able to communicate using my edge0 interface.
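As a debugging aside (not something from the thread): whether manually added rules take effect depends on where they sit relative to the rules the agent installs, which can be checked by listing the chain with rule positions. The command needs root, so it is only printed here.

```shell
# Print the command used to inspect rule ordering in the FORWARD chain.
check_cmd='iptables -L FORWARD -n -v --line-numbers'
echo "$check_cmd"
```

Note that ordering inside the iptables FORWARD chain alone may not tell the whole story here, since (as the dump below shows) the agent installs its rules in its own nftables table.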
The NetBird agent supports both nftables and iptables. I am not sure which one is preferred in your case, but could you send me the output of this command:
iptables -L -n
Sure. I'm not an iptables expert. Here's the output:
# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpts:1:65535
DOCKER-USER all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpts:1:65535
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP all -- !127.0.0.0/8 127.0.0.0/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination
Here is also the output of nft list ruleset
running netbird 0.25.5
table ip netbird {
set nb0000001 {
type ipv4_addr
flags dynamic
elements = { 0.0.0.0 }
}
set nb0000002 {
type ipv4_addr
flags dynamic
elements = { 0.0.0.0 }
}
chain netbird-rt-fwd {
}
chain netbird-rt-nat {
type nat hook postrouting priority srcnat - 1; policy accept;
}
chain netbird-acl-input-rules {
iifname "wt0" accept
}
chain netbird-acl-output-rules {
oifname "wt0" accept
}
chain netbird-acl-input-filter {
type filter hook input priority filter; policy accept;
iifname "wt0" ip saddr 100.127.0.0/16 ip daddr != 100.127.0.0/16 accept
iifname "wt0" ip saddr != 100.127.0.0/16 ip daddr 100.127.0.0/16 accept
iifname "wt0" ip saddr 100.127.0.0/16 ip daddr 100.127.0.0/16 jump netbird-acl-input-rules
iifname "wt0" drop
}
chain netbird-acl-output-filter {
type filter hook output priority filter; policy accept;
oifname "wt0" ip saddr != 100.127.0.0/16 ip daddr 100.127.0.0/16 accept
oifname "wt0" ip saddr 100.127.0.0/16 ip daddr != 100.127.0.0/16 accept
oifname "wt0" ip saddr 100.127.0.0/16 ip daddr 100.127.0.0/16 jump netbird-acl-output-rules
oifname "wt0" drop
}
chain netbird-acl-forward-filter {
type filter hook forward priority filter; policy accept;
iifname "wt0" jump netbird-rt-fwd
oifname "wt0" jump netbird-rt-fwd
iifname "wt0" meta mark 0x000007e4 accept
oifname "wt0" meta mark 0x000007e4 accept
iifname "wt0" jump netbird-acl-input-rules
iifname "wt0" drop
}
chain netbird-acl-prerouting-filter {
type filter hook prerouting priority mangle; policy accept;
iifname "wt0" ip saddr != 100.127.0.0/16 ip daddr 100.127.181.129 meta mark set 0x000007e4
}
}
See details for full rules output
I can see some netbird entries.
These are the nft rules for netbird-acl running NetBird 0.24.3:
table ip netbird-acl {
set nb0000001 {
type ipv4_addr
flags dynamic
elements = { 0.0.0.0 }
}
set nb0000002 {
type ipv4_addr
flags dynamic
elements = { 0.0.0.0 }
}
chain netbird-acl-input-filter {
type filter hook input priority filter; policy accept;
iifname "wt0" accept
iifname "wt0" ip saddr != 100.127.0.0/16 accept
iifname "wt0" drop
}
chain netbird-acl-output-filter {
type filter hook output priority filter; policy accept;
oifname "wt0" accept
oifname "wt0" ip daddr != 100.127.0.0/16 accept
oifname "wt0" drop
}
}
See details for full rules output
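Comparing the two dumps above: under 0.24.3 the netbird-acl table only hooked the input and output paths, while 0.25.5 adds a forward hook ending in iifname "wt0" drop, so forwarded traffic arriving on wt0 now has to match one of the accept rules. One way to see exactly which rule handles flannel's packets is nftables' trace facility (a debugging sketch using standard nft tooling, not something from this thread; both commands need root, so they are printed rather than executed).

```shell
# Enable tracing for packets entering the netbird forward chain, then
# watch the trace events. Printed only, since both commands need root.
trace_cmds=$(cat <<'EOF'
nft insert rule ip netbird netbird-acl-forward-filter meta nftrace set 1
nft monitor trace
EOF
)
printf '%s\n' "$trace_cmds"
```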
@pappz do you have any idea?
I run a Kubernetes cluster that uses flannel as its CNI. I have a dummy interface called edge0 and some iptables rules that forward incoming/outgoing traffic to/from the NetBird interface wt0. I narrowed it down: version 0.24.3 works, and any later version breaks this behavior; flannel is not able to connect to the other peers even though pings to the other peers work just fine.
So my question is: can a project maintainer tell me what changes were made between 0.24.3 and 0.24.4 that may break things? Is there a way (maybe an undocumented flag or environment variable) to make current NetBird releases behave like 0.24.3?