Hey guys, could you share your routing or iptables information? I think there may be no corresponding route.
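For example, something like the following from an affected node would help (standard iproute2/iptables commands):

# Routing tables for both address families
ip route show
ip -6 route show
# Full firewall dump
iptables-save
ip6tables-save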
It seems like we're running into the same issue after upgrading to Debian 12.5 (kernel 6.1.76-1).
kubeadm version: v1.28.7
flannel version: v0.24.4
cni version: v1.2.0
We have flannel set up in dual-stack mode on a kubeadm cluster using the following net-conf.json:
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  },
  "EnableIPv6": true,
  "IPv6Network": "2001:db8:42:0::/56"
}
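As a quick sanity check that flannel actually picked up the dual-stack config on a node, the subnet lease file can be inspected (file path and variable names as in a stock flannel install; verify on your setup):

# Written by flanneld on each node after it acquires its leases
cat /run/flannel/subnet.env
# Expected to contain something like:
# FLANNEL_NETWORK=10.244.0.0/16
# FLANNEL_IPV6_NETWORK=2001:db8:42::/56
# FLANNEL_SUBNET=10.244.9.1/24
# FLANNEL_IPV6_SUBNET=2001:db8:42:9::1/64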
Interestingly enough, the routes were all working directly after the upgrade, but after restarting a node, the IPv6 routes between pods on that node and pods on other nodes were down. The IPv4 routes, however, kept working. After a rollout restart of the kube-flannel DaemonSet (see the command below) the routes started working again, but any node restart kills them again.
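The workaround we run after each node restart (DaemonSet name and namespace as in the stock kube-flannel manifest; adjust if yours differ):

kubectl -n kube-flannel rollout restart daemonset/kube-flannel-ds
kubectl -n kube-flannel rollout status daemonset/kube-flannel-ds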
ip -6 route shows the following both before and after the node restart:
2001:db8:42::/64 via 2001:db8:42:: dev flannel-v6.1 metric 1024 onlink pref medium
2001:db8:42:2::/64 via 2001:db8:42:2:: dev flannel-v6.1 metric 1024 onlink pref medium
2001:db8:42:4::/64 via 2001:db8:42:4:: dev flannel-v6.1 metric 1024 onlink pref medium
2001:db8:42:5::/64 via 2001:db8:42:5:: dev flannel-v6.1 metric 1024 onlink pref medium
2001:db8:42:6::/64 via 2001:db8:42:6:: dev flannel-v6.1 metric 1024 onlink pref medium
2001:db8:42:7::/64 via 2001:db8:42:7:: dev flannel-v6.1 metric 1024 onlink pref medium
2001:db8:42:8::/64 via 2001:db8:42:8:: dev flannel-v6.1 metric 1024 onlink pref medium
2001:db8:42:9:: dev flannel-v6.1 proto kernel metric 256 pref medium
2001:db8:42:9::/64 dev cni0 proto kernel metric 256 pref medium
2001:db8:42:a::/64 via 2001:db8:42:a:: dev flannel-v6.1 metric 1024 onlink pref medium
2001:db8:42:b::/64 via 2001:db8:42:b:: dev flannel-v6.1 metric 1024 onlink pref medium
2001:db8:42:e::/64 via 2001:db8:42:e:: dev flannel-v6.1 metric 1024 onlink pref medium
<SERVER_IPv6>::/64 dev enp0s31f6 proto kernel metric 256 pref medium
fe80::/64 dev enp0s31f6 proto kernel metric 256 pref medium
fe80::/64 dev flannel.1 proto kernel metric 256 pref medium
fe80::/64 dev flannel-v6.1 proto kernel metric 256 pref medium
fe80::/64 dev cni0 proto kernel metric 256 pref medium
fe80::/64 dev vethb74a35fd proto kernel metric 256 pref medium
fe80::/64 dev veth653394f6 proto kernel metric 256 pref medium
default via fe80::1 dev enp0s31f6 metric 1024 onlink pref medium
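Since the routing table looks identical before and after the restart, the break may be in the VXLAN forwarding state rather than in the routes themselves. It may be worth comparing these before and after a restart as well (standard iproute2 commands; the flannel-v6.1 device name is taken from the output above):

# Per-peer VXLAN forwarding entries (one per remote node VTEP)
bridge fdb show dev flannel-v6.1
# Neighbor entries for the remote flannel-v6.1 gateway addresses
ip -6 neigh show dev flannel-v6.1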
I have set up a k8s cluster across the public network, using the Flannel plugin with the VXLAN backend. Machine A and machine B are on different private networks, and A and B can ping each other. However, machine A cannot ping the pod running on machine B. Packet capture analysis shows that the pod on machine B receives the ICMP echo request and sends the reply, but the reply never arrives at the eth0 network card on machine A.
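One way to narrow this down: flannel's VXLAN backend encapsulates pod traffic in UDP (port 8472 by default), so capturing on the public interfaces of both machines shows whether the reply leaves machine B encapsulated and whether it survives the path (interface name eth0 taken from the description above):

# Run on both machines while pinging the pod from machine A
tcpdump -ni eth0 udp port 8472

If the encapsulated reply leaves B but never reaches A, something on the path between the public IPs (a firewall or the provider) dropping UDP/8472 is the usual suspect.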