cni0 is created by kubelet; you could create a pod on the node where you want cni0 to appear.
The 2nd node added (which doesn't have cni0) does have pods on it: kube-flannel, kube-proxy and an apparently working kubelet, but no cni0 is created. I'm being slow: if kubelet creates cni0, how can I see the call from Flannel to kubelet to check that it's OK?
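One way to check that end of things is to watch the kubelet and container runtime logs on the affected node while a pod is scheduled there; a minimal sketch, assuming systemd-managed services:

journalctl -u kubelet -u containerd --since "10 minutes ago" | grep -iE 'cni|flannel'   # look for CNI ADD calls or plugin errors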
flannel and kube-proxy use hostNetwork, so they don't go through the CNI plugin.
You can create a regular pod using a nodeSelector to target that node, for example as sketched below.
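A minimal sketch of such a pod, assuming the affected node's hostname is node2 (substitute your own); scheduling it should force kubelet/containerd to invoke the CNI plugin and create cni0 on that node:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cni0-test
spec:
  nodeSelector:
    kubernetes.io/hostname: node2    # hostname of the node that is missing cni0
  containers:
  - name: sleep
    image: busybox
    command: ["sleep", "3600"]
EOF

Once it reaches Running, cni0 and a veth pair should appear on that node; if it sticks in ContainerCreating, kubectl describe pod cni0-test usually shows the CNI error.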
More investigation shows no veth interface being created, and ip netns is empty on these master nodes. CoreDNS is running on the 2 worker nodes, so it looks as though Flannel is working correctly there, with the limited info available. Not clear why the network namespace list should be empty.
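Worth noting that ip netns only lists named namespaces, and depending on how the runtime registers pod namespaces it can be empty even when pods are running; a couple of checks that don't rely on it (a sketch):

ip -o link show type veth        # veth pairs created for pods on this node
ls /var/run/netns 2>/dev/null    # named namespaces, if the runtime registers them here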
cni0 is created by kubelet. Can you show the output of the following commands?
docker ps -a | grep -P k8s_ | grep -Ev /pause
ip r s
ip a s
Hi: this command sequence (and its component parts) doesn't return anything. The system doesn't use docker (or much of podman). Working with a colleague yesterday, I learnt that in our system it's containerd that calls flannel, not kubelet. After some fiddling around, one system has created cni0 but no flannel.1 interface; it seems the install-cni-plugin container isn't writing /run/flannel/subnet.env, so flannel doesn't do anything.
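On a containerd-based node the docker commands above won't show anything; crictl gives an equivalent view, and the subnet file flannel is expected to write can be inspected directly. A sketch, assuming the default paths and container names:

crictl ps -a | grep -i flannel     # flannel containers as containerd sees them
crictl logs <flannel-container-id> # look for errors around writing the subnet file
cat /run/flannel/subnet.env        # normally contains FLANNEL_NETWORK, FLANNEL_SUBNET, FLANNEL_MTU, FLANNEL_IPMASQ
ls /etc/cni/net.d/ /opt/cni/bin/   # confirm the install containers copied the CNI config and binary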
Which CNI API version does Flannel support? 1.0.0 or something older, perhaps 0.3.1?
Many thanks for your kind assistance.
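Regarding the version question above, two checks on a node show what the config declares and what the plugin binary claims to support; a sketch, assuming the default install paths:

cat /etc/cni/net.d/10-flannel.conflist     # "cniVersion" field declared by the config
CNI_COMMAND=VERSION /opt/cni/bin/flannel   # versions the plugin binary reports supporting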
Expected Behavior
Flannel is expected to create the cni0 interface as well as the flannel.1 interface. It creates cni0 on the first node (master) as expected, but fails to create cni0 on any nodes added after that. This previously worked reliably using the same scripted build on RHEL 8.8.
Current Behavior
cni0 is created on the first node of the cluster, but not on any nodes added after that.
Possible Solution
Given the iptables => nf_tables issues over the last 18 months, might this be related to updated modules?
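To see which backend each node is actually using, iptables reports it in its version string; a quick sketch:

iptables --version                       # shows (legacy) or (nf_tables) after the version number
lsmod | grep -E 'nf_tables|ip_tables'    # which kernel modules are currently loaded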
Steps to Reproduce (for bugs)
Context
Unable to rebuild nodes to change storage (on-prem)
Your Environment