Closed 11 months ago
I am confused by your setup. Are you using kubernetes or flannel standalone?
If you use k8s, the etcd backend is not used by flannel, so anything written there will be ignored. If you deploy on k8s, the best way to deploy flannel is to use the manifest provided in the repo.
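For reference, deploying from the manifest is a single `kubectl apply`. The URL below is an assumption based on the flannel-io repo layout; in production you should pin a specific release tag instead of `master`:

```shell
# Deploy flannel as a DaemonSet from the upstream manifest.
# URL assumes the current flannel-io/flannel repo layout; pin a release in production.
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Verify the flannel pods come up on every node:
kubectl -n kube-flannel get pods -o wide
```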
Hi, thanks for your response!
I have experience using Kubernetes v1.18 with flannel v0.11.0 as its CNI plugin. Flannel was also deployed via systemd. I stored flannel's network configuration in etcd, and it worked well: the subnet for each worker node was successfully stored in etcd, and the default route was set for pods.
This time I wanted to upgrade my Kubernetes cluster comprehensively, so I followed my previous experience. However, the version jump in this flannel upgrade is large, and I may have missed some important update notes.
Perhaps I should follow your suggestion and try installing flannel using manifests.
I have deployed flannel using the manifest, and it works fine currently.
Note that you need to configure the pod cidr related parameters in the kube-controller-manager.
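Concretely, the controller manager has to be told to allocate per-node pod CIDRs. A sketch of the relevant flags, using the same network and subnet length as the config below (flag names are the standard upstream kube-controller-manager ones; the rest of the invocation is elided):

```shell
# Excerpt from the kube-controller-manager command line.
# --allocate-node-cidrs makes the controller assign a PodCIDR to each Node;
# flannel (in kube subnet manager mode) reads that PodCIDR instead of etcd.
kube-controller-manager \
  --allocate-node-cidrs=true \
  --cluster-cidr=172.24.0.0/13 \
  --node-cidr-mask-size=22 \
  ...
```

After restarting the controller manager, `kubectl get node <name> -o jsonpath='{.spec.podCIDR}'` should show the /22 assigned to each node.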
Expected Behavior
Pods should have a default route so that they can connect to pods on other worker nodes.
Current Behavior
There is no default route on the pods, so pods on different worker nodes cannot reach each other.
Possible Solution
Steps to Reproduce (for bugs)
```json
{
  "Network": "172.24.0.0/13",
  "SubnetLen": 22,
  "Backend": {
    "Type": "vxlan",
    "VNI": 1
  }
}
```
```shell
[root@worker-2 ~]# cat /etc/cni/net.d/01-cri-dockerd.json
{
  "cniVersion": "0.4.0",
  "name": "dbnet",
  "type": "bridge",
  "bridge": "docker0",
  "ipam": {
    "type": "host-local",
    "subnet": "172.24.68.0/22",
    "gateway": "172.24.68.1"
  }
}
```
key: `/flannel/network/netconfig`
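For completeness, one way to write the network configuration above under that key. This assumes the etcd v2 API, which older flannel versions (such as v0.11.0) expect, and that flannel is started with an `-etcd-prefix` matching this key; check your flannel release notes for etcd v3 support:

```shell
# Store the flannel netconfig in etcd (v2 API assumed).
ETCDCTL_API=2 etcdctl set /flannel/network/netconfig \
  '{ "Network": "172.24.0.0/13", "SubnetLen": 22, "Backend": { "Type": "vxlan", "VNI": 1 } }'

# Read it back to verify:
ETCDCTL_API=2 etcdctl get /flannel/network/netconfig
```

Note that none of this applies when flannel runs from the k8s manifest, since it then reads its config from the `kube-flannel-cfg` ConfigMap rather than etcd.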
```json
{
  "name": "cbr0",
  "type": "flannel",
  "cniVersion": "0.4.0",
  "delegate": {
    "hairpinMode": true,
    "isDefaultGateway": true
  }
}
```