okd-project / okd

The self-managing, auto-upgrading, Kubernetes distribution for everyone
https://okd.io

No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? #1966

Open fcolomas opened 2 weeks ago

fcolomas commented 2 weeks ago

Describe the bug

I'm trying to install a compact OpenShift cluster:

Bootstrap Node
3 Master Nodes
2 Worker Nodes
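
For reference, an install-config.yaml for this kind of topology would look roughly like the sketch below; the base domain, cluster name, and the OCI external-platform block are assumptions inferred from the node names and the agent-for-OCI flow, so adjust them to your environment:

apiVersion: v1
baseDomain: example.net            # assumed from the node names (ocp4.example.net)
metadata:
  name: ocp4
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 2
networking:
  networkType: OVNKubernetes       # default CNI for OKD 4.15
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  external:                        # assumption: external platform used for agent installs on OCI
    platformName: oci
pullSecret: '{"auths":{...}}'      # placeholder
sshKey: 'ssh-ed25519 ...'          # placeholder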

Now, on the bastion host, I run the following command to check which nodes are Ready:

oc get nodes

NAME                        STATUS     ROLES                  AGE   VERSION
master01.ocp4.example.net   NotReady   control-plane,master   16h   v1.27.6+b49f9d1
master02.ocp4.example.net   Ready      control-plane,master   17h   v1.27.6+b49f9d1
master03.ocp4.example.net   Ready      control-plane,master   17h   v1.27.6+b49f9d1
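
A generic first check for this symptom is whether the network cluster operator and the OVN-Kubernetes pods on the NotReady node are healthy (the namespaces below are the standard ones; the node name matches the output above):

oc get clusteroperators network
oc get pods -n openshift-network-operator -o wide
oc get pods -n openshift-ovn-kubernetes -o wide | grep master01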

Trying to troubleshoot the master01 node:

[root@master01 ~]# journalctl -f _SYSTEMD_UNIT=kubelet.service

I get the following error

Dec 26 13:55:09 master01.ocp4.example.net kubenswrapper[6041]: E1226 13:55:09.237534 6041 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dfvs9" podUID=4cfd7421-4506-4b88-b199-efa7210eef95
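
That message means the kubelet found no CNI configuration on the node. On the affected master it is worth confirming that the directory really is empty and whether the OVN-Kubernetes containers are running at all (generic checks, run as root):

# does any CNI config exist on this node?
ls -l /etc/kubernetes/cni/net.d/

# are the OVN-Kubernetes containers present on this node?
crictl ps -a | grep -i ovn

# recent CNI-related messages from the container runtime
journalctl -u crio --since "1 hour ago" | grep -i cni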

Version

4.15.0-0.okd-2024-03-10-010116

How reproducible

100% using the Agent installer for OCI
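
For anyone reproducing this: the agent-based installer builds a bootable ISO from install-config.yaml and agent-config.yaml; the asset directory name below is just an example.

# install-config.yaml and agent-config.yaml live in the assets directory
openshift-install agent create image --dir ./ocp4-assets
# boot the generated agent ISO on the bootstrap, master, and worker instances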

Log bundle

For reference, here is a discussion where multiple users report the same problem: https://access.redhat.com/discussions/7050416

I also hit this with the SCN topology; after applying the solution proposed in the comments, it works.

fcolomas commented 2 weeks ago

Workaround from Konstantin Rebrov:

I had a similar issue with OCP 4.14 and fixed it by manually creating a config for the CNI:

cat << EOF | tee /etc/kubernetes/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "promiscMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [{ "subnet": "10.128.0.0/14" }]
        ],
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true },
      "externalSetMarkChain": "KUBE-MARK-MASQ"
    }
  ]
}
EOF

Here you need to modify the network settings (for example the subnet) to match your cluster. Then restart the services on the master nodes:

systemctl restart crio; systemctl restart kubelet; systemctl status crio
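
A quick way to confirm the workaround took effect (standard checks; adjust the node name to the affected master):

# on the affected master: the CNI error should stop appearing
journalctl -f _SYSTEMD_UNIT=kubelet.service | grep -i cni

# from the bastion host: the node should flip to Ready
oc get nodes
oc get pods -n openshift-network-diagnostics -o wide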