**What you expected to happen:** Pods to be scheduled successfully. Instead, they were not scheduled, and the errors below were appearing in `/var/log/messages`:
```
May 20 14:13:20 ace-func1-3n1 kubelet[1683]: E0520 14:13:20.399208 1683 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4fd3ede-c09e-4a3f-abd2-8eeb6ba0b3da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d02259dca386dfd70b0d6433452e99a880b57e849f79f74993ace14d90e55fb9\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (delete): Multus: error getting k8s client: GetK8sClient: failed to get context for the kubeconfig /etc/cni/net.d/multus.d/multus.kubeconfig: error loading config file \\\"/etc/cni/net.d/multus.d/multus.kubeconfig\\\": yaml: line 7: mapping values are not allowed in this context\"" pod="kube-system/coredns-58f4964b57-f8wkh" podUID="d4fd3ede-c09e-4a3f-abd2-8eeb6ba0b3da"
```
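The parse failure points at the kubeconfig Multus generates for itself. For reference, a well-formed `/etc/cni/net.d/multus.d/multus.kubeconfig` typically has roughly the following shape (a sketch from memory with placeholder values; the exact fields and ordering vary by Multus version). If the apiserver address is empty when this file is generated, the `server:` URL is left malformed and loading the file can fail:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://<apiserver-ip>:<apiserver-port>   # <apiserver-ip> is empty in this bug
    certificate-authority-data: <base64-encoded CA>   # placeholder
users:
- name: multus
  user:
    token: "<serviceaccount token>"                   # placeholder
contexts:
- name: multus-context
  context:
    cluster: local
    user: multus
current-context: multus-context
```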
**How to reproduce it (as minimally and precisely as possible):** Reboot a node in a multi-node Kubernetes cluster.
**Anything else we need to know?:**

**Environment:**
- Multus version: `ghcr.io/k8snetworkplumbingwg/multus-cni:v4.0.2`
- Kubernetes version (use `kubectl version`): Client Version: v1.29.1, Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3, Server Version: v1.29.1
- OS release:
  ```
  REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 9"
  REDHAT_BUGZILLA_PRODUCT_VERSION=9.3
  REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
  REDHAT_SUPPORT_PRODUCT_VERSION="9.3"
  ```
- File of `/etc/cni/net.d/`:
- File of `/etc/cni/multus/net.d`:
- NetworkAttachment info (use `kubectl get net-attach-def -o yaml`):
- Target pod yaml info (with annotation, use `kubectl get pod <podname> -o yaml`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2024-05-20T09:02:12Z"
  generateName: kube-multus-ds-
  labels:
    app: multus
    controller-revision-hash: 789c4467b8
    name: multus
    pod-template-generation: "1"
    tier: node
  name: kube-multus-ds-hxtkt
  namespace: kube-system
  ownerReferences:
```
**What happened:** The apiserver IP address was empty in the multus kubeconfig file (`/etc/cni/net.d/multus.d/multus.kubeconfig`).
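One way to confirm this on an affected node is to inspect the generated kubeconfig directly. The sketch below works on a sample file in `/tmp` and assumes the failure mode leaves an empty host in the `server:` URL (e.g. `https://:6443`), which is an inference from the empty apiserver IP described above; on a real node you would check `/etc/cni/net.d/multus.d/multus.kubeconfig` instead:

```shell
# Sample kubeconfig reproducing the suspected failure mode: the apiserver
# host is missing between "https://" and the port (an assumption based on
# the reported empty apiserver IP address).
cat > /tmp/multus.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://:6443
EOF

# Flag a server URL whose host portion is empty.
if grep -Eq 'server: https://:[0-9]+' /tmp/multus.kubeconfig; then
  echo "empty apiserver address detected"
fi
```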