Closed jinwendaiya closed 1 year ago
@jinwendaiya what is your kubelet path? By default it's `linux.kubelet=/var/lib/kubelet`; you may adjust that value if your kubelet path is different. Details: https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/charts/README.md#tips
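For a cluster where the kubelet uses a non-default root directory, the chart value can be overridden at install time. A minimal sketch, assuming the chart is installed from the upstream repo; the path `/data/kubelet` is purely illustrative:

```shell
# Illustrative only: adjust release name, namespace, and kubelet path to your cluster.
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
  --namespace kube-system \
  --set linux.kubelet="/data/kubelet"
```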
> @jinwendaiya what is your kubelet path? By default it's `linux.kubelet=/var/lib/kubelet`; you may adjust that value if your kubelet path is different. Details: https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/charts/README.md#tips

There should be no problem with the default path. Running `systemctl status kubelet` shows (see the figure below) that the kubelet configuration file is in the /var/lib/kubelet directory.
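One way to double-check the root directory the running kubelet actually uses (a hedged sketch; this assumes a systemd-managed kubelet, and flag spellings can differ between distributions):

```shell
# Look for an explicit --root-dir flag on the running kubelet process;
# if none is present, the compiled-in default is /var/lib/kubelet.
ps -ef | grep '[k]ubelet' | tr ' ' '\n' | grep -- '--root-dir' \
  || echo "no --root-dir flag set; default /var/lib/kubelet applies"

# The plugin registration directories the CSI node components depend on:
ls -ld /var/lib/kubelet/plugins /var/lib/kubelet/plugins_registry
```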
The following is the describe of the failed restart pod
[root@ww01 ~]# kubectl describe pod/csi-nfs-node-8fbft -ncsi-nfs
Name: csi-nfs-node-8fbft
Namespace: csi-nfs
Priority: 0
Node: ww06/104.10.15.6
Start Time: Tue, 03 Jan 2023 15:36:23 +0800
Labels: app=csi-nfs-node
app.kubernetes.io/instance=csi-driver-nfs
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=csi-driver-nfs
app.kubernetes.io/version=v4.1.0
controller-revision-hash=657755d9cd
helm.sh/chart=csi-driver-nfs-v4.1.0
pod-template-generation=1
Annotations:
Warning  Unhealthy  11m (x997 over 19h)  kubelet  (combined from similar events): Liveness probe failed:
F0104 02:46:43.857763      16 main.go:159] Kubelet plugin registration hasn't succeeded yet, file=/var/lib/kubelet/plugins/csi-nfsplugin/registration doesn't exist.
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0x1)
        /workspace/vendor/k8s.io/klog/v2/klog.go:1038 +0x8a
k8s.io/klog/v2.(*loggingT).output(0xf86600, 0x3, 0x0, 0xc000362d20, 0x0, {0xc47a41, 0x1}, 0xc0003eeda0, 0x0)
        /workspace/vendor/k8s.io/klog/v2/klog.go:987 +0x5fd
k8s.io/klog/v2.(*loggingT).printf(0xa63799, 0x4, 0x0, {0x0, 0x0}, {0xa8ac8d, 0x48}, {0xc0003eeda0, 0x1, 0x1})
        /workspace/vendor/k8s.io/klog/v2/klog.go:753 +0x1c5
k8s.io/klog/v2.Fatalf(...)
        /workspace/vendor/k8s.io/klog/v2/klog.go:1532
main.main()
        /workspace/cmd/csi-node-driver-registrar/main.go:159 +0x48e

goroutine 3 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0xc0001a71a0)
        /workspace/vendor/k8s.io/klog/v2/klog.go:1181 +0x6a
created by k8s.io/klog/v2.init.0
        /workspace/vendor/k8s.io/klog/v2/klog.go:420 +0xfb

Warning  BackOff  2m2s (x3999 over 19h)  kubelet  Back-off restarting failed container
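To narrow down why registration never succeeds, it can help to inspect the registration directories on the affected node and the registrar's logs. Pod and namespace names are taken from the describe output above; the container names `node-driver-registrar` and `nfs` are the chart's defaults and may differ in a customized deployment:

```shell
# On node ww06: does the driver's registration socket ever get created?
ls -l /var/lib/kubelet/plugins/csi-nfsplugin/
ls -l /var/lib/kubelet/plugins_registry/

# Logs of the registrar sidecar and the NFS driver container
kubectl logs -n csi-nfs csi-nfs-node-8fbft -c node-driver-registrar
kubectl logs -n csi-nfs csi-nfs-node-8fbft -c nfs
```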
There is also a detail about the environment that may or may not be relevant: the StorageClass used in the cluster is backed by k8s/nfs-subdir-external-provisioner.
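To rule out interference between the two NFS provisioners, it is worth checking which provisioner each StorageClass actually points at (a hedged sketch; `nfs.csi.k8s.io` is the csi-driver-nfs provisioner name, while nfs-subdir-external-provisioner registers its own):

```shell
# List StorageClasses and their provisioners; the two NFS provisioners are
# independent, but each PVC must reference the class backed by the driver
# you expect to serve it.
kubectl get storageclass -o wide
kubectl get pvc -A -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name,SC:.spec.storageClassName
```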
The specific program failure can be seen in the complete log above.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
@jinwendaiya
Excuse me, I have the same problem. Did you manage to solve it?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What happened: After deploying the csi-driver-nfs components to the k8s cluster, some pods restart frequently. Describing the failing pods shows the event Kubelet plugin registration hasn't succeeded yet, file=/var/lib/kubelet/plugins/csi-nfsplugin/registration doesn't exist. It is the csi-nfs-node pods that restart frequently; specifically, the node-driver-registrar container inside them fails, and PVCs using the csi-nfs driver cannot be mounted into applications.
What you expected to happen:
How to reproduce it:
Anything else we need to know?: The following is a screenshot of the fault. I also found a similar problem in an issue for Azure's CSI driver, but it did not help me solve this one: https://github.com/Azure/secrets-store-csi-driver-provider-azure/issues/829

Environment:
- Kubernetes version (use kubectl version): 1.21.14
- Kernel (e.g. uname -a):