ctrox / csi-s3

A Container Storage Interface for S3
Apache License 2.0

Attacher Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock #71

Open joedborg opened 2 years ago

joedborg commented 2 years ago

The provisioner is working fine and is creating buckets in S3. However, the csi-s3 daemonset pods sit in ContainerCreating and the attacher is erroring.

$ kubectl logs -l app=csi-provisioner-s3 -c csi-s3 -n kube-system
I0425 15:19:35.175361       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0425 15:19:35.175367       1 driver.go:93] Enabling volume access mode: SINGLE_NODE_WRITER
I0425 15:19:35.175571       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock", Net:"unix"}
I0425 15:19:35.571124       1 utils.go:97] GRPC call: /csi.v1.Identity/Probe
I0425 15:19:35.572567       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginInfo
I0425 15:19:35.573690       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I0425 15:19:35.574246       1 utils.go:97] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I0425 15:20:48.500878       1 utils.go:97] GRPC call: /csi.v1.Controller/CreateVolume
I0425 15:20:48.500900       1 controllerserver.go:87] Got a request to create volume pvc-0050921d-b7f2-4158-aab9-118231645848
I0425 15:20:48.847037       1 controllerserver.go:133] create volume pvc-0050921d-b7f2-4158-aab9-118231645848
$ kubectl logs pod/csi-attacher-s3-0 -n kube-system
I0425 15:19:28.175346       1 main.go:91] Version: v2.2.0-0-g97411fa7
I0425 15:19:28.177151       1 connection.go:153] Connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:19:38.177314       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:19:48.177288       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:19:58.177282       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:08.177357       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:18.177324       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:28.178425       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:38.177322       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:48.177307       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
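The attacher's "Still connecting" loop above usually means the node plugin never created its socket. A quick hedged diagnostic, run on the affected node, is to check for the socket and pods directory under the kubelet root (paths taken from the logs above; the function name is hypothetical):

```shell
# Check whether the CSI plugin socket and the kubelet pods directory
# exist under a given kubelet root directory.
check_kubelet_layout() {
  root="$1"
  if [ -S "$root/plugins/ch.ctrox.csi.s3-driver/csi.sock" ]; then
    echo "csi.sock: present"
  else
    echo "csi.sock: missing"
  fi
  if [ -d "$root/pods" ]; then
    echo "pods dir: present"
  else
    echo "pods dir: missing"
  fi
}

check_kubelet_layout /var/lib/kubelet
```

If either entry is missing, the node plugin container cannot come up (or the sidecars cannot reach it), which matches the ContainerCreating state shown below.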
$ kubectl get all -A
NAMESPACE     NAME                                           READY   STATUS              RESTARTS   AGE
kube-system   pod/calico-node-crbmq                          1/1     Running             0          141m
kube-system   pod/coredns-64c6478b6c-w99ts                   1/1     Running             0          141m
kube-system   pod/calico-kube-controllers-75b46474ff-lnlhw   1/1     Running             0          141m
kube-system   pod/csi-attacher-s3-0                          1/1     Running             0          137m
kube-system   pod/csi-provisioner-s3-0                       2/2     Running             0          137m
default       pod/csi-s3-test-nginx                          0/1     ContainerCreating   0          134m
kube-system   pod/hostpath-provisioner-7764447d7c-5xn8q      1/1     Running             0          133m
kube-system   pod/csi-s3-2wshf                               0/2     ContainerCreating   0          133m

NAMESPACE     NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes           ClusterIP   10.152.183.1     <none>        443/TCP                  141m
kube-system   service/kube-dns             ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   141m
kube-system   service/csi-provisioner-s3   ClusterIP   10.152.183.22    <none>        65535/TCP                137m
kube-system   service/csi-attacher-s3      ClusterIP   10.152.183.217   <none>        65535/TCP                137m

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   141m
kube-system   daemonset.apps/csi-s3        1         1         0       1            0           <none>                   137m

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns                   1/1     1            1           141m
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           141m
kube-system   deployment.apps/hostpath-provisioner      1/1     1            1           133m

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-64c6478b6c                   1         1         1       141m
kube-system   replicaset.apps/calico-kube-controllers-75b46474ff   1         1         1       141m
kube-system   replicaset.apps/hostpath-provisioner-7764447d7c      1         1         1       133m

NAMESPACE     NAME                                  READY   AGE
kube-system   statefulset.apps/csi-attacher-s3      1/1     137m
kube-system   statefulset.apps/csi-provisioner-s3   1/1     137m
joedborg commented 2 years ago

Seemed to fix this with

sudo mkdir /var/lib/kubelet/pods
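A sketch of the workaround above, parameterized over the kubelet root so it can be adapted to distros that relocate it (the default /var/lib/kubelet is an assumption; snap-based installs such as microk8s may keep it under /var/snap/microk8s/common/var/lib/kubelet). Run as root on the affected node:

```shell
# Create the missing pods directory under the kubelet root
# (idempotent thanks to -p); kubelet will retry the mounts afterwards.
KUBELET_ROOT="${KUBELET_ROOT:-/var/lib/kubelet}"
mkdir -p "${KUBELET_ROOT}/pods" 2>/dev/null \
  && echo "created ${KUBELET_ROOT}/pods" \
  || echo "could not create ${KUBELET_ROOT}/pods (need root?)"
```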
maxkrukov commented 2 years ago

Hi. Where did you create this dir?

qyk1995 commented 1 year ago

@maxkrukov Is it solved now? How was it solved?

fyySky commented 9 months ago

If you meet the same problem, update the provisioner YAML like this. I have patched the YAML, but I don't know whether the author has merged it, so I paste it here:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provisioner-sa
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-provisioner-runner
rules: