kubernetes-csi / external-resizer

Sidecar container that watches Kubernetes PersistentVolumeClaim objects and triggers controller-side expansion operations against a CSI endpoint
Apache License 2.0

Cannot mount resized CephFS-PVC #237

Closed: ibotty closed 1 year ago

ibotty commented 1 year ago

I cannot start a new container with the same PVC as an already-running container. The new pod's events include the following.

  Warning  VolumeResizeFailed  2m2s   kubelet            NodeExpandVolume.NodeExpandVolume failed for volume "pvc-ac2174e8-8e9a-4c31-832e-a845d7cd3280" : Expander.NodeExpand found CSI plugin kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com to not support node expansion
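Some context on that event (not from the report itself): before calling `NodeExpandVolume`, kubelet queries the node plugin's `NodeGetCapabilities` RPC and only proceeds if the `EXPAND_VOLUME` node capability is advertised; the "to not support node expansion" message is what it emits when that capability is absent. Below is a minimal sketch of how a CSI node plugin advertises the capability using the Go CSI bindings; `nodeServer` is a placeholder type, not the ceph-csi implementation.

```go
// Sketch only (not ceph-csi code): a CSI node plugin advertises node expansion
// support by returning the EXPAND_VOLUME capability from NodeGetCapabilities.
// If this entry is missing, kubelet refuses to call NodeExpandVolume and
// reports the "to not support node expansion" error shown above.
package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeServer is a placeholder standing in for the driver's node service.
type nodeServer struct{}

func (ns *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{
			{
				Type: &csi.NodeServiceCapability_Rpc{
					Rpc: &csi.NodeServiceCapability_RPC{
						// This is the capability kubelet checks for before node expansion.
						Type: csi.NodeServiceCapability_RPC_EXPAND_VOLUME,
					},
				},
			},
		},
	}, nil
}
```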

The PVC had previously been resized after running into quota issues. The new quota is reflected in the running container, and the csi-resizer sidecar of the csi-cephfsplugin-provisioner signals success.

I1109 11:33:33.455618       1 main.go:93] Version : v1.6.0
I1109 11:33:34.457523       1 common.go:111] Probing CSI driver for readiness
I1109 11:33:34.458898       1 leaderelection.go:248] attempting to acquire leader lease rook-ceph/external-resizer-rook-ceph-cephfs-csi-ceph-com...
I1109 11:36:01.428423       1 leaderelection.go:258] successfully acquired lease rook-ceph/external-resizer-rook-ceph-cephfs-csi-ceph-com
I1109 11:36:01.428504       1 controller.go:255] Starting external resizer rook-ceph.cephfs.csi.ceph.com
E1118 07:51:12.251187       1 leaderelection.go:330] error retrieving resource lock rook-ceph/external-resizer-rook-ceph-cephfs-csi-ceph-com: Get "https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-cephfs-csi-ceph-com": http2: client connection lost
W1118 07:51:12.295876       1 reflector.go:347] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W1118 07:51:12.295876       1 reflector.go:347] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
E1118 07:51:41.073989       1 leaderelection.go:367] Failed to update lock: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
I1123 10:36:53.109592       1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"mail", Name:"dovecot-mail-storage", UID:"ac2174e8-8e9a-4c31-832e-a845d7cd3280", APIVersion:"v1", ResourceVersion:"990884444", FieldPath:""}): type: 'Normal' reason: 'Resizing' External resizer is resizing volume pvc-ac2174e8-8e9a-4c31-832e-a845d7cd3280
I1123 10:36:53.293116       1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"mail", Name:"dovecot-mail-storage", UID:"ac2174e8-8e9a-4c31-832e-a845d7cd3280", APIVersion:"v1", ResourceVersion:"990884444", FieldPath:""}): type: 'Normal' reason: 'VolumeResizeSuccessful' Resize volume succeeded
I1123 10:37:09.030583       1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"mail", Name:"dovecot-mail-storage", UID:"ac2174e8-8e9a-4c31-832e-a845d7cd3280", APIVersion:"v1", ResourceVersion:"990884649", FieldPath:""}): type: 'Normal' reason: 'Resizing' External resizer is resizing volume pvc-ac2174e8-8e9a-4c31-832e-a845d7cd3280
I1123 10:37:09.056571       1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"mail", Name:"dovecot-mail-storage", UID:"ac2174e8-8e9a-4c31-832e-a845d7cd3280", APIVersion:"v1", ResourceVersion:"990884649", FieldPath:""}): type: 'Normal' reason: 'VolumeResizeSuccessful' Resize volume succeeded
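The events above only cover the controller-side (`ControllerExpandVolume`) step that the external-resizer performs; when a node-side step is still expected, Kubernetes records a `FileSystemResizePending` condition on the PVC. As a way to confirm which half is outstanding, here is a small client-go sketch (the namespace and PVC name come from the events above; everything else is an assumption for illustration, not something from this report):

```go
// Sketch (illustrative, not from the report): inspect the PVC to compare the
// requested and reported capacity and to check whether a node-side expansion
// is still pending (signalled by the FileSystemResizePending condition).
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Namespace and name taken from the resizer events above.
	pvc, err := cs.CoreV1().PersistentVolumeClaims("mail").Get(context.TODO(), "dovecot-mail-storage", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	requested := pvc.Spec.Resources.Requests[v1.ResourceStorage]
	reported := pvc.Status.Capacity[v1.ResourceStorage]
	fmt.Printf("requested: %s, reported in status: %s\n", requested.String(), reported.String())

	for _, c := range pvc.Status.Conditions {
		if c.Type == v1.PersistentVolumeClaimFileSystemResizePending {
			fmt.Println("node-side expansion still pending:", c.Message)
		}
	}
}
```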

Unfortunately, the csi-cephfsplugin container of the csi-cephfsplugin pod on the node does not log anything.

Environment:

ibotty commented 1 year ago

Sorry for the duplicate. The better bug report is #238.