openebs / lvm-localpv

Dynamically provision stateful, persistent, node-local volumes and filesystems for Kubernetes, integrated with a backend LVM2 data storage stack.
Apache License 2.0

online resizing? #136

Closed davidkarlsen closed 7 months ago

davidkarlsen commented 3 years ago

Describe the problem/challenge you have

After editing the PVC, I see

Conditions:
  Type                      Status  LastProbeTime                     LastTransitionTime                Reason  Message
  ----                      ------  -----------------                 ------------------                ------  -------
  FileSystemResizePending   True    Mon, 01 Jan 0001 00:00:00 +0000   Tue, 24 Aug 2021 10:39:44 +0200           Waiting for user to (re-)start a pod to finish file system resize of volume on node.
Events:
  Type     Reason                    Age   From                                   Message
  ----     ------                    ----  ----                                   -------
  Warning  ExternalExpanding         22s   volume_expand                          Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
  Normal   Resizing                  22s   external-resizer local.csi.openebs.io  External resizer is resizing volume pvc-5e4376ec-44bb-4d1a-a734-60517dc8387f
  Normal   FileSystemResizeRequired  21s   external-resizer local.csi.openebs.io  Require file system resize of volume on node

which requires a pod restart.

Describe the solution you'd like

Would it be possible to support online resizing? A vanilla xfs filesystem mounted on a VM can be resized while it is mounted.

Anything else you would like to add:

Environment:
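For reference, this is what online xfs growth looks like outside Kubernetes; a minimal sketch assuming root access, a volume group with free extents, and an LV mounted at a path (the `myvg`/`mylv`/`/mnt/data` names are hypothetical):

```shell
# Grow the logical volume by 1 GiB; the filesystem itself is untouched so far.
lvextend -L +1G /dev/myvg/mylv

# xfs_growfs operates on the mount point and grows the filesystem
# in place -- no unmount (and in Kubernetes terms, no pod restart) required.
xfs_growfs /mnt/data
```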

dsharma-dc commented 7 months ago

The OpenEBS LVM CSI driver does support automatic/online resizing of the filesystem. The fact that the log above shows FileSystemResizePending means we have told the CO (OpenShift here) to send a NodeExpandVolume call; upon receiving that, the filesystem is resized. Need to see whether it's a problem with the CO dispatching the gRPC or a bug in this CSI driver. Will try locally.

dsharma-dc commented 7 months ago

Online resize is working as expected. What is probably missing in your setup is that no app is using the PVC yet, so the node-side filesystem resize has no mounted volume to act on.

I have a 6Gi PVC that I edit to 7Gi. After this, both the PV and the PVC report 7Gi without restarting any pod, and the underlying LVM LV is resized as well.
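The edit above can be done with `kubectl edit pvc`, or equivalently with a patch; a sketch assuming a PVC named `csi-lvmpv` bound through a StorageClass that has `allowVolumeExpansion: true`:

```shell
# Bump the requested size; the external-resizer picks up the change,
# expands the LV, and (once the volume is mounted) the filesystem online.
kubectl patch pvc csi-lvmpv \
  -p '{"spec":{"resources":{"requests":{"storage":"7Gi"}}}}'
```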

$ kubectl get pvc
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS            AGE
csi-lvmpv                Bound    pvc-62b22727-a3cb-4703-a750-6f11294633c9   7Gi        RWO            openebs-lvmpv           29m

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS            REASON   AGE
pvc-62b22727-a3cb-4703-a750-6f11294633c9   7Gi        RWO            Delete           Bound    default/csi-lvmpv                openebs-lvmpv                    29m

$ sudo lvs
  LV                                       VG      Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pvc-62b22727-a3cb-4703-a750-6f11294633c9 dslvmvg -wi-ao---- 7.00g    

The node plugin shows filesystem expansion call received.

I0417 12:24:05.779179       1 grpc.go:72] GRPC call: /csi.v1.Node/NodeExpandVolume requests {"capacity_range":{"required_bytes":7516192768},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/local.csi.openebs.io/92b7e8f34b0420126810740b83341968b843bb8806e38d46a873d31ad4e12362/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_id":"pvc-62b22727-a3cb-4703-a750-6f11294633c9","volume_path":"/var/lib/kubelet/pods/161979a3-0045-49ff-93c9-8894c2f16300/volumes/kubernetes.io~csi/pvc-62b22727-a3cb-4703-a750-6f11294633c9/mount"}
I0417 12:24:06.062936       1 grpc.go:81] GRPC response: {"capacity_bytes":7516192768}
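As a sanity check, the `required_bytes` in the request and `capacity_bytes` in the response both match the requested 7Gi exactly:

```shell
# 7 Gi = 7 * 1024^3 bytes, matching the gRPC request and response above.
echo $((7 * 1024 * 1024 * 1024))   # prints 7516192768
```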

I'm closing this issue. Please comment or reopen if there are any questions around this.