openebs / lvm-localpv

Dynamically provision stateful, persistent, node-local volumes and filesystems for Kubernetes, integrated with a backend LVM2 data storage stack.
Apache License 2.0

Snapshot is not created: `modprobe: can't change directory to '/lib/modules': No such file or directory` #242

Closed todeb closed 11 months ago

todeb commented 1 year ago

What steps did you take and what happened: I followed the instructions to create a test PVC, pod, and snapshot. The snapshot never becomes ready, and I see this error:

E0712 15:28:35.312572 1 lvm_util.go:501] lvm: could not create snapshot lvmvg/2bc2186e-cde1-4261-a853-47aa35daab6b cmd [--snapshot --name 2bc2186e-cde1-4261-a853-47aa35daab6b --permission r /dev/lvmvg/pvc-729156ff-a5e3-4397-9d91-56eed8b82307 --size 4294967296b] error: modprobe: can't change directory to '/lib/modules': No such file or directory
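The error comes from `lvcreate` calling `modprobe` (to load the `dm-snapshot` kernel module) inside the node-plugin container, which has no view of the host's `/lib/modules`. One common workaround is to mount the host's module directory into the plugin container. A sketch of such a patch, assuming the default install names (DaemonSet `openebs-lvm-node`, container `openebs-lvm-plugin`) — verify these against your deployment before applying:

```yaml
# Assumed fragment of the openebs-lvm-node DaemonSet spec: mount the host's
# kernel modules read-only so modprobe inside the container can find them.
spec:
  template:
    spec:
      containers:
        - name: openebs-lvm-plugin
          volumeMounts:
            - name: lib-modules
              mountPath: /lib/modules
              readOnly: true
      volumes:
        - name: lib-modules
          hostPath:
            path: /lib/modules
```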

What did you expect to happen: Snapshot is created and ready


Anything else you would like to add: This is an RKE2 cluster. The kubelet runs directly on the host as root, so accessing the path should not be an issue. Creating the snapshot manually works:

lvcreate -v -s -L 300M -n snaptest lvmvg/pvc-729156ff-a5e3-4397-9d91-56eed8b82307
  Setting chunksize to 4.00 KiB.
  Archiving volume group "lvmvg" metadata (seqno 4).
  Creating logical volume snaptest
  Creating volume group backup "/etc/lvm/backup/lvmvg" (seqno 5).
  activation/volume_list configuration setting not defined: Checking only host tags for lvmvg/snaptest.
  Creating lvmvg-snaptest
  Loading table for lvmvg-snaptest (253:6).
  Resuming lvmvg-snaptest (253:6).
  Initializing 4.00 KiB of logical volume lvmvg/snaptest with value 0.
  Removing lvmvg-snaptest (253:6)
  Creating logical volume snapshot1
  Loading table for lvmvg-pvc--729156ff--a5e3--4397--9d91--56eed8b82307-real (253:3).
  Suppressed lvmvg-pvc--729156ff--a5e3--4397--9d91--56eed8b82307-real (253:3) identical table reload.
  Loading table for lvmvg-pvc--729156ff--a5e3--4397--9d91--56eed8b82307 (253:2).
  Suppressed lvmvg-pvc--729156ff--a5e3--4397--9d91--56eed8b82307 (253:2) identical table reload.
  Loading table for lvmvg-2bc2186e--cde1--4261--a853--47aa35daab6b-cow (253:4).
  Suppressed lvmvg-2bc2186e--cde1--4261--a853--47aa35daab6b-cow (253:4) identical table reload.
  Loading table for lvmvg-2bc2186e--cde1--4261--a853--47aa35daab6b (253:5).
  Suppressed lvmvg-2bc2186e--cde1--4261--a853--47aa35daab6b (253:5) identical table reload.
  Creating lvmvg-snaptest-cow
  Loading table for lvmvg-snaptest-cow (253:6).
  Resuming lvmvg-snaptest-cow (253:6).
  Creating lvmvg-snaptest
  Loading table for lvmvg-snaptest (253:7).
  lvmvg/2bc2186e-cde1-4261-a853-47aa35daab6b already not monitored.
  Suspending lvmvg-pvc--729156ff--a5e3--4397--9d91--56eed8b82307 (253:2) with filesystem sync with device flush
  Suspending lvmvg-2bc2186e--cde1--4261--a853--47aa35daab6b (253:5) with filesystem sync with device flush
  Suspending lvmvg-pvc--729156ff--a5e3--4397--9d91--56eed8b82307-real (253:3) with filesystem sync with device flush
  Suspending lvmvg-2bc2186e--cde1--4261--a853--47aa35daab6b-cow (253:4) with filesystem sync with device flush
  Loading table for lvmvg-pvc--729156ff--a5e3--4397--9d91--56eed8b82307-real (253:3).
  Suppressed lvmvg-pvc--729156ff--a5e3--4397--9d91--56eed8b82307-real (253:3) identical table reload.
  Loading table for lvmvg-pvc--729156ff--a5e3--4397--9d91--56eed8b82307 (253:2).
  Suppressed lvmvg-pvc--729156ff--a5e3--4397--9d91--56eed8b82307 (253:2) identical table reload.
  Loading table for lvmvg-2bc2186e--cde1--4261--a853--47aa35daab6b-cow (253:4).
  Suppressed lvmvg-2bc2186e--cde1--4261--a853--47aa35daab6b-cow (253:4) identical table reload.
  Loading table for lvmvg-2bc2186e--cde1--4261--a853--47aa35daab6b (253:5).
  Suppressed lvmvg-2bc2186e--cde1--4261--a853--47aa35daab6b (253:5) identical table reload.
  Loading table for lvmvg-snaptest-cow (253:6).
  Suppressed lvmvg-snaptest-cow (253:6) identical table reload.
  Resuming lvmvg-pvc--729156ff--a5e3--4397--9d91--56eed8b82307-real (253:3).
  Resuming lvmvg-2bc2186e--cde1--4261--a853--47aa35daab6b-cow (253:4).
  Resuming lvmvg-2bc2186e--cde1--4261--a853--47aa35daab6b (253:5).
  Resuming lvmvg-snaptest (253:7).
  Resuming lvmvg-pvc--729156ff--a5e3--4397--9d91--56eed8b82307 (253:2).
  Monitored LVM-nRtZ7fj2ldrDEA3JA8MuTOoxIBtEPxLzr2edZmdOyva2cYJ8bNTamAMWiqaXJT19 for events
  Monitored LVM-nRtZ7fj2ldrDEA3JA8MuTOoxIBtEPxLzbZuSEZhf8HSEysb03Cd0Aq2O0BdTx1BB for events
  Creating volume group backup "/etc/lvm/backup/lvmvg" (seqno 6).
  Logical volume "snaptest" created.
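For context, the snapshot that fails was requested through the usual VolumeSnapshot API. A minimal manifest consistent with the names appearing later in this thread (snapshot class `lvmpv-snapclass`, source PVC `csi-lvmpv`) would look roughly like this sketch:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: tode-snap
  namespace: default
spec:
  volumeSnapshotClassName: lvmpv-snapclass
  source:
    persistentVolumeClaimName: csi-lvmpv
```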


todeb commented 1 year ago

Also, logs from the snapshot-controller:

kubectl logs openebs-lvm-controller-0 -n kube-system snapshot-controller
I0714 09:06:01.397972       1 leaderelection.go:346] lock is held by rke2-snapshot-controller-7b5b4f946c-xgx9h and has not yet expired
I0714 09:06:01.398036       1 leaderelection.go:248] failed to acquire lease kube-system/snapshot-controller-leader
I0714 09:06:09.666342       1 leaderelection.go:346] lock is held by rke2-snapshot-controller-7b5b4f946c-xgx9h and has not yet expired
I0714 09:06:09.666374       1 leaderelection.go:248] failed to acquire lease kube-system/snapshot-controller-leader
I0714 09:06:20.247829       1 leaderelection.go:346] lock is held by rke2-snapshot-controller-7b5b4f946c-xgx9h and has not yet expired
I0714 09:06:20.247904       1 leaderelection.go:248] failed to acquire lease kube-system/snapshot-controller-leader
I0714 09:06:30.052367       1 leaderelection.go:346] lock is held by rke2-snapshot-controller-7b5b4f946c-xgx9h and has not yet expired
I0714 09:06:30.052382       1 leaderelection.go:248] failed to acquire lease kube-system/snapshot-controller-leader
I0714 09:06:40.199818       1 leaderelection.go:346] lock is held by rke2-snapshot-controller-7b5b4f946c-xgx9h and has not yet expired
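The logs show that the bundled openebs snapshot-controller and RKE2's own snapshot controller are competing for the same Lease, `kube-system/snapshot-controller-leader`, so whichever loses the election sits idle. A quick way to see which pod currently holds the lock (a sketch using standard kubectl against the coordination.k8s.io Lease API):

```
# Print the holder of the shared leader-election Lease; holderIdentity
# names the pod that currently owns snapshot reconciliation.
kubectl get lease snapshot-controller-leader -n kube-system \
  -o jsonpath='{.spec.holderIdentity}{"\n"}'
```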
todeb commented 1 year ago

After restarting the rke2-snapshot-controller, the openebs-lvm-controller's snapshot-controller is able to acquire the lock, and the snapshot is created:

I0714 09:13:02.677290       1 leaderelection.go:253] successfully acquired lease kube-system/snapshot-controller-leader
I0714 09:13:02.677500       1 leader_election.go:205] became leader, starting
I0714 09:13:02.677375       1 leader_election.go:212] new leader detected, current leader: openebs-lvm-controller-0
I0714 09:13:02.677956       1 snapshot_controller_base.go:133] Starting snapshot controller
I0714 09:13:02.677950       1 reflector.go:219] Starting reflector *v1.VolumeSnapshotContent (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117
I0714 09:13:02.678053       1 reflector.go:255] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117
I0714 09:13:02.678175       1 reflector.go:219] Starting reflector *v1.VolumeSnapshotClass (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117
I0714 09:13:02.678239       1 reflector.go:255] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117
I0714 09:13:02.678330       1 reflector.go:219] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:134
I0714 09:13:02.678400       1 reflector.go:255] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0714 09:13:02.678769       1 reflector.go:219] Starting reflector *v1.VolumeSnapshot (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117
I0714 09:13:02.678802       1 reflector.go:255] Listing and watching *v1.VolumeSnapshot from github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117
I0714 09:13:02.694044       1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader
I0714 09:13:02.853822       1 snapshot_controller_base.go:180] enqueued "snapcontent-2bc2186e-cde1-4261-a853-47aa35daab6b" for sync
I0714 09:13:02.898061       1 snapshot_controller_base.go:163] enqueued "default/tode-snap" for sync
I0714 09:13:02.978242       1 shared_informer.go:270] caches populated
I0714 09:13:02.978349       1 util.go:269] storeObjectUpdate: adding snapshot "default/tode-snap", version 29234423
I0714 09:13:02.978522       1 util.go:269] storeObjectUpdate: adding content "snapcontent-2bc2186e-cde1-4261-a853-47aa35daab6b", version 29234419
I0714 09:13:02.978552       1 snapshot_controller_base.go:485] controller initialized
I0714 09:13:02.978694       1 snapshot_controller_base.go:282] syncContentByKey[snapcontent-2bc2186e-cde1-4261-a853-47aa35daab6b]
I0714 09:13:02.978728       1 util.go:297] storeObjectUpdate updating content "snapcontent-2bc2186e-cde1-4261-a853-47aa35daab6b" with version 29234419
I0714 09:13:02.978742       1 snapshot_controller_base.go:207] syncSnapshotByKey[default/tode-snap]
I0714 09:13:02.978873       1 snapshot_controller_base.go:210] snapshotWorker: snapshot namespace [default] name [tode-snap]
I0714 09:13:02.978923       1 snapshot_controller_base.go:333] checkAndUpdateSnapshotClass [tode-snap]: VolumeSnapshotClassName [lvmpv-snapclass]
I0714 09:13:02.978945       1 snapshot_controller.go:86] synchronizing VolumeSnapshotContent[snapcontent-2bc2186e-cde1-4261-a853-47aa35daab6b]: content is bound to snapshot default/tode-snap
I0714 09:13:02.979110       1 snapshot_controller.go:88] syncContent[snapcontent-2bc2186e-cde1-4261-a853-47aa35daab6b]: check if we should add invalid label on content
I0714 09:13:02.979195       1 snapshot_controller.go:1474] getSnapshotFromStore: snapshot default/tode-snap found
I0714 09:13:02.979224       1 snapshot_controller.go:1040] needsUpdateSnapshotStatus[default/tode-snap]
I0714 09:13:02.979245       1 snapshot_controller.go:143] synchronizing VolumeSnapshotContent for snapshot [default/tode-snap]: update snapshot status to true if needed.
I0714 09:13:02.978992       1 snapshot_controller.go:1239] getSnapshotClass: VolumeSnapshotClassName [lvmpv-snapclass]
I0714 09:13:02.979383       1 snapshot_controller_base.go:353] VolumeSnapshotClass [lvmpv-snapclass] Driver [local.csi.openebs.io]
I0714 09:13:02.979418       1 snapshot_controller_base.go:227] Updating snapshot "default/tode-snap"
I0714 09:13:02.979475       1 snapshot_controller_base.go:363] updateSnapshot "default/tode-snap"
I0714 09:13:02.979519       1 util.go:297] storeObjectUpdate updating snapshot "default/tode-snap" with version 29234423
I0714 09:13:02.979597       1 snapshot_controller.go:180] synchronizing VolumeSnapshot[default/tode-snap]: bound to: "snapcontent-2bc2186e-cde1-4261-a853-47aa35daab6b", Completed: true
I0714 09:13:02.979701       1 snapshot_controller.go:182] syncSnapshot [default/tode-snap]: check if we should remove finalizer on snapshot PVC source and remove it if we can
I0714 09:13:02.979853       1 snapshot_controller.go:952] checkandRemovePVCFinalizer for snapshot [tode-snap]: snapshot status [&v1.VolumeSnapshotStatus{BoundVolumeSnapshotContentName:(*string)(0xc0004b80c0), CreationTime:(*v1.Time)(0xc000351140), ReadyToUse:(*bool)(0xc0000caf69), RestoreSize:(*resource.Quantity)(0xc000116280), Error:(*v1.VolumeSnapshotError)(nil)}]
I0714 09:13:02.980024       1 snapshot_controller.go:191] syncSnapshot[default/tode-snap]: check if we should add invalid label on snapshot
I0714 09:13:02.980051       1 snapshot_controller.go:209] syncSnapshot[default/tode-snap]: validate snapshot to make sure source has been correctly specified
I0714 09:13:02.980112       1 snapshot_controller.go:218] syncSnapshot[default/tode-snap]: check if we should add finalizers on snapshot
I0714 09:13:02.980165       1 snapshot_controller.go:401] syncReadySnapshot[default/tode-snap]: VolumeSnapshotContent "snapcontent-2bc2186e-cde1-4261-a853-47aa35daab6b" found
I0714 09:13:02.980360       1 snapshot_controller_base.go:207] syncSnapshotByKey[default/tode-snap]
I0714 09:13:02.980425       1 snapshot_controller_base.go:210] snapshotWorker: snapshot namespace [default] name [tode-snap]
I0714 09:13:02.980446       1 snapshot_controller_base.go:333] checkAndUpdateSnapshotClass [tode-snap]: VolumeSnapshotClassName [lvmpv-snapclass]
I0714 09:13:02.980465       1 snapshot_controller.go:1239] getSnapshotClass: VolumeSnapshotClassName [lvmpv-snapclass]
I0714 09:13:02.980531       1 snapshot_controller_base.go:353] VolumeSnapshotClass [lvmpv-snapclass] Driver [local.csi.openebs.io]
I0714 09:13:02.980557       1 snapshot_controller_base.go:227] Updating snapshot "default/tode-snap"
I0714 09:13:02.980620       1 snapshot_controller_base.go:363] updateSnapshot "default/tode-snap"
I0714 09:13:02.980653       1 util.go:297] storeObjectUpdate updating snapshot "default/tode-snap" with version 29234423
I0714 09:13:02.980723       1 snapshot_controller.go:180] synchronizing VolumeSnapshot[default/tode-snap]: bound to: "snapcontent-2bc2186e-cde1-4261-a853-47aa35daab6b", Completed: true
I0714 09:13:02.980749       1 snapshot_controller.go:182] syncSnapshot [default/tode-snap]: check if we should remove finalizer on snapshot PVC source and remove it if we can
I0714 09:13:02.980795       1 snapshot_controller.go:952] checkandRemovePVCFinalizer for snapshot [tode-snap]: snapshot status [&v1.VolumeSnapshotStatus{BoundVolumeSnapshotContentName:(*string)(0xc0004b80c0), CreationTime:(*v1.Time)(0xc000351140), ReadyToUse:(*bool)(0xc0000caf69), RestoreSize:(*resource.Quantity)(0xc000116280), Error:(*v1.VolumeSnapshotError)(nil)}]
I0714 09:13:02.980839       1 snapshot_controller.go:191] syncSnapshot[default/tode-snap]: check if we should add invalid label on snapshot
I0714 09:13:02.980895       1 snapshot_controller.go:209] syncSnapshot[default/tode-snap]: validate snapshot to make sure source has been correctly specified
I0714 09:13:02.980976       1 snapshot_controller.go:218] syncSnapshot[default/tode-snap]: check if we should add finalizers on snapshot
I0714 09:13:02.981010       1 snapshot_controller.go:401] syncReadySnapshot[default/tode-snap]: VolumeSnapshotContent "snapcontent-2bc2186e-cde1-4261-a853-47aa35daab6b" found
I0714 09:13:07.706511       1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader
I0714 09:13:12.732579       1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader
I0714 09:13:17.746678       1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader
I0714 09:13:22.761914       1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader
I0714 09:13:27.777172       1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader

However, after redeploying the openebs controller, it goes back to failing to acquire the lock:

I0714 09:22:00.891524       1 main.go:71] Version: v4.0.0
I0714 09:22:00.894009       1 main.go:120] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s]
I0714 09:22:00.894842       1 leaderelection.go:243] attempting to acquire leader lease kube-system/snapshot-controller-leader...
I0714 09:22:00.918917       1 leaderelection.go:346] lock is held by rke2-snapshot-controller-b54578587-j6vfw and has not yet expired
I0714 09:22:00.918965       1 leaderelection.go:248] failed to acquire lease kube-system/snapshot-controller-leader
I0714 09:22:00.918998       1 leader_election.go:212] new leader detected, current leader: rke2-snapshot-controller-b54578587-j6vfw
I0714 09:22:09.555478       1 leaderelection.go:346] lock is held by rke2-snapshot-controller-b54578587-j6vfw and has not yet expired
I0714 09:22:09.555517       1 leaderelection.go:248] failed to acquire lease kube-system/snapshot-controller-leader
I0714 09:22:20.202915       1 leaderelection.go:346] lock is held by rke2-snapshot-controller-b54578587-j6vfw and has not yet expired
I0714 09:22:20.202948       1 leaderelection.go:248] failed to acquire lease kube-system/snapshot-controller-leader
I0714 09:22:29.196701       1 leaderelection.go:346] lock is held by rke2-snapshot-controller-b54578587-j6vfw and has not yet expired
I0714 09:22:29.196739       1 leaderelection.go:248] failed to acquire lease kube-system/snapshot-controller-leader
todeb commented 1 year ago

It seems that the RKE2 snapshot controller is able to handle openebs lvm snapshots, although it needs a redeploy after the openebs CSI drivers are installed; volume snapshots are created with it as well. So it looks like the openebs snapshot controller is not needed. Is it possible to not deploy it?
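One thing worth checking (an assumption, not verified against the chart): if lvm-localpv was installed via Helm, the chart's values may expose a toggle for the bundled snapshot-controller, which would avoid running two controllers against one lease. The exact key depends on the chart version:

```
# List the chart's configurable values and search for a snapshot-controller
# toggle (assumes the chart is available as openebs/lvm-localpv).
helm show values openebs/lvm-localpv | grep -in snapshot
```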

todeb commented 1 year ago

I also have an issue restoring a snapshot. What is the problem here?

I0714 10:58:10.376378       1 controller.go:1279] provision "default/restore1234" class "openebs-lvmpv": started
I0714 10:58:10.376628       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"restore1234", UID:"e797a20d-b4ca-45ec-b817-0f94cb817293", APIVersion:"v1", ResourceVersion:"30284747", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/restore1234"
I0714 10:58:10.382203       1 controller.go:1053] VolumeSnapshot &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:tode-snap1234 GenerateName: Namespace:default SelfLink: UID:06131596-7754-42d8-aa61-521dd8a67c1c ResourceVersion:30275679 Generation:1 CreationTimestamp:2023-07-14 10:33:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"snapshot.storage.k8s.io/v1","kind":"VolumeSnapshot","metadata":{"annotations":{},"name":"tode-snap1234","namespace":"default"},"spec":{"source":{"persistentVolumeClaimName":"csi-lvmpv"},"volumeSnapshotClassName":"lvmpv-snapclass"}}
] OwnerReferences:[] Finalizers:[snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection snapshot.storage.kubernetes.io/volumesnapshot-bound-protection] ClusterName: ManagedFields:[{Manager:kubectl-client-side-apply Operation:Update APIVersion:snapshot.storage.k8s.io/v1 Time:2023-07-14 10:33:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{".":{},"f:source":{".":{},"f:persistentVolumeClaimName":{}},"f:volumeSnapshotClassName":{}}} Subresource:} {Manager:snapshot-controller Operation:Update APIVersion:snapshot.storage.k8s.io/v1 Time:2023-07-14 10:33:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:finalizers":{".":{},"v:\"snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection\"":{},"v:\"snapshot.storage.kubernetes.io/volumesnapshot-bound-protection\"":{}}}} Subresource:} {Manager:snapshot-controller Operation:Update APIVersion:snapshot.storage.k8s.io/v1 Time:2023-07-14 10:34:01 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{".":{},"f:boundVolumeSnapshotContentName":{},"f:creationTime":{},"f:readyToUse":{},"f:restoreSize":{}}} Subresource:status}]} Spec:{Source:{PersistentVolumeClaimName:0xc0007f9a90 VolumeSnapshotContentName:<nil>} VolumeSnapshotClassName:0xc0007f9aa0} Status:0xc000cee030}
I0714 10:58:10.390185       1 controller.go:1080] VolumeSnapshotContent &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:snapcontent-06131596-7754-42d8-aa61-521dd8a67c1c GenerateName: Namespace: SelfLink: UID:6eb1ba31-0886-4a7f-af17-723e61adb158 ResourceVersion:30275675 Generation:1 CreationTimestamp:2023-07-14 10:33:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection] ClusterName: ManagedFields:[{Manager:snapshot-controller Operation:Update APIVersion:snapshot.storage.k8s.io/v1 Time:2023-07-14 10:33:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:finalizers":{".":{},"v:\"snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection\"":{}}},"f:spec":{".":{},"f:deletionPolicy":{},"f:driver":{},"f:source":{".":{},"f:volumeHandle":{}},"f:volumeSnapshotClassName":{},"f:volumeSnapshotRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}} Subresource:} {Manager:csi-snapshotter Operation:Update APIVersion:snapshot.storage.k8s.io/v1 Time:2023-07-14 10:34:00 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:readyToUse":{}}} Subresource:}]} Spec:{VolumeSnapshotRef:{Kind:VolumeSnapshot Namespace:default Name:tode-snap1234 UID:06131596-7754-42d8-aa61-521dd8a67c1c APIVersion:snapshot.storage.k8s.io/v1 ResourceVersion:30275642 FieldPath:} DeletionPolicy:Delete Driver:local.csi.openebs.io VolumeSnapshotClassName:0xc0007f9ee0 Source:{VolumeHandle:0xc0007f9ed0 SnapshotHandle:<nil>}} Status:0xc000cee5a0}
I0714 10:58:10.390376       1 controller.go:1091] VolumeContentSource_Snapshot {Snapshot:snapshot_id:"pvc-729156ff-a5e3-4397-9d91-56eed8b82307@snapshot-06131596-7754-42d8-aa61-521dd8a67c1c" }
I0714 10:58:10.390441       1 controller.go:1099] Requested volume size is 4294967296 and snapshot size is 0 for the source snapshot tode-snap1234
W0714 10:58:10.390462       1 controller.go:1106] requested volume size 4294967296 is greater than the size 0 for the source snapshot tode-snap1234. Volume plugin needs to handle volume expansion.
I0714 10:58:10.390512       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I0714 10:58:10.390529       1 connection.go:184] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"kubernetes.io/hostname":"rke-internal-4-dc1-lgh"}}],"requisite":[{"segments":{"kubernetes.io/hostname":"rke-internal-4-dc1-lgh"}}]},"capacity_range":{"required_bytes":4294967296},"name":"pvc-e797a20d-b4ca-45ec-b817-0f94cb817293","parameters":{"csi.storage.k8s.io/pv/name":"pvc-e797a20d-b4ca-45ec-b817-0f94cb817293","csi.storage.k8s.io/pvc/name":"restore1234","csi.storage.k8s.io/pvc/namespace":"default","storage":"lvm","volgroup":"lvmvg"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"volume_content_source":{"Type":{"Snapshot":{"snapshot_id":"pvc-729156ff-a5e3-4397-9d91-56eed8b82307@snapshot-06131596-7754-42d8-aa61-521dd8a67c1c"}}}}
I0714 10:58:10.391778       1 connection.go:186] GRPC response: {}
I0714 10:58:10.391817       1 connection.go:187] GRPC error: rpc error: code = Unimplemented desc =
I0714 10:58:10.391881       1 controller.go:767] CreateVolume failed, supports topology = true, node selected false => may reschedule = false => state = Finished: rpc error: code = Unimplemented desc =

I0714 10:58:10.391967       1 controller.go:1074] Final error received, removing PVC e797a20d-b4ca-45ec-b817-0f94cb817293 from claims in progress
W0714 10:58:10.391981       1 controller.go:933] Retrying syncing claim "e797a20d-b4ca-45ec-b817-0f94cb817293", failure 8
I0714 10:58:10.392002       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"restore1234", UID:"e797a20d-b4ca-45ec-b817-0f94cb817293", APIVersion:"v1", ResourceVersion:"30284747", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "openebs-lvmpv": rpc error: code = Unimplemented desc =
E0714 10:58:10.392053       1 controller.go:956] error syncing claim "e797a20d-b4ca-45ec-b817-0f94cb817293": failed to provision volume with StorageClass "openebs-lvmpv": rpc error: code = Unimplemented desc =
todeb commented 11 months ago

Closing, as the backup and restore functionality is not yet implemented.