Closed — jtackaberry closed this issue 2 years ago
Hi, at first glance everything was done right. PVCs must not be created beforehand; you can simply create a VG from a given block device or a list of block devices.
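For reference, the block devices the driver may use are selected via the chart's `lvm.devicePattern` value (mentioned later in this thread); a minimal values sketch might look like the following — the exact pattern is an assumption, adjust it to your hardware:

```yaml
# Hypothetical Helm values for csi-driver-lvm: devicePattern is matched
# against block devices on each node, and matching devices are used by
# the driver to build the volume group.
lvm:
  devicePattern: /dev/sdb
```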
I guess your pod will mount the pvc when you delete it.
What OS is your worker node running?
> I guess your pod will mount the pvc when you delete it.
This is actually the revelation, and what's missing in my reproduction steps above: the PV isn't actually provisioned until a pod mounts the PVC. I tried creating the pod while the PVC is Pending, and things are working: the VG is created, the PV is provisioned and bound, and the pod starts.
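For anyone landing here later, a minimal sketch of the working sequence — the names and storage class below are assumptions, not taken from the chart defaults:

```yaml
# Hypothetical PVC + pod pair: the PVC stays Pending until the pod is
# scheduled, at which point the driver provisions the PV on the chosen
# node and the claim binds.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc                         # hypothetical name
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-driver-lvm-linear   # assumption; use your chart's class
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                         # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc
```

Creating both objects together is enough; once the pod is scheduled, the Pending claim resolves on its own.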
I never got as far as creating a pod, because I figured what was the point if the PVC was stuck in pending state? Every other CSI driver I have experience with so far immediately provisions a PV and binds it when a PVC is created, so I'm embarrassed to say I never bothered creating a pod, because I was expecting csi-driver-lvm to work this way as well.
Can I humbly suggest this as an improvement? IMO it's surprising behavior to defer PV creation until after some pod mounts the PVC.
> What OS is your worker node running?
Apologies for not mentioning. Ubuntu 20.04.3.
No, it cannot create the PV unless the pod is created: this CSI driver is a local-storage provider, so it needs to know which node the pod gets scheduled on before it can provision.
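This deferred binding is the standard Kubernetes mechanism for topology-constrained storage: the StorageClass sets `volumeBindingMode: WaitForFirstConsumer`, which tells the scheduler to place the pod first and provision the volume on that node afterwards. A sketch — the class and provisioner names are assumptions, check what the chart actually installs:

```yaml
# Hypothetical StorageClass: WaitForFirstConsumer defers PV provisioning
# until a pod consuming the claim has been scheduled to a node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-driver-lvm-linear        # assumption
provisioner: lvm.csi.metal-stack.io  # assumption; verify against the chart
volumeBindingMode: WaitForFirstConsumer
```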
> No, it cannot create the PV unless the pod is created, because this CSI driver is a local-storage provider and it needs to know which node the pod gets scheduled on.
Hah. You're completely right of course, I have no explanation for my momentary demonstration of stupidity. :)
Perhaps a quick note in the README would be helpful for the absentminded like me, as a reminder that local-storage providers work differently from network-storage providers in this regard?
Thanks for your patience @majst01. Will close as this isn't a bug and I'm up and running.
No problem.
Not sure if this is a bug report or a support request, but in any case I can't spot what's going awry.
Fresh install of microk8s 1.23 and csi-driver-lvm v0.4.1 via the Helm chart at https://github.com/metal-stack/helm-charts/tree/master/charts/csi-driver-lvm (which supports `StorageClass` under `storage.k8s.io/v1`). The first sign of trouble comes from the plugin pod, which raises a couple of errors:
Over on the k8s node, `/dev/sdb` does exist per `lvm.devicePattern`:
While the documentation doesn't say this is necessary, I didn't see any indication from the code that `pvcreate` is called. So I figured perhaps that was the problem, and explicitly created the physical volume (which also demonstrates that the LVM command-line tools are functional on the host):

No change. Still the `Volume group "csi-lvm" not found` errors in the plugin pod logs. OK, this ostensibly shouldn't be necessary, but let's create the VG manually:

This addressed the errors in the plugin logs:
But that didn't fix the pending PVC, even after recreating it:
Hopefully it's clear where things have gone wrong. :)
Thanks!