javiku opened this issue 1 year ago
Still the same: if the remaining capacity is not enough, Kubernetes gets stuck with "not enough space" and can't allocate the thin volume.
I tried to reproduce this but was unable to. Details below.
The VG is this:
```
  VG   #PV #LV #SN Attr   VSize    VFree
  dsvg   1   3   0 wz--n- 1020.00m 512.00m
```
Created two PVCs bound to thin LVs on a thin pool; the thinpool is 512MiB.
```
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
csi-lvmpv-ds-1   Bound    pvc-deba93f4-51bd-484b-84e5-188ba295ca8b   500Mi      RWO            openebs-lvmpv   4s
csi-lvmpv-ds-2   Bound    pvc-9aca14e3-8a91-48f6-9709-ca5a54fb6b93   500Mi      RWO            openebs-lvmpv   4s
```
The thin LVs got created.
```
  LV                                         VG   Attr       LSize   Pool          Origin Data%  Meta%  Move Log Cpy%Sync Convert
  dsvg_thinpool                              dsvg twi-aotz-- 500.00m                      16.23  11.62
  pvc-9aca14e3-8a91-48f6-9709-ca5a54fb6b93   dsvg Vwi-a-tz-- 500.00m dsvg_thinpool        8.11
  pvc-deba93f4-51bd-484b-84e5-188ba295ca8b   dsvg Vwi-a-tz-- 500.00m dsvg_thinpool        8.11
```
Now I deleted the claims, so the thin LVs are deleted:

```
persistentvolumeclaim "csi-lvmpv-ds-1" deleted
persistentvolumeclaim "csi-lvmpv-ds-2" deleted
```
The LVs got deleted:

```
  LV                      VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  dsvg_thinpool           dsvg twi-aotz-- 500.00m             0.00   10.84                            dsvg_thinpool_tdata(0)
  [dsvg_thinpool_tdata]   dsvg Twi-ao---- 500.00m                                                     /dev/loop0(1)
  [dsvg_thinpool_tmeta]   dsvg ewi-ao----   4.00m                                                     /dev/loop0(126)
  [lvol0_pmspare]         dsvg ewi-------   4.00m                                                     /dev/loop0(0)
```
Created a new claim using the same PVC yaml. The thin LV is created successfully.
```
  LV                                         VG   Attr       LSize   Pool          Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  dsvg_thinpool                              dsvg twi-aotz-- 500.00m                      0.00   10.94                            dsvg_thinpool_tdata(0)
  [dsvg_thinpool_tdata]                      dsvg Twi-ao---- 500.00m                                                             /dev/loop0(1)
  [dsvg_thinpool_tmeta]                      dsvg ewi-ao----   4.00m                                                             /dev/loop0(126)
  [lvol0_pmspare]                            dsvg ewi-------   4.00m                                                             /dev/loop0(0)
  pvc-0eb249c8-256a-4958-b479-9502c9b15755   dsvg Vwi-a-tz-- 500.00m dsvg_thinpool        0.00
```
Please update if this is still an issue, and if there are any other details that can help reproduce this locally.
For example: a 20GB VG that already has a 15GB thinpool, leaving 5GB free. If I claim a new 8GB LV from the thinpool, Kubernetes gets stuck with not enough space.
> For example, 20GB VG, already have a thinpool, 15GB, and 5GB free. If claim a new 8GB lv from thinpool, the k8s will stuck for not enough space.
I don't think that's a problem here. Please see below:
1. A `thin` LV PVC of 400MiB, attached to a pod `fio1`. Filled it about 80% with data.
2. Another `thin` LV PVC of 400MiB, attached to another pod `fio2`. Works ok.
3. A `thick` 600MiB PVC; that'll fail, which is expected because the VG doesn't have that much space.

The VG:

```
  VG   #PV #LV #SN Attr   VSize    VFree
  dsvg   1   1   0 wz--n- 1020.00m 512.00m
```
Thinpool before creating the volumes:

```
  LV            VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  dsvg_thinpool dsvg twi-aotz-- 500.00m             0.00   10.84
```

After creating both volumes and filling the first one:

```
  LV                                         VG   Attr       LSize   Pool          Origin Data%  Meta%  Move Log Cpy%Sync Convert
  dsvg_thinpool                              dsvg twi-aotz-- 500.00m                      70.95  14.06
  pvc-80055086-0521-4a18-b64d-3c237dc9d65f   dsvg Vwi-aotz-- 400.00m dsvg_thinpool        80.11
  pvc-ba7e6ce5-d932-49af-a579-0bf0fd512f3d   dsvg Vwi-aotz-- 400.00m dsvg_thinpool        8.58
```
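As a quick sanity check, the pool's reported Data% is consistent with the per-volume usage above. This is plain arithmetic on the numbers from the `lvs` output, not plugin code:

```python
# Numbers copied from the lvs output above (sizes in MiB, usage in %).
pool_size = 500.00
lv_sizes = [400.00, 400.00]
lv_data_pct = [80.11, 8.58]   # Data% of the two thin LVs

# Physical data actually used by each thin LV, in MiB.
used = [size * pct / 100 for size, pct in zip(lv_sizes, lv_data_pct)]

# Pool-level Data% = total used data / pool size.
pool_data_pct = sum(used) / pool_size * 100
print(round(pool_data_pct, 2))  # 70.95, matching the thinpool's Data%
```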
Now try creating a `thick` LVM LV. This is expected to fail because the VG is 1GiB, out of which 500MiB is taken by the thinpool, leaving only 500MiB.
```
Normal   Provisioning        10s (x6 over 38s)  local.csi.openebs.io_openebs-lvm-localpv-controller-7df8f57fb7-6vgvq_4e35f19e-3a9c-486c-8f60-66907ab5faab  External provisioner is provisioning volume for claim "default/csi-lvmpv-ds-3"
Warning  ProvisioningFailed  7s (x6 over 36s)   local.csi.openebs.io_openebs-lvm-localpv-controller-7df8f57fb7-6vgvq_4e35f19e-3a9c-486c-8f60-66907ab5faab  failed to provision volume with StorageClass "openebs-lvmpv-thick": rpc error: code = ResourceExhausted desc = no vg available to serve volume request having regex="^dsvg$" & capacity="629145600"
```
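The capacity in the error message is simply 600MiB expressed in bytes, and the rejection follows from VG free-space arithmetic. A minimal sketch (hypothetical function name, not the plugin's actual code):

```python
MIB = 1024 * 1024

def fits_in_vg(request_bytes: int, vg_free_bytes: int) -> bool:
    """A thick LV must fit entirely within the VG's free space."""
    return request_bytes <= vg_free_bytes

request = 600 * MIB   # 629145600 bytes, the capacity in the error above
vg_free = 512 * MIB   # VFree reported by vgs

print(request)                       # 629145600
print(fits_in_vg(request, vg_free))  # False -> ResourceExhausted
```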
- I have a 20GB volume group (VG).
- Create two or more 8GB `thin` LV PVCs.
- After several days, the thinpool's LV size increases to more than 12GB, for example 13GB. Now the VG has 7GB free disk space, but the thinpool has only allocated 40% of its data, indicating it actually has enough free space.
- Now, I want to create a new 8GB thin LV PVC, but Kubernetes gets stuck due to insufficient space. After purchasing a new disk and adding it to the VG, increasing the free disk space to more than 8GB, I can then create and attach the new `thin` LV PVC to the existing thinpool.
- The newly purchased disk space is essentially wasted in this scenario, as it is not being utilized. The original thinpool's space is more than sufficient for my needs. Ideally, the monitoring should focus on the remaining space within the thinpool itself, and it should automatically expand when necessary, rather than preemptively checking whether the remaining space exceeds the upper limit of the thin PVC to be allocated.
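The two capacity views in this scenario can be contrasted with some quick arithmetic (numbers from the 20GB-VG example above; variable names are illustrative):

```python
GIB = 1024 ** 3

vg_size = 20 * GIB
thinpool_size = 13 * GIB   # the thinpool LV has grown to 13GB
request = 8 * GIB          # new thin LV PVC

# View 1: the provisioner compares the request against VG free space,
# so the PVC stays Pending even though the requested size is virtual.
vg_free = vg_size - thinpool_size   # 7GiB
print(request <= vg_free)           # False -> Pending

# View 2 (the argument above): thin LVs are overcommittable; the pool
# only consumes physical space as data is written and can autoextend,
# so the 8GiB virtual size alone should not block creation.
```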
@graphenn Thank you. Two questions:

- It'd be helpful if you could share the real outputs that show this behaviour.
- It'll depend on the settings `thin_pool_autoextend_threshold` and `thin_pool_autoextend_percent`. With the auto thinpool expansion config values `thin_pool_autoextend_threshold=50` and `thin_pool_autoextend_percent=20`, I can successfully create two 500MiB thin LVs, write 400MiB of data (the thinpool expands to ~850MiB), and still provision one more 800MiB thin LV.
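A rough simulation of that autoextend behaviour, assuming dmeventd grows the pool by `thin_pool_autoextend_percent` each time usage crosses `thin_pool_autoextend_threshold` (real LVM extends in extent-aligned steps, so this is only approximate):

```python
MIB = 1024 * 1024
threshold = 50    # thin_pool_autoextend_threshold (%)
percent = 20      # thin_pool_autoextend_percent (%)

pool = 500 * MIB  # initial thinpool size
used = 400 * MIB  # data written into the thin LVs

# Extend the pool by 20% while usage stays above the 50% threshold.
while used * 100 > pool * threshold:
    pool += pool * percent // 100

print(pool // MIB)  # 864 -> close to the ~850MiB observed above
```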
However, I'll check whether there is some delay or race in the plugin picking up the free-space details.
Summarising the issue reported: if a volume request asks for more capacity than the VG has free (VG capacity minus thinpool size), it'll remain pending. This is expected as LVM behaviour.
What steps did you take and what happened:
```
[...] I1025 16:59:50.516742 1 grpc.go:81] GRPC response: {"available_capacity":2126512128} [...]
```
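For easier comparison with the `vgs`/`lvs` figures, note that the reported `available_capacity` is in bytes; converting it (illustrative only):

```python
GIB = 1024 ** 3
available = 2126512128   # available_capacity from the GRPC response above

print(f"{available / GIB:.2f} GiB")  # 1.98 GiB
```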
```
$ kubectl -n openebs get lvmnodes worker-1 -oyaml
apiVersion: local.openebs.io/v1alpha1
kind: LVMNode
metadata:
  creationTimestamp: "2022-10-24T14:13:26Z"
  generation: 12
  name: worker-1
  namespace: openebs
  ownerReferences:
```
```
$ kubectl -n kube-system get csistoragecapacities csisc-nbcxt -oyaml
apiVersion: storage.k8s.io/v1beta1
kind: CSIStorageCapacity
metadata:
  creationTimestamp: "2022-10-24T14:13:44Z"
  generateName: csisc-
  labels:
    csi.storage.k8s.io/drivername: local.csi.openebs.io
    csi.storage.k8s.io/managed-by: external-provisioner
  name: csisc-nbcxt
  namespace: kube-system
  ownerReferences:
```