Proxmox CSI Plugin

PersistentVolumeClaim and PersistentVolume Parameters (`--extra-create-metadata`) #205

Open adoerler opened 2 months ago

adoerler commented 2 months ago

Add Support for PersistentVolumeClaim and PersistentVolume Parameters (--extra-create-metadata)

Description

Since v1.6.0 of the Kubernetes CSI external-provisioner, the flag --extra-create-metadata exists, which passes the following information as parameters of the CreateVolumeRequest:

csi.storage.k8s.io/pvc/name
csi.storage.k8s.io/pvc/namespace
csi.storage.k8s.io/pv/name

Here you can see the corresponding merge request.

This should allow something like vm-9999-pvc-3b76c8aa-1024-4f2e-88ca-8b3e27e27f65-namespace-pvc_name-pv_name or vm-9999-pvc-namespace-pvc_name-pv_name-3b76c8aa-1024-4f2e-88ca-8b3e27e27f65, which would make it a lot easier to see which VM disk belongs to which Kubernetes resource.
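For illustration only, here is a minimal Go sketch (not the plugin's actual code) of how a CreateVolume handler could pick these parameters out of the request and fold them into a disk name. The buildDiskName helper, the vm-9999 prefix, and the naming scheme are assumptions; a real implementation would also have to respect Proxmox's volume-naming rules and length limits:

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// buildDiskName folds the metadata injected by --extra-create-metadata into a
// Proxmox-style disk name. The parameter keys are the ones the
// external-provisioner sets; the helper itself and the naming scheme are only
// illustrative.
func buildDiskName(vmID int, req *csi.CreateVolumeRequest) string {
	p := req.GetParameters()
	return fmt.Sprintf("vm-%d-%s-%s-%s",
		vmID,
		p["csi.storage.k8s.io/pv/name"],       // e.g. pvc-3b76c8aa-...
		p["csi.storage.k8s.io/pvc/namespace"], // e.g. default
		p["csi.storage.k8s.io/pvc/name"],      // e.g. data
	)
}

func main() {
	// Made-up request, just to show the resulting name.
	req := &csi.CreateVolumeRequest{
		Name: "pvc-3b76c8aa-1024-4f2e-88ca-8b3e27e27f65",
		Parameters: map[string]string{
			"csi.storage.k8s.io/pv/name":       "pvc-3b76c8aa-1024-4f2e-88ca-8b3e27e27f65",
			"csi.storage.k8s.io/pvc/namespace": "namespace",
			"csi.storage.k8s.io/pvc/name":      "pvc_name",
		},
	}
	fmt.Println(buildDiskName(9999, req))
	// vm-9999-pvc-3b76c8aa-1024-4f2e-88ca-8b3e27e27f65-namespace-pvc_name
}
```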

What do you think?

Congratulations on this great Proxmox plugin!

sergelogvinov commented 2 months ago

Thank you for the idea. Well-known clouds use labels/tags for that; unfortunately, Proxmox does not have tags for block devices.

In Kubernetes, a PV does not belong to its PVC; you can change/recreate a PVC and keep the same PV. And Proxmox does not have an API to rename block devices (PVs).

I have noticed that recreating/renaming a PVC is a very common case in Kubernetes. Adding the namespace or PVC name to the PV name could therefore confuse operators in the future.

Do you have any ideas on how we could address this issue?

adoerler commented 2 months ago

Hi @sergelogvinov ,

Well-known clouds use labels/tags for that; unfortunately, Proxmox does not have tags for block devices.

There are no tags, but there are notes, which could be used to give some hints about who created the PV in the first place.

They can be read and set using pvesh or the HTTP API, e.g.:

root@pve7demo1:~# pvesh set /nodes/pve7demo1/storage/blockbridge1/content/vm-100-disk-0 -notes "volume notes for volume0"
root@pve7demo1:~# pvesh get /nodes/pve7demo1/storage/blockbridge1/content
┌────────┬────────────┬────────────────────────────────┬────────────┬───────────┬──────────────────────────┬────────┬───────────┬────────────┬──────────────┬──────┐
│ format │       size │ volid                          │      ctime │ encrypted │ notes                    │ parent │ protected │       used │ verification │ vmid │
╞════════╪════════════╪════════════════════════════════╪════════════╪═══════════╪══════════════════════════╪════════╪═══════════╪════════════╪══════════════╪══════╡
│ raw    │ 132.00 GiB │ blockbridge1:vm-100-disk-0     │ 1645724996 │           │ volume notes for volume0 │        │           │     0.00 B │              │  100 │
└────────┴────────────┴────────────────────────────────┴────────────┴───────────┴──────────────────────────┴────────┴───────────┴────────────┴──────────────┴──────┘

API Doc can be found here.
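For what it's worth, the same operation could also be scripted against the HTTP API. The following is a minimal Go sketch under the assumption that the pvesh call above maps to a PUT on the same API path; the setVolumeNotes helper, host, and token values are made up:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// setVolumeNotes updates the notes attribute of a storage volume via the
// Proxmox HTTP API, mirroring the pvesh call above
// (PUT /nodes/{node}/storage/{storage}/content/{volid}).
// Host, token handling, and TLS setup are simplified placeholders.
func setVolumeNotes(host, tokenID, secret, node, storage, volid, notes string) error {
	endpoint := fmt.Sprintf("https://%s:8006/api2/json/nodes/%s/storage/%s/content/%s",
		host, node, storage, url.PathEscape(volid))

	req, err := http.NewRequest(http.MethodPut, endpoint,
		strings.NewReader(url.Values{"notes": {notes}}.Encode()))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", fmt.Sprintf("PVEAPIToken=%s=%s", tokenID, secret))
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("setting notes on %s failed: %s", volid, resp.Status)
	}
	return nil
}

func main() {
	// Placeholder values; this will only succeed against a real cluster.
	if err := setVolumeNotes("pve7demo1.example.com", "root@pam!csi", "SECRET",
		"pve7demo1", "blockbridge1", "vm-100-disk-0", "volume notes for volume0"); err != nil {
		fmt.Println(err)
	}
}
```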

Unfortunately, the availability of notes depends on the storage backend being used. Currently only Dir, PBS, CephFS, NFS, CIFS, and Blockbridge are supported.

As we are using LINSTOR with the linstor-proxmox plugin, this wouldn't be helpful for us, as the set handler does not seem to be implemented there:

pvesh set /nodes/proxmox01/storage/pve-sp-hdd01-k8s/content -notes "my desc"
No 'set' handler defined for '/nodes/proxmox01/storage/pve-sp-hdd01-k8s/content'

Next "problem" is, that notes are not shown in the proxmox UI.

Proxmox does not have an API to rename block devices (PVs).

This seems to be true as well, unfortunately :-)

I have noticed that recreating/renaming a PVC is a very common case in Kubernetes. Adding the namespace or PVC name to the PV name could therefore confuse operators in the future. Do you have any ideas on how we could address this issue?

Fair point. One could make this feature optional and point out this caveat in the documentation. Or, if notes were available across all backends, each create/rename operation could append some sort of log entry to the PV's notes field.

As I dug deeper into this topic, I came to understand that this won't make much sense until Proxmox either provides a better way of naming PVs or adds a tagging system for these resources.

sergelogvinov commented 2 months ago

Yes, it's sad news. I spoke with the Proxmox team once, and their main point was that a block device belongs to its VM. This "rule" also affects PV snapshots; therefore, there is no way to implement PV snapshots either. ;(