kubernetes-csi / csi-driver-nfs

This driver allows Kubernetes to access an NFS server on a Linux node.

${pvc.metadata.name} returns the PVC's UID, not its logical name #747

Open jnm27 opened 2 months ago

jnm27 commented 2 months ago

What happened: In the subDir parameter of the StorageClass, I am using subDir: ${pvc.metadata.namespace}/${pvc.metadata.name}

The namespace is correct, but the name resolves to "prime-$UID", such as: prime-968efac8-99c2-430d-8731-7714e424ad44

This leaves no way to identify the disk on the NFS server, especially if the host on which the PVC was originally created has been lost.

The use case here is that a user could either re-attach or delete a PVC when re-installing a host and re-creating the VMs under that host using NFS-backed storage.
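
For context, the StorageClass looks roughly like this (a sketch: the server name is a placeholder, not my real value):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com   # placeholder
  share: /volumename
  subDir: ${pvc.metadata.namespace}/${pvc.metadata.name}
reclaimPolicy: Retain
mountOptions:
  - nfsvers=4.1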


andyzhangx commented 2 months ago

Then what's incorrect: ${pvc.metadata.name}? What value of subDir do you want created?

jnm27 commented 2 months ago

${pvc.metadata.name} should be the name, not the UID. Here is the PVC in question:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/allowClaimAdoption: "true"
    cdi.kubevirt.io/createdForDataVolume: 85538371-4b3c-49a6-b226-fb2cfbb17aad
    cdi.kubevirt.io/storage.condition.running: "false"
    cdi.kubevirt.io/storage.condition.running.message: Import Complete
    cdi.kubevirt.io/storage.condition.running.reason: Completed
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.pod.phase: Succeeded
    cdi.kubevirt.io/storage.pod.restarts: "0"
    cdi.kubevirt.io/storage.populator.progress: 100.0%
    cdi.kubevirt.io/storage.preallocation.requested: "false"
    cdi.kubevirt.io/storage.usePopulator: "true"
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
    volume.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
  creationTimestamp: "2024-08-26T22:41:54Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    alerts.k8s.io/KubePersistentVolumeFillingUp: disabled
    app: containerized-data-importer
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
    app.kubernetes.io/part-of: hyperconverged-cluster
    app.kubernetes.io/version: 4.16.0
    kubevirt.io/created-by: 6764808e-f2ed-49b6-bfb0-76ad746363a8
  name: vm01-root
  namespace: default
  ownerReferences:
  - apiVersion: cdi.kubevirt.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: DataVolume
    name: vm01-root
    uid: 85538371-4b3c-49a6-b226-fb2cfbb17aad
  resourceVersion: "24897"
  uid: 152dfdb4-599d-4643-b52f-57212cdd8554

${pvc.metadata.namespace}/${pvc.metadata.name} unintuitively yields default/prime-152dfdb4-599d-4643-b52f-57212cdd8554 here instead of default/vm01-root.

andyzhangx commented 2 months ago

Can you share the CreateVolume-related logs from the CSI driver controller pod? From our e2e tests, "subDir":"${pvc.metadata.namespace}/${pvc.metadata.name}" is parsed correctly:

[pod/csi-nfs-controller-74fc79867-m6s42/nfs] I0827 23:53:55.727919 1 utils.go:110] GRPC call: /csi.v1.Controller/CreateVolume
[pod/csi-nfs-controller-74fc79867-m6s42/nfs] I0827 23:53:55.727939 1 utils.go:111] GRPC request: {"capacity_range":{"required_bytes":10737418240},"name":"pvc-424e942c-486c-4f6c-bd3a-6440a42b53ea","parameters":{"csi.storage.k8s.io/pv/name":"pvc-424e942c-486c-4f6c-bd3a-6440a42b53ea","csi.storage.k8s.io/pvc/name":"pvc-h9j2r","csi.storage.k8s.io/pvc/namespace":"nfs-6704","mountPermissions":"0755","onDelete":"archive","server":"nfs-server.default.svc.cluster.local","share":"/","subDir":"${pvc.metadata.namespace}/${pvc.metadata.name}"},"secrets":"stripped","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
...
[pod/csi-nfs-controller-74fc79867-m6s42/nfs] I0827 23:53:55.783217 1 utils.go:117] GRPC response: {"volume":{"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-424e942c-486c-4f6c-bd3a-6440a42b53ea","csi.storage.k8s.io/pvc/name":"pvc-h9j2r","csi.storage.k8s.io/pvc/namespace":"nfs-6704","mountPermissions":"0755","onDelete":"archive","server":"nfs-server.default.svc.cluster.local","share":"/","subDir":"nfs-6704/pvc-h9j2r"},"volume_id":"nfs-server.default.svc.cluster.local##nfs-6704/pvc-h9j2r#pvc-424e942c-486c-4f6c-bd3a-6440a42b53ea#archive"}}

jnm27 commented 2 months ago

Here are the equivalent logs:

I0828 15:11:40.274597       1 utils.go:109] GRPC call: /csi.v1.Controller/CreateVolume
I0828 15:11:40.274621       1 utils.go:110] GRPC request: {"capacity_range":{"required_bytes":136348168127},"name":"pvc-3c9aeec9-39e0-45ee-ad2a-0cd1c6b3d661","parameters":{"csi.storage.k8s.io/pv/name":"pvc-3c9aeec9-39e0-45ee-ad2a-0cd1c6b3d661","csi.storage.k8s.io/pvc/name":"prime-3cbee8d9-0d84-4df7-820e-67fee7f20ac0","csi.storage.k8s.io/pvc/namespace":"default","server":"serverip","share":"/volumename","subDir":"procname/${pvc.metadata.namespace}/${pvc.metadata.name}"},"volume_capabilities":[{"AccessType":{"Mount":{"mount_flags":["nfsvers=4.1"]}},"access_mode":{"mode":5}}]}
...
GRPC response: {"volume":{"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-3c9aeec9-39e0-45ee-ad2a-0cd1c6b3d661","csi.storage.k8s.io/pvc/name":"prime-3cbee8d9-0d84-4df7-820e-67fee7f20ac0","csi.storage.k8s.io/pvc/namespace":"default","server":"serverip","share":"/volumename","subDir":"procname/default/prime-3cbee8d9-0d84-4df7-820e-67fee7f20ac0"},"volume_id":"serverip#volumename#procname/default/prime-3cbee8d9-0d84-4df7-820e-67fee7f20ac0#pvc-3c9aeec9-39e0-45ee-ad2a-0cd1c6b3d661#"}}

Reminder that this is the OpenShift fork (https://github.com/openshift/csi-driver-nfs); do you think this is a problem with that fork, or with OpenShift itself, rather than with the upstream here?

andyzhangx commented 2 months ago

The PVC name passed to the CSI driver is "csi.storage.k8s.io/pvc/name":"prime-3cbee8d9-0d84-4df7-820e-67fee7f20ac0".
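
The driver just substitutes the template variables with whatever arrives in those parameters. Reformatting the request from your log for readability:

parameters:
  csi.storage.k8s.io/pvc/name: prime-3cbee8d9-0d84-4df7-820e-67fee7f20ac0   # substituted for ${pvc.metadata.name}
  csi.storage.k8s.io/pvc/namespace: default                                 # substituted for ${pvc.metadata.namespace}
  subDir: procname/${pvc.metadata.namespace}/${pvc.metadata.name}

So the substitution itself works as designed; the unexpected part is the prime-<UID> value arriving from outside this driver.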

jnm27 commented 2 months ago

Right. Where does that come from?

andyzhangx commented 2 months ago

> Right. Where does that come from?

@jnm27 it's injected by the csi-provisioner

jnm27 commented 2 months ago

OK... who provides the csi-provisioner? What's the fix? Sorry for all the questions.

andyzhangx commented 2 months ago

It's https://github.com/kubernetes-csi/external-provisioner:

> --extra-create-metadata: Enables the injection of extra PVC and PV metadata as parameters when calling CreateVolume on the driver (keys: "csi.storage.k8s.io/pvc/name", "csi.storage.k8s.io/pvc/namespace", "csi.storage.k8s.io/pv/name")
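
As a sketch, the flag sits on the csi-provisioner sidecar in the controller Deployment, roughly like this (image tag and the other args here are illustrative, not taken from your cluster):

containers:
  - name: csi-provisioner
    image: registry.k8s.io/sig-storage/csi-provisioner:v4.0.0   # illustrative tag
    args:
      - "--csi-address=$(ADDRESS)"
      - "--leader-election"
      - "--extra-create-metadata=true"   # injects the csi.storage.k8s.io/pvc/* keys into CreateVolume parameters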

jnm27 commented 2 months ago

Well, I tried removing the --extra-create-metadata argument from the provisioner Deployment spec, and then the variables don't get replaced at all:

├── ${pvc.metadata.namespace}
│   └── ${pvc.metadata.name}
│       └── disk.img

You mean I should open a ticket at https://github.com/kubernetes-csi/external-provisioner instead?

andyzhangx commented 2 months ago

Can you try the https://github.com/kubernetes-csi/csi-driver-nfs project? At least from the e2e test logs, this project works.

jnm27 commented 2 months ago

I see the same behavior with this project at v4.8.0.

jnm27 commented 1 month ago

@andyzhangx any other thoughts? Do you think it's something particular to how OpenShift interfaces with the CSI driver, and I should open a Red Hat ticket instead? It would be really useful to pinpoint, from the perspective of this driver, what exactly OpenShift is doing wrong before going to them, though.