openebs-archive / dynamic-nfs-provisioner

Operator for dynamically provisioning an NFS server on any Kubernetes Persistent Volume. Also creates an NFS volume on the dynamically provisioned server for enabling Kubernetes RWX volumes.
Apache License 2.0

[Question] How to share the same volume across namespaces. #163

Closed Rohithzr closed 1 year ago

Rohithzr commented 1 year ago

Now I am assuming I am doing something wrong.

My use case is a simple one: I have a multi-tenant system where I want shared storage (1 TB) across all tenants, mounted at the /shared-data folder in each pod that I create.

helm-values.yaml

ndm:
  enabled: false
ndmOperator:
  enabled: false
localprovisioner:
  enabled: false
nfs-provisioner:
  enabled: true
mayastor:
  enabled: false

shared-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: common-openebs-rwx
  annotations:
    openebs.io/cas-type: nfsrwx
    cas.openebs.io/config: |
      - name: BackendStorageClass
        value: "csi-cinder-high-speed"
provisioner: openebs.io/nfsrwx
reclaimPolicy: Retain

main-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-001
  namespace: main-workspace
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: common-openebs-rwx
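
For context, each tenant pod is expected to consume this claim at /shared-data, roughly along the lines of the sketch below (the pod name and image are placeholders, not from my actual deployment):

# Illustrative only: a tenant pod mounting pvc-001 at /shared-data
apiVersion: v1
kind: Pod
metadata:
  name: tenant-app            # placeholder name
  namespace: main-workspace
spec:
  containers:
  - name: app
    image: busybox:1.36       # placeholder image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: shared-data
      mountPath: /shared-data
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: pvc-001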

Applying the main-pvc creates a volume on the csi-cinder-high-speed storage class with the name "pvc-08cdeb38-aff5-4060-bed1-fe4ac38d00b6". Now I want to use the same volume in a different namespace.

shared-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-002
  namespace: client-workspace
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: common-openebs-rwx
  volumeMode: Filesystem
  volumeName: pvc-08cdeb38-aff5-4060-bed1-fe4ac38d00b6

If I apply the above PVC, I get the following error:

volume "pvc-08cdeb38-aff5-4060-bed1-fe4ac38d00b6" already bound to a different claim.

If I remove the volume details from that PVC, it creates a new volume on the csi-cinder-high-speed storage class.

What am I missing here?

dsharma-dc commented 1 year ago

I don't think it's allowed by design to independently attach the backend PV to a different namespace. @niladrih?

Rohithzr commented 1 year ago

@dsharma-dc Then I would ask what use case an NFS serves if two systems cannot share files. I know it works within the same namespace, but shouldn't there be a way to share across namespaces?

dsharma-dc commented 1 year ago

@Rohithzr Sharing across namespaces would, I think, mean breaching the multi-tenancy boundary. The first namespaced object here is the PVC. Drawing an analogy to a traditional NAS filer, volumes in different tenant namespaces can't know or share filesystem IDs and file handles for NFS volumes. Similarly, a PVC bound within a namespace represents that unit of separation here. @avishnu @niladrih @Abhinandan-Purkait, if you have any more context around the history of nfs-provisioner, kindly elaborate here. Maybe there is a better explanation available.
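
Roughly speaking (this is a sketch, not actual output from your cluster), once a PV is bound its spec carries a claimRef pointing to exactly one namespaced PVC, which is why a claim from another namespace cannot bind to the same PV:

# Illustrative excerpt of a bound PV: claimRef ties it to exactly one PVC
# in exactly one namespace, so a second PVC cannot bind to the same PV.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-08cdeb38-aff5-4060-bed1-fe4ac38d00b6
spec:
  claimRef:
    kind: PersistentVolumeClaim
    name: pvc-001              # whichever claim first bound this PV
    namespace: main-workspace
  # remaining spec fields omitted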

Rohithzr commented 1 year ago

@dsharma-dc Then maybe OpenEBS should provide a way to skip the PVC involvement and offer this as a feature. I say this because OpenEBS already has all the components available to make it happen.

I have now deployed a third-party NFS server and used a direct NFS mount to share the folder. I also faced a scenario where I could not use the NFS mount directly; there I created a PV and PVC in the same namespace pointing to the NFS server.
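
For reference, the "direct NFS mount" mentioned above is a plain in-pod nfs volume, along these lines (pod name and image are placeholders; the server and path match the PV manifest below):

# Sketch: mounting the third-party NFS export directly in a pod, without any PVC
apiVersion: v1
kind: Pod
metadata:
  name: nfs-direct-mount-example   # placeholder name
  namespace: client-workspace
spec:
  containers:
  - name: app
    image: busybox:1.36            # placeholder image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: shared-data
      mountPath: /shared-data
  volumes:
  - name: shared-data
    nfs:
      server: "nfs-server-ip"
      path: /mnt/nfs/my-models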

There is also the persistentVolumeReclaimPolicy: Recycle setting, which could be used to signal that this volume is meant to be shared.

FYI: there is no StorageClass named nfs in my cluster; it's just a pointer used to match the PVC to the PV.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-models
  labels:
    storage.k8s.io/name: nfs
spec:
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
    - ReadWriteMany
  capacity:
    storage: 250Gi
  storageClassName: nfs
  persistentVolumeReclaimPolicy: Recycle
  volumeMode: Filesystem
  nfs:
    server: "nfs-server-ip"
    path: /mnt/nfs/my-models
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tx-gen-galileo-001-112233444-nfs-pvc
  namespace: user-user-example-com
  labels:
    storage.k8s.io/name: nfs
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
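
Since a PV can only ever be bound by one PVC, sharing the same NFS export with yet another namespace needs a second, independent PV/PVC pair pointing at the same server and path, for example along these lines (names and the target namespace are illustrative):

# Sketch: a second PV/PVC pair for another namespace, backed by the same NFS export
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-models-client          # illustrative name
  labels:
    storage.k8s.io/name: nfs
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 250Gi
  storageClassName: nfs
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  nfs:
    server: "nfs-server-ip"
    path: /mnt/nfs/my-models
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-models-client-nfs-pvc  # illustrative name
  namespace: another-client-namespace
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
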
dsharma-dc commented 1 year ago

There is currently no plan to address this use case. It might be addressed at some point in the future if there are significant enhancements or design changes.