openebs / lvm-localpv

Dynamically provision stateful, persistent, node-local volumes and filesystems for Kubernetes, integrated with a backend LVM2 data storage stack.
Apache License 2.0

How can I create a static PV? #224

Closed cdevacc1 closed 1 month ago

cdevacc1 commented 1 year ago

What steps did you take and what happened: This YAML file creates a PV, but no LV appears on my disk.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: openebs-lvmpv
  csi:
    driver: local.csi.openebs.io
    volumeHandle: test-pv

What did you expect to happen:

$ sudo lvscan
ACTIVE  '/dev/vg01/test-pv' [5.00 GiB] inherit

Environment:

abhilashshetty04 commented 10 months ago

@cdevacc1 Thanks for reporting the issue. Did the LVMVolume object get created? Can you share the LVM volume controller log for this?

Jean-Daniel commented 3 months ago

Hello,

I have the same issue. I'm trying to create a PV, but whatever I do, the controller ignores it and does not create a matching LVMVolume object.

I tried to replicate a PV generated by the controller from a PVC like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Gi
  csi:
    driver: local.csi.openebs.io
    fsType: ext4
    volumeAttributes:
      openebs.io/cas-type: localpv-lvm
      openebs.io/volgroup: localpv
    volumeHandle: test-pv
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: openebs.io/nodename
          operator: In
          values:
          - worker-2.cluster
  persistentVolumeReclaimPolicy: Delete
  storageClassName: localpv-delete
  volumeMode: Filesystem

When creating the PV, I only get this log from the controller:

I0331 14:39:28.757229       1 controller.go:1151] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{test-pv    62065ecd-4ba4-4be7-a949-f21c49b2f30d 330330598 0 2024-03-31 14:39:28 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pv-protection] [{kubectl Apply v1 2024-03-31 14:39:28 +0000 UTC FieldsV1 {"f:spec":{"f:accessModes":{},"f:capacity":{"f:storage":{}},"f:csi":{"f:driver":{},"f:fsType":{},"f:volumeAttributes":{"f:openebs.io/cas-type":{},"f:openebs.io/volgroup":{}},"f:volumeHandle":{}},"f:nodeAffinity":{"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} }]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{5368709120 0} {<nil>} 5Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:local.csi.openebs.io,VolumeHandle:test-pv,ReadOnly:false,FSType:ext4,VolumeAttributes:map[string]string{openebs.io/cas-type: localpv-lvm,openebs.io/volgroup: localpv,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteOnce],ClaimRef:nil,PersistentVolumeReclaimPolicy:Delete,StorageClassName:localpv-delete,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:openebs.io/nodename,Operator:In,Values:[worker-2.cluster],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Pending,Message:,Reason:,},}
I0331 14:39:28.766192       1 controller.go:1151] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{test-pv    62065ecd-4ba4-4be7-a949-f21c49b2f30d 330330600 0 2024-03-31 14:39:28 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pv-protection] [{kubectl Apply v1 2024-03-31 14:39:28 +0000 UTC FieldsV1 {"f:spec":{"f:accessModes":{},"f:capacity":{"f:storage":{}},"f:csi":{"f:driver":{},"f:fsType":{},"f:volumeAttributes":{"f:openebs.io/cas-type":{},"f:openebs.io/volgroup":{}},"f:volumeHandle":{}},"f:nodeAffinity":{"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-03-31 14:39:28 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{5368709120 0} {<nil>} 5Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:local.csi.openebs.io,VolumeHandle:test-pv,ReadOnly:false,FSType:ext4,VolumeAttributes:map[string]string{openebs.io/cas-type: localpv-lvm,openebs.io/volgroup: 
localpv,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteOnce],ClaimRef:nil,PersistentVolumeReclaimPolicy:Delete,StorageClassName:localpv-delete,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:openebs.io/nodename,Operator:In,Values:[worker-2.cluster],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Available,Message:,Reason:,},}

No LVMVolume created.
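For anyone hitting the same symptom, it can help to confirm directly whether any LVMVolume custom resources exist. A sketch, assuming a default install where the CRD group is `local.openebs.io` and the resources live in the `openebs` namespace (adjust for your deployment):

```shell
# List LVMVolume CRs created by the lvm-localpv controller.
# "openebs" is the default install namespace; yours may differ.
kubectl get lvmvolumes.local.openebs.io -n openebs

# Inspect a specific volume's volGroup, capacity, and state.
kubectl describe lvmvolumes.local.openebs.io <volume-name> -n openebs
```

If the list is empty after creating a PV by hand, that matches the behavior described here: the controller never saw a provisioning request.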

Jean-Daniel commented 3 months ago

After reading the CSI spec, I now understand this is not how it is supposed to work. The CSI CreateVolume API is only invoked for PVCs, and a static PV expects the underlying storage (in this case, the LVM volume on the host) to be provisioned manually.
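In other words, for a static PV the logical volume has to exist on the target node before the PV is applied. A minimal sketch of the manual steps, assuming the `vg01` volume group from the original report and an ext4 filesystem matching the PV's `fsType` (names are illustrative):

```shell
# On the node that will host the volume:
# create a 5 GiB logical volume named test-pv in volume group vg01 ...
sudo lvcreate --name test-pv --size 5G vg01

# ... and put a filesystem on it, matching fsType in the PV spec.
sudo mkfs.ext4 /dev/vg01/test-pv

# Verify the LV now exists.
sudo lvscan
```

With the LV and filesystem in place, the static PV manifest above can then be applied and mounted by a pod via the CSI node plugin.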

orville-wright commented 3 months ago

@Jean-Daniel @cdevacc1 @abhilashshetty04 We've just coded the LVM Dynamic auto-provisioning feature for Mayastor, and enabled it as an experimental feature.

That new code auto-provisions all LVM entities (PV, VG, and LV) as well as the filesystem; there is no need for manual LVM provisioning by the user.

The code is currently being tested as part of Mayastor as an evolution of our DiskGroup. You can now have SPDK- or LVM-backed managed storage.

Now that we have all that auto-provisioning logic coded, our plan is to roll that out to the LVM-LocalPV Data-Engine also.

Irrespective of what the CSI spec says, we strongly believe our user-experience strategy should be that the user never needs to interact with LVM manually.

Note: the current code doesn't support RAIDed LVs yet; that's definitely coming next. (We're also enabling the same dynamic auto-provisioning for ZFS-LocalPV.)

Hope this helps.

dsharma-dc commented 1 month ago

As mentioned in the earlier comment from Jean, the LVM LocalPV provisioner is a dynamic provisioner: PV creation is executed as part of PVC creation and binding. The resulting PV may be created immediately, or only when the first consumer/application tries to use the PVC, depending on the volume binding mode. I am closing this issue; please let us know if you have any further questions about this.
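To illustrate the dynamic path: a StorageClass with `volumeBindingMode: WaitForFirstConsumer` defers LV creation until a pod using the claim is scheduled, while `Immediate` creates it at bind time. A minimal sketch following the lvm-localpv StorageClass parameters (the volume group name `lvmvg` is an assumption; substitute your own VG):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"   # assumed VG name; must already exist on the node
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: openebs-lvmpv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Creating this PVC (rather than a hand-written PV) is what triggers the controller's CreateVolume call, which in turn creates the LVMVolume object and the LV on the node.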