openebs / dynamic-localpv-provisioner

Dynamically deploy Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes that are provisioned from simple Local-Hostpath storage.
https://openebs.io
Apache License 2.0

xfs quota: wrong soft/hard limits set by openebs provisioner #150

Open · stoneshi-yunify opened this issue 1 year ago

stoneshi-yunify commented 1 year ago

**Describe the bug:** I installed the OpenEBS Local PV provisioner via the Helm chart with XFS quota enabled:

```yaml
    xfsQuota:
      # If true, enables XFS project quota
      enabled: true
      # Detailed configuration options for XFS project quota.
      # If XFS Quota is enabled with the default values, the usage limit
      # is set at the storage capacity specified in the PVC.
      softLimitGrace: "60%"
      hardLimitGrace: "90%"
```
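For reference, the chart was installed along these lines (a sketch, assuming the `openebs/openebs` umbrella chart from the standard repo and a `values.yaml` carrying the settings quoted in this issue; the exact command was not included in the report):

```sh
# Assumption: values.yaml contains the helm values quoted in this issue,
# including the xfsQuota block above.
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs \
  --namespace openebs --create-namespace \
  --version 3.3.1 \
  -f values.yaml
```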

Then I created a PVC with 10Gi capacity and mounted it in a busybox container. However, I found that OpenEBS set soft and hard limits for this PVC that are larger than expected.

```
root@stonetest:~# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
busybox-test   Bound    pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2   10Gi       RWO            openebs-hostpath   4h51m
```

```
root@stonetest:~# kubectl get pv pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2 -o yaml | grep path
    openebs.io/cas-type: local-hostpath
    path: /openebs/local/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2
  storageClassName: openebs-hostpath
```

```
root@stonetest:~# mount | grep 45a4
/dev/vdc on /var/lib/kubelet/pods/6f140003-770a-408c-a281-3b1e1faecf0c/volumes/kubernetes.io~local-volume/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2 type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
```

```
root@stonetest:~# lsblk -f
NAME FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
vdc  xfs                f993bbb1-d875-4436-ab4d-d7275b2c719c   30.6G    39% /var/lib/kubelet/pods/93ae9893-0374-4197-9084-02f64d8aaba6/volumes/kubernetes.io~local-volume/pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c
                                                                            /var/lib/kubelet/pods/6f140003-770a-408c-a281-3b1e1faecf0c/volumes/kubernetes.io~local-volume/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2
                                                                            /openebs
```

```
root@stonetest:~# xfs_quota -x
xfs_quota> print
Filesystem          Pathname
/openebs            /dev/vdc (pquota)
/var/lib/kubelet/pods/6f140003-770a-408c-a281-3b1e1faecf0c/volumes/kubernetes.io~local-volume/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2 /dev/vdc (pquota)
/var/lib/kubelet/pods/93ae9893-0374-4197-9084-02f64d8aaba6/volumes/kubernetes.io~local-volume/pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c /dev/vdc (pquota)
xfs_quota> report
Project quota on /openebs (/dev/vdc)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0                  0          0          0     00  [0 days]
#1                  0   17179872   20401096     00 [--------]
#2                  0   17179872   20401096     00 [--------]
```
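As a side note, the same report can be produced non-interactively, which is convenient when scripting a check like this:

```sh
# -x enables expert mode, -c runs a single xfs_quota command;
# 'report -h' prints the project quota report with human-readable sizes.
xfs_quota -x -c 'report -h' /openebs
```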

The openebs provisioner set the soft limit to 17179872 KB (~16 GiB) and the hard limit to 20401096 KB (~19 GiB), both of which exceed the PVC's 10Gi capacity. I think this is wrong.

The soft limit should be 10Gi × 0.6 and the hard limit should be 10Gi × 0.9, respectively.
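Spelling the arithmetic out (a worked sketch of the reported numbers; note that the observed limits line up with 160% and 190% of the capacity rather than the expected 60% and 90%):

```sh
capacity=$((10 * 1024 * 1024 * 1024))   # 10Gi PVC in bytes
echo $((capacity *  60 / 100))          # expected soft limit:  6442450944 (~6 GiB)
echo $((capacity *  90 / 100))          # expected hard limit:  9663676416 (~9 GiB)
echo $((capacity * 160 / 100))          # observed soft limit: 17179869184 (~16 GiB)
echo $((capacity * 190 / 100))          # observed hard limit: 20401094656 (~19 GiB)
```

The last two values line up (approximately, allowing for block rounding) with the 17179872 / 20401096 limits in the `xfs_quota` report above.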

**Expected behaviour:** The provisioner sets the correct soft and hard limits for the PVC.

**Steps to reproduce the bug:**

- Install the openebs Helm chart with the following values:

```yaml
varDirectoryPath:
  baseDir: "/openebs"

provisioner:
  enabled: false

localprovisioner:
  enabled: true
  basePath: "/openebs/local"
  deviceClass:
    enabled: false
  hostpathClass:
    # Name of the default hostpath StorageClass
    name: openebs-hostpath
    # If true, enables creation of the openebs-hostpath StorageClass
    enabled: true
    # Available reclaim policies: Delete/Retain, defaults: Delete.
    reclaimPolicy: Delete
    # If true, sets the openebs-hostpath StorageClass as the default StorageClass
    isDefaultClass: false
    # Path on the host where local volumes of this storage class are mounted under.
    # NOTE: If not specified, this defaults to the value of localprovisioner.basePath.
    basePath: "/openebs/local"
    # Custom node affinity label(s) for example "openebs.io/node-affinity-value"
    # that will be used instead of hostnames
    # This helps in cases where the hostname changes when the node is removed and
    # added back with the disks still intact.
    # Example:
    #          nodeAffinityLabels:
    #            - "openebs.io/node-affinity-key-1"
    #            - "openebs.io/node-affinity-key-2"
    nodeAffinityLabels: []
    # Prerequisite: XFS Quota requires an XFS filesystem mounted with
    # the 'pquota' or 'prjquota' mount option.
    xfsQuota:
      # If true, enables XFS project quota
      enabled: true
      # Detailed configuration options for XFS project quota.
      # If XFS Quota is enabled with the default values, the usage limit
      # is set at the storage capacity specified in the PVC.
      softLimitGrace: "60%"
      hardLimitGrace: "90%"
    # Prerequisite: EXT4 Quota requires an EXT4 filesystem mounted with
    # the 'prjquota' mount option.
    ext4Quota:
      # If true, enables EXT4 project quota
      enabled: false
      # Detailed configuration options for EXT4 project quota.
      # If EXT4 Quota is enabled with the default values, the usage limit
      # is set at the storage capacity specified in the PVC.
      softLimitGrace: "0%"
      hardLimitGrace: "0%"

snapshotOperator:
  enabled: false

ndm:
  enabled: false

ndmOperator:
  enabled: false

ndmExporter:
  enabled: false

webhook:
  enabled: false

crd:
  enableInstall: false

policies:
  monitoring:
    enabled: false

analytics:
  enabled: false

jiva:
  enabled: false
  openebsLocalpv:
    enabled: false
  localpv-provisioner:
    openebsNDM:
      enabled: false

cstor:
  enabled: false
  openebsNDM:
    enabled: false

openebs-ndm:
  enabled: false

localpv-provisioner:
  enabled: false
  openebsNDM:
    enabled: false

zfs-localpv:
  enabled: false

lvm-localpv:
  enabled: false

nfs-provisioner:
  enabled: false
```

- Create a PVC and run a busybox pod with it (see the example manifests after this list).
- Run `xfs_quota -x`, then check the soft/hard limits for the PVC.
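For the second step, manifests along these lines reproduce the setup (illustrative, not copied from the original report; the claim name and StorageClass match the outputs above):

```sh
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: busybox-test
spec:
  storageClassName: openebs-hostpath
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-test
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: busybox-test
EOF
```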

**The output of the following commands will help us better understand what's going on**:

* `kubectl get pods -n <openebs_namespace> --show-labels`

```
root@stonetest:~# kubectl get pods -n openebs --show-labels
NAME                                           READY   STATUS    RESTARTS   AGE     LABELS
openebs-localpv-provisioner-5757b495fc-4zflv   1/1     Running   0          5h16m   app=openebs,component=localpv-provisioner,name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=3.3.0,pod-template-hash=5757b495fc,release=openebs
```


* `kubectl logs <upgrade_job_pod> -n <openebs_namespace>`

```
root@stonetest:~# kubectl -n openebs logs openebs-localpv-provisioner-5757b495fc-4zflv
I1207 03:10:43.992737       1 start.go:66] Starting Provisioner...
I1207 03:10:44.018165       1 start.go:130] Leader election enabled for localpv-provisioner via leaderElectionKey
I1207 03:10:44.018641       1 leaderelection.go:248] attempting to acquire leader lease openebs/openebs.io-local...
I1207 03:10:44.027045       1 leaderelection.go:258] successfully acquired lease openebs/openebs.io-local
I1207 03:10:44.027209       1 controller.go:810] Starting provisioner controller openebs.io/local_openebs-localpv-provisioner-5757b495fc-4zflv_9670bed1-7b43-4c70-b187-7f4e514cd24e!
I1207 03:10:44.027181       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openebs", Name:"openebs.io-local", UID:"4838eadf-1cf6-4cec-af4e-8edafba21e87", APIVersion:"v1", ResourceVersion:"2999260", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openebs-localpv-provisioner-5757b495fc-4zflv_9670bed1-7b43-4c70-b187-7f4e514cd24e became leader
I1207 03:10:44.128323       1 controller.go:859] Started provisioner controller openebs.io/local_openebs-localpv-provisioner-5757b495fc-4zflv_9670bed1-7b43-4c70-b187-7f4e514cd24e!
I1207 03:13:30.749165       1 controller.go:1279] provision "default/busybox-test" class "openebs-hostpath": started
I1207 03:13:30.755533       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"busybox-test", UID:"45a4e4f7-8117-4725-a17b-e3446da4b7a2", APIVersion:"v1", ResourceVersion:"2999559", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/busybox-test"
I1207 03:13:30.757597       1 provisioner_hostpath.go:76] Creating volume pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2 at node with labels {map[kubernetes.io/hostname:stonetest]}, path:/openebs/local/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2,ImagePullSecrets:[]
2022-12-07T03:13:45.869Z INFO app/provisioner_hostpath.go:130 {"eventcode": "local.pv.quota.success", "msg": "Successfully applied quota", "rname": "pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2", "storagetype": "hostpath"}
2022-12-07T03:13:45.869Z INFO app/provisioner_hostpath.go:214 {"eventcode": "local.pv.provision.success", "msg": "Successfully provisioned Local PV", "rname": "pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2", "storagetype": "hostpath"}
I1207 03:13:45.869571       1 controller.go:1384] provision "default/busybox-test" class "openebs-hostpath": volume "pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2" provisioned
I1207 03:13:45.869586       1 controller.go:1397] provision "default/busybox-test" class "openebs-hostpath": succeeded
I1207 03:13:45.869594       1 volume_store.go:212] Trying to save persistentvolume "pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2"
I1207 03:13:45.872951       1 volume_store.go:219] persistentvolume "pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2" saved
I1207 03:13:45.873060       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"busybox-test", UID:"45a4e4f7-8117-4725-a17b-e3446da4b7a2", APIVersion:"v1", ResourceVersion:"2999559", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2
I1207 03:20:00.738275       1 controller.go:1279] provision "xfs/busybox-test" class "openebs-hostpath": started
I1207 03:20:00.746069       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"xfs", Name:"busybox-test", UID:"e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c", APIVersion:"v1", ResourceVersion:"3000306", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "xfs/busybox-test"
I1207 03:20:00.748003       1 provisioner_hostpath.go:76] Creating volume pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c at node with labels {map[kubernetes.io/hostname:stonetest]}, path:/openebs/local/pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c,ImagePullSecrets:[]
2022-12-07T03:20:08.822Z INFO app/provisioner_hostpath.go:130 {"eventcode": "local.pv.quota.success", "msg": "Successfully applied quota", "rname": "pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c", "storagetype": "hostpath"}
2022-12-07T03:20:08.822Z INFO app/provisioner_hostpath.go:214 {"eventcode": "local.pv.provision.success", "msg": "Successfully provisioned Local PV", "rname": "pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c", "storagetype": "hostpath"}
I1207 03:20:08.822262       1 controller.go:1384] provision "xfs/busybox-test" class "openebs-hostpath": volume "pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c" provisioned
I1207 03:20:08.822275       1 controller.go:1397] provision "xfs/busybox-test" class "openebs-hostpath": succeeded
I1207 03:20:08.822282       1 volume_store.go:212] Trying to save persistentvolume "pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c"
I1207 03:20:08.826234       1 volume_store.go:219] persistentvolume "pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c" saved
I1207 03:20:08.826654       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"xfs", Name:"busybox-test", UID:"e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c", APIVersion:"v1", ResourceVersion:"3000306", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c
root@stonetest:~#
```




**Environment details:**
- OpenEBS version (use `kubectl get po -n openebs --show-labels`): openebs helm chart v3.3.1
- Kubernetes version (use `kubectl version`): v1.23.10
- Cloud provider or hardware configuration:
- OS (e.g: `cat /etc/os-release`): Ubuntu 22.04 LTS
- kernel (e.g: `uname -a`): Linux stonetest 5.15.0-53-generic #59-Ubuntu SMP Mon Oct 17 18:53:30 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
- others:
avishnu commented 1 month ago

Scoping for v4.3 for investigation.

tiagolobocastro commented 1 month ago

Well, it seems this is working as "expected".

The limit grace is applied on top of the capacity. So if your PVC capacity is 10GiB and your soft limit grace is 60%, the quota is set to 160% of the capacity, and thus 16GiB. And by the way, the maximum limit is double the PVC capacity, i.e. a grace of 100%.
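In other words, the applied limits appear to follow this formula (a minimal sketch of the behaviour described above, not the provisioner's actual code):

```sh
# limit = capacity * (100% + limitGrace), with the grace effectively
# capped at 100% (the limit never exceeds twice the PVC capacity).
capacity=$((10 * 1024 * 1024 * 1024))          # 10GiB PVC
soft_grace=60                                  # softLimitGrace: "60%"
hard_grace=90                                  # hardLimitGrace: "90%"
echo $((capacity * (100 + soft_grace) / 100))  # 17179869184 bytes = 16GiB
echo $((capacity * (100 + hard_grace) / 100))  # 20401094656 bytes = 19GiB
```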

I'm not sure why it is done this way, perhaps because it doesn't make sense to set a size smaller than the capacity, but I do think it is very confusing and we should at least document this better.