ceph / ceph-csi

CSI driver for Ceph
Apache License 2.0

Cephfs based pvc causes KubePersistentVolumeInodesFillingUp alert #3713

Closed: adux6991 closed this issue 1 year ago

adux6991 commented 1 year ago

Describe the bug

A pod with a CephFS-based PVC shows limited inode information, causing Prometheus to raise the KubePersistentVolumeInodesFillingUp alert.

Steps to reproduce

  1. create cephfs in ceph cluster
  2. create storage class and other related resources in k8s cluster
  3. create pvc and pod
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: cephfs-demo-pod
    spec:
      containers:
        - name: web-server
          image: nginx:latest
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: mypvc
              mountPath: /var/lib/www
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: cephfs-pvc
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cephfs-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: csi-cephfs-sc
    EOF
  4. kubectl exec into pod
    kubectl exec -it cephfs-demo-pod -- /bin/bash
    root@cephfs-demo-pod:/# df -i
    Filesystem                                                                                                        Inodes  IUsed    IFree IUse% Mounted on
    overlay                                                                                                         36462592 735679 35726913    3% /
    tmpfs                                                                                                           32964258     17 32964241    1% /dev
    tmpfs                                                                                                           32964258     17 32964241    1% /sys/fs/cgroup
    /dev/mapper/ubuntu--vg-lv--0                                                                                    36462592 735679 35726913    3% /etc/hosts
    shm                                                                                                             32964258      1 32964257    1% /dev/shm
    10.20.65.32:6789:/volumes/csi/csi-vol-1765fe5e-bc90-11ed-b91a-962c376839e9/6bb86d4d-a3ab-4dc2-80c8-b75b91489edd    11521      -        -     - /var/lib/www
    tmpfs                                                                                                           32964258      9 32964249    1% /run/secrets/kubernetes.io/serviceaccount
    tmpfs                                                                                                           32964258      1 32964257    1% /proc/acpi
    tmpfs                                                                                                           32964258      1 32964257    1% /proc/scsi
    tmpfs                                                                                                           32964258      1 32964257    1% /sys/firmware
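As a cross-check inside the pod, GNU `stat -f` exposes the same statfs(2) counters that both `df -i` and the kubelet read. A sketch (`MOUNTPATH` defaults to `/` so it runs anywhere; set it to the PVC mount path, `/var/lib/www` in the manifest above, to inspect the CephFS mount):

```shell
# statfs(2) backs both `df -i` and the kubelet's inode metrics.
# GNU stat exposes the raw counters: %c = total inodes, %d = free inodes.
# Set MOUNTPATH=/var/lib/www inside the pod to inspect the CephFS mount;
# it defaults to / so this sketch runs on any Linux box.
mountpath="${MOUNTPATH:-/}"
stat -f -c 'inodes total=%c free=%d' "$mountpath"
```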

Actual results

df -i reports - for IUsed / IFree / IUse% on the CephFS mount. The kubelet treats the missing values as zero free inodes, which makes the KubePersistentVolumeInodesFillingUp alert fire.
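The alert logic can be sketched numerically: the kubernetes-mixin rule compares `kubelet_volume_stats_inodes_free` against `kubelet_volume_stats_inodes`. The 3% threshold below is an assumption about the rule's configuration, but it shows why a `-` read as zero free inodes trips the alert regardless of the exact threshold:

```shell
# With IFree reported as '-', the kubelet publishes 0 free inodes against the
# 11521 total seen in `df -i` above, so free/total = 0 -- below any threshold.
inodes_free=0       # '-' in the df output, read as zero by the kubelet
inodes_total=11521  # Inodes column for the CephFS mount
awk -v f="$inodes_free" -v t="$inodes_total" \
  'BEGIN { if (f / t < 0.03) print "KubePersistentVolumeInodesFillingUp would fire" }'
```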

Expected behavior

df -i (and the kubelet's volume stats) should report the actual inode usage of the CephFS volume.

humblec commented 1 year ago

cc @nixpanic

nixpanic commented 1 year ago

Commit b7703faf37f5905f8e1c83f0c15a01df6cdbb181 should have addressed this. It was merged through PR #3407 and is part of v3.8.0.

Please update to a more recent Ceph-CSI version and let us know if the problem persists.
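One way to check whether a deployment predates the fix is to compare the running cephcsi image tag against v3.8.0. A sketch using GNU `sort -V`; the `tag` value here is a placeholder, and the kubectl namespace and resource names vary by install:

```shell
# version_lt TAG1 TAG2: succeeds if TAG1 sorts strictly before TAG2 (GNU sort -V).
version_lt() {
  [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Placeholder tag; read the real one from your nodeplugin pods, e.g.:
#   kubectl -n <namespace> get pods -o jsonpath='{..image}' | tr ' ' '\n' | grep cephcsi
tag="v3.7.2"
if version_lt "$tag" "v3.8.0"; then
  echo "$tag predates v3.8.0: upgrade Ceph-CSI"
fi
```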

adux6991 commented 1 year ago

Thanks a lot! Deploying a newer version resolved the problem.