Open tcoupin opened 4 years ago
$ k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-drive-preprod-mariadb-galera-0 Bound pvc-5cb1551b-8958-4c4f-8298-42c8c09ab896 1Gi RWO rook-ceph-block 35h
data-drive-preprod-mariadb-tooling-backup Bound pvc-6ccbd327-3f9f-4c5a-a70b-3575c19d502b 500Mi RWX rook-cephfs 57d
drive-preprod-ird-nextcloud-ncdata Bound pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0 1Gi RWX rook-cephfs 57d
$ k get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0 1Gi RWX Delete Bound sandbox/drive-preprod-xxx-nextcloud-ncdata rook-cephfs 57d
pvc-5cb1551b-8958-4c4f-8298-42c8c09ab896 1Gi RWO Retain Bound sandbox/data-drive-preprod-mariadb-galera-0 rook-ceph-block 5d22h
pvc-6ccbd327-3f9f-4c5a-a70b-3575c19d502b 500Mi RWX Delete Bound sandbox/data-drive-preprod-mariadb-tooling-backup rook-cephfs 57d
@tcoupin thanks for reporting the ticket 👏! Can you give more information by running kubectl df-pv -v trace
and then searching for the specific PVs? I'd like to inspect the JSON returned from the node that has those pods (obviously strip any PII first if you need to).
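For context on what that trace contains: df-pv reads each node's kubelet Summary API (`/stats/summary`), whose per-pod volume entries carry `capacityBytes`, `usedBytes`, and `pvcRef`. A minimal sketch of pulling the per-PVC usage out of such a payload (the pod name and the numbers below are made up for illustration; the field names follow the kubelet Summary API):

```python
import json

# Hypothetical excerpt of a kubelet /stats/summary response.
summary = json.loads("""
{
  "pods": [
    {
      "podRef": {"name": "nextcloud-0", "namespace": "sandbox"},
      "volume": [
        {
          "name": "ncdata",
          "pvcRef": {"name": "drive-preprod-ird-nextcloud-ncdata",
                     "namespace": "sandbox"},
          "capacityBytes": 1073741824,
          "usedBytes": 536870912,
          "availableBytes": 536870912
        }
      ]
    }
  ]
}
""")

# Walk every pod's volumes and report usage for PVC-backed ones,
# which is essentially what df-pv tabulates.
for pod in summary["pods"]:
    for vol in pod.get("volume", []):
        pvc = vol.get("pvcRef")
        if not pvc:
            continue  # skip non-PVC volumes (configmaps, secrets, ...)
        pct = 100.0 * vol["usedBytes"] / vol["capacityBytes"]
        print(f'{pvc["namespace"]}/{pvc["name"]}: {pct:.1f}% used')
# → sandbox/drive-preprod-ird-nextcloud-ncdata: 50.0% used
```

The bug report is that `capacityBytes` for the CephFS-backed volumes contains the whole cluster's capacity rather than the PVC size.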
@tcoupin it looks like you hit an auth error, unrelated to this, so these logs are not helpful. Can you try to reproduce the exact output above and send the trace logs of that run?
It appears to me that this is an issue with the way CephFS reports inodes and capacity; see https://tracker.ceph.com/issues/24849
It might be fixed in a newer Ceph version.
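The mechanism behind that tracker issue: the kubelet (and hence df-pv) gets volume capacity from a statfs/statvfs call on the mount point, and a CephFS mount without a quota applied answers statfs with cluster-wide numbers. A minimal sketch of that capacity calculation, assuming a POSIX system:

```python
import os

def volume_stats(path):
    """Compute (capacity_bytes, available_bytes) for a mount point the
    way df-style tools do: multiply statvfs block counts by block size."""
    st = os.statvfs(path)
    capacity = st.f_blocks * st.f_frsize
    available = st.f_bavail * st.f_frsize
    return capacity, available

cap, avail = volume_stats("/")
print(cap, avail)
```

Run against a CephFS mount with no quota set, `capacity` here would be the whole Ceph cluster's size, which matches the symptom in this issue.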
Hi,
For some PVs the results are strange: I see the whole storage capacity for the last 3 PVs instead of the PV capacity. The first uses the rook-ceph-block StorageClass, and the last 3 use rook-cephfs. Do you think this is related to df-pv or to the CSI driver?