erictarrence closed this issue 2 years ago
After today's verification, the disk space data shows there is indeed a problem.
The disk space for the busybox pod's /mnt/openebs-csi mount point is displayed as follows:
kubectl exec -it busybox-cstor-test -- df -hT
Filesystem           Type     Size      Used   Available Use% Mounted on
overlay              overlay  10.0G     7.1G   2.9G      71%  /
tmpfs                tmpfs    64.0M     0      64.0M     0%   /dev
tmpfs                tmpfs    3.8G      0      3.8G      0%   /sys/fs/cgroup
/dev/sda             xfs      1017.5M   36.6M  980.8M    4%   /mnt/openebs-csi
The total size of /mnt/openebs-csi is displayed as 1017.5M, but the busybox pod's PVC is only 500Mi:
kubectl get pvc -A
NAMESPACE   NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
default     cstor-busybox-pvc   Bound    pvc-b58fd06e-cb11-407e-a3e9-6830fc461fd6   500Mi      RWO            cstor-csi-disk   17h
The disk space usage is not displayed correctly, so it cannot be monitored.
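To confirm what the kernel inside the pod actually sees, one can compare the size of the backing block device against the PVC request. A minimal check, assuming /dev/sda is the device shown by df above and that the pod's /sys exposes the host block devices:

# Size of the backing device in 512-byte sectors, as seen from inside the pod
kubectl exec -it busybox-cstor-test -- cat /sys/class/block/sda/size

# The capacity the PVC actually requested, for comparison
kubectl get pvc cstor-busybox-pvc -o jsonpath='{.spec.resources.requests.storage}'

If the sector count times 512 is close to 1017.5M rather than 500Mi, the mismatch is already present at the block device level, not introduced by df.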
After upgrading to OpenEBS 3.1, the pod still does not display the PVC disk space accurately.
The PVC size is set to 50Mi, yet it shows as 1017.5M:
cat /tmp/busybox.yaml   # mounts the 50Mi PVC
apiVersion: v1
kind: Pod
metadata:
  name: busybox-cstor-test
  namespace: default
spec:
  containers:
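The manifest above is truncated; for context, a complete pod spec of this shape would typically look like the sketch below. The image, command, and claimName are illustrative assumptions reconstructed from the pod and PVC names in this issue, not the exact file used here.

apiVersion: v1
kind: Pod
metadata:
  name: busybox-cstor-test
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox              # assumed image
    command: ["sleep", "3600"]  # keep the pod running for df checks
    volumeMounts:
    - name: data
      mountPath: /mnt/openebs-csi
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: cstor-busybox-pvc   # assumed to match the 50Mi PVC below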
/ # df -hT | grep -E "Filesystem|openebs"
Filesystem           Type     Size      Used   Available Use% Mounted on
/dev/sda             xfs      1017.5M   33.6M  983.9M    3%   /mnt/openebs-csi
kubectl get pvc -A
NAMESPACE   NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
default     cstor-busybox-pvc   Bound    pvc-46b6d6d3-079f-4f04-b0f8-a34f65c4abad   50Mi       RWO            cstor-csi-disk   36m
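To see what capacity the cStor control plane thinks it provisioned (as opposed to what the filesystem reports), the CStorVolume and CStorVolumeConfig objects can be inspected; assuming cStor CSI with the default openebs namespace:

# Capacity recorded on the cStor volume itself
kubectl get cstorvolume -n openebs

# The volume config that the CSI provisioner created from the PVC
kubectl get cvc -n openebs

If these also report roughly 1Gi for a 50Mi PVC, the rounding happens at provisioning time rather than inside the pod.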
Issues go stale after 90d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.
Testing...
Download and write data to the OpenEBS PVC:
wget http://192.168.1.254/BaseOS/Packages/python3-samba-4.11.2-13.el8.x86_64.rpm -O aaaa.rpm
Then delete all the data on the OpenEBS PVC:
rm -f *.rpm
Checking again with "kubectl get cvr" and "kubectl get cspi -n openebs", the disk usage displayed is unchanged from before the RPM file was deleted. The disk space data cannot be viewed correctly, so I dare not use this in a production system.
kubectl get cvr -n openebs | grep 6830fc461fd6
pvc-b58fd06e-cb11-407e-a3e9-6830fc461fd6-cstor-disk-pool-2kdv   51.2M   55.6M   Healthy   40m
pvc-b58fd06e-cb11-407e-a3e9-6830fc461fd6-cstor-disk-pool-8bd2   51.2M   55.6M   Healthy   40m
pvc-b58fd06e-cb11-407e-a3e9-6830fc461fd6-cstor-disk-pool-c8nl   51.2M   55.6M   Healthy   40m

[root@test-centos8-kvm11 ~]# kubectl get cspi -n openebs
NAME                   HOSTNAME   FREE    CAPACITY   READONLY   PROVISIONEDREPLICAS   HEALTHYREPLICAS   STATUS   AGE
cstor-disk-pool-2kdv   node1      3770M   4810M      false      4                     4                 ONLINE   100d
cstor-disk-pool-8bd2   node2      3770M   4810M      false      4                     4                 ONLINE   100d
cstor-disk-pool-c8nl   node3      3770M   4810M      false      4                     4                 ONLINE   100d
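cStor replicas live on copy-on-write (ZFS-based) storage, so space freed by rm inside the filesystem is not returned to the pool until the filesystem sends discard/TRIM requests down to the volume, and the cvr/cspi numbers come from the pool, not from df. A hedged experiment, assuming the busybox image ships the fstrim applet and the cStor target honors discards (it may simply report the operation as unsupported):

# Ask XFS to discard unused blocks back to the underlying volume
kubectl exec -it busybox-cstor-test -- fstrim -v /mnt/openebs-csi

# Then re-check whether the replica usage dropped
kubectl get cvr -n openebs | grep 6830fc461fd6

If discard is not supported end to end, pool usage only ever growing, as observed here, would be consistent with copy-on-write behavior rather than a pure display bug.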
kubectl describe sc cstor-csi-disk -n openebs
Name:                  cstor-csi-disk
IsDefaultClass:        No
Annotations:           kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"cstor-csi-disk"},"parameters":{"cas-type":"cstor","cstorPoolCluster":"cstor-disk-pool","fsType":"xfs","replicaCount":"3"},"provisioner":"cstor.csi.openebs.io"}
Provisioner:           cstor.csi.openebs.io
Parameters:            cas-type=cstor,cstorPoolCluster=cstor-disk-pool,fsType=xfs,replicaCount=3
AllowVolumeExpansion:  True
MountOptions:
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:
The disk usage displayed by "kubectl get cvr" and "kubectl get cspi -n openebs" only ever increases, never decreases. I dare not use this in a production system.
My OpenEBS version is 3.0.0. Does OpenEBS 3.2 solve this problem?