StefanSa opened this issue 2 years ago
The size as viewed inside the filesystem (inside the workload pod) may be very different from the size at the block level (the actual size of the Longhorn volume, which is also the size of the replica folder on the host).
We have a document explaining this behavior: https://longhorn.io/docs/1.2.2/volumes-and-nodes/volume-size/. Please see point #3 (Delete data#1 from the mount point) in the document.
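To see the two views side by side, you can compare the filesystem-level and the block-level numbers directly. A minimal sketch, assuming /data is the mount point inside the pod; the replica folder name is illustrative, and /var/lib/longhorn is only the default data path:
df -h /data                                      # size as the filesystem inside the pod reports it
du -sh /var/lib/longhorn/replicas/pvc-example-*  # actual block-level usage of the replica on the host
After deleting files from the mount point, the df numbers shrink, but the replica folder size typically does not, which is exactly the behavior point #3 describes.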
Some side questions: why are there so many control plane nodes (9) versus the small number of worker nodes (3)? Also, the network resources do not look sufficient; we recommend 10 Gbit.
Thank you for the explanation. Is this behaviour correct? No snapshot was ever generated or deleted. @PhanLe1010, any help here?
Sorry, my mistake: 3 control plane and 3 worker nodes, also a 10 Gbit NIC.
Maybe the same problem as here: #1555
We calculate the size with the command below.
stat /var/lib/longhorn/ -fc '{"path":"%n","fsid":"%i","type":"%T","freeBlock":%f,"totalBlock":%b,"blockSize":%s}'
Would you mind executing the command on the host? Besides that, do you know what the file system type (ext4/xfs/btrfs...) is on the host?
Hi @jenting, stat /mnt/san/:
stat /mnt/san/ -fc '{"path":"%n","fsid":"%i","type":"%T","freeBlock":%f,"totalBlock":%b,"blockSize":%s}'
{"path":"/mnt/san/","fsid":"52ebbce0b07f8675","type":"ext2/ext3","freeBlock":355118669,"totalBlock":624392893,"blockSize":4096}
The filesystem on /mnt/san is ext4.
Your volume has a nominal size of 400 GB; over time the FS will write to every block.
Since there is no trim implementation, the old blocks do not get released, i.e. the data remains in them.
It is the same as with a physical disk: when you delete a file, the contents don't actually get removed.
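For contrast, on a device that supports discard, the filesystem can hand freed blocks back to the block layer with fstrim; the sketch below only illustrates what is missing here, since Longhorn volumes did not support discard at the time (/mnt/data is a placeholder mount point):
fstrim -v /mnt/data   # on a discard-capable device, prints how many bytes were trimmed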
If you don't require a 400 GB maximum size, then consider using an appropriately sized volume for your workload. You can always expand the size of a PVC after the fact.
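Expanding a PVC is a standard Kubernetes operation; the PVC name and target size below are illustrative, and the StorageClass must have allowVolumeExpansion: true for this to work:
kubectl patch pvc my-data-pvc -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'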
Hi @joshimoo @jenting, thanks for your explanations. We understand that as long as Longhorn doesn't provide trim support or anything like that, we will run our "delete-heavy workloads" on OpenEBS "local pv".
What is the solution?
We are investigating volume trimming. The effort is tracked at https://github.com/longhorn/longhorn/issues/836
Describe the bug
I don't understand the current size of 159Gi displayed in the WebUI; in fact, only 9.9Gi is used in the pod. In the mounted PVC you can only see the head image, but no snapshot. For info, there is only one replica. The pod and the mounted PVC are on different nodes.
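For reference, a Longhorn replica folder on the host holds the live head image plus one image file per snapshot, so a listing like the sketch below (path and volume name are illustrative) shows at a glance whether any snapshots exist:
ls /var/lib/longhorn/replicas/pvc-example-abc123/
# volume-head-000.img  volume-head-000.img.meta  volume.meta  revision.counter
# a volume-snap-<name>.img file would appear here for each snapshot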
Volume Details
Mounted PVC folder:
df -h in the affected pod:
WebUI: no snapshot here, also not with "Show System Hidden"
Expected behavior
There should not be such a big difference between the current size and the actual size when there are no snapshots.
Log or Support bundle
If applicable, add the Longhorn managers' log or support bundle when the issue happens. You can generate a Support Bundle using the link at the footer of the Longhorn UI.
Environment
Additional context
Add any other context about the problem here.