slavonnet closed this issue 3 years ago
I expect the main cause of the slowness is that almost all of your disks are very full, with less than 20 GB of free space each, and most of them appear to be almost entirely fragmented.
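As a quick check, pool-wide usage and fragmentation are visible via `zpool list`; the pool name below is a placeholder.

```sh
# Overall usage and fragmentation for a pool named "tank"
zpool list -o name,size,allocated,free,fragmentation,capacity tank
```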
There is 2.94 TB free on the pool. If I do this from a remote host over NFS/iSCSI, the speed will be up to 3 GB/s (we have 40 Gb RDMA InfiniBand).
If you look closely at the `zpool list -v` output, you'll see almost all of your free space is only on `sdf` and `sdg`. The other disks have very little free space, so most of the writes will go to only those two disks, limiting the write throughput compared to writing to all of the disks.
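For readers hitting the same issue, the per-vdev breakdown being described can be inspected with the command below (pool name is a placeholder); an imbalanced FREE column shows which disks new writes can still be spread across.

```sh
# Per-vdev SIZE/ALLOC/FREE/FRAG breakdown; if FREE is concentrated on a
# couple of vdevs, allocations will concentrate on those disks too
zpool list -v tank
```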
Are you comparing synchronous and asynchronous writes? ZFS will buffer async writes up to `dirty_data_max` and make it look like it is writing faster than it actually is.
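One way to make that comparison concrete is sketched below; the paths and sizes are placeholders, and `zfs_dirty_data_max` is the OpenZFS module parameter backing the limit mentioned above.

```sh
# How much dirty (async) data ZFS will buffer before throttling, in bytes
cat /sys/module/zfs/parameters/zfs_dirty_data_max

# Async write: can look fast while data only lands in the dirty buffer
dd if=/dev/zero of=/tank/testfile bs=1M count=8192

# Same write, but with an fsync at the end so the timing includes the flush
dd if=/dev/zero of=/tank/testfile bs=1M count=8192 conv=fsync
```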
Capture the output of `zpool iostat $poolname 5` for a minute or two in both the 'fast' and 'slow' cases, and compare them.
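For example, something like this captures about two minutes of samples to a file for later comparison; the pool name and log file name are arbitrary.

```sh
# 5-second intervals, 24 samples (about 2 minutes); repeat as ...-slow.log
zpool iostat tank 5 24 | tee zpool-iostat-fast.log
```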
I am not an expert, and this may be unrelated to the problem you are reporting, but you have very unusual ashift values from my point of view. All 12 of your Samsung NVMe drives have `ashift=9`. I would assume that this is bad for performance.
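For reference, the ashift actually in use can be verified as below (the pool name is a placeholder); `ashift=12` is commonly recommended for NVMe drives. Note that ashift is fixed when a vdev is created, so changing it means recreating the vdev or pool.

```sh
# ashift as a pool property (OpenZFS 2.x)
zpool get ashift tank

# Per-vdev ashift recorded in the pool configuration
zdb -C tank | grep ashift
```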
Recreating the pool as dRAID fixed the issue.
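For anyone following along, a dRAID layout over hardware like this might be created roughly as below; the parity/data/spare split and device paths are assumptions for illustration, not the exact command used here.

```sh
# Hypothetical dRAID2 layout: 2 parity + 4 data per group, 14 children,
# 1 distributed spare; ashift=12 for 4K-sector NVMe drives
zpool create -o ashift=12 tank draid2:4d:14c:1s \
    /dev/disk/by-id/nvme-dev1 /dev/disk/by-id/nvme-dev2  # ...14 devices total
```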
System: CentOS, kernel 5.12.13-1.el8.elrepo.x86_64, ZFS 2.1.99-247_g8d5f211fc
Hardware: 14 SSDs plus spare
Expected behavior: normal operation
How to reproduce: convert from one ZFS pool to another
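If "convert from one ZFS pool to another" means migrating the data, the usual mechanism is a recursive snapshot replicated with send/receive; the pool and snapshot names below are placeholders.

```sh
# Snapshot the whole source pool, then replicate it to the destination pool
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F newpool
```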