batot1 opened this issue 2 months ago (Open)
grep . /sys/module/{icp,zcommon,zfs}/parameters/{zfs_fletcher_4_impl,icp_aes_impl,icp_gcm_impl,zfs_vdev_raidz_impl} 2>/dev/null
I would speculate, offhand, that it's going to return something missing most of the options.
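(For reference, not part of the original exchange: a minimal sketch of how one might test whether the selected implementation matters, assuming these module parameters are writable at runtime, which they normally are via sysfs.)

# show which raidz parity implementations are available and which is selected
cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
# temporarily pin a specific implementation for a test run (not persistent across reboot)
echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl
# restore the default auto-selection afterwards
echo fastest > /sys/module/zfs/parameters/zfs_vdev_raidz_impl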
When I change all disks in pool raid5sas to write-back, I get a pool write speed of about 100-150 MB/s. That is still much lower than the expected maximum write of ~450 MB/s.
What are those write-through vs. write-back settings? ZFS does not need write caching disabled on disks or controllers, nor a reliable write to media for every request. When it needs data to be stable, it explicitly requests a cache flush. Make sure you have not enabled sync=always without a good reason.
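(A quick way to verify this, assuming the pool is named raid5sas as in this thread:)

# check whether any dataset overrides the default sync behavior
zfs get -r sync raid5sas
# if sync=always had been set, the default can be restored with:
zfs set sync=standard raid5sas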
@rincebrain
root@pve2:~# grep . /sys/module/{icp,zcommon,zfs}/parameters/{zfs_fletcher_4_impl,icp_aes_impl,icp_gcm_impl,zfs_vdev_raidz_impl} 2>/dev/null
/sys/module/zfs/parameters/zfs_fletcher_4_impl:[fastest] scalar superscalar superscalar4 sse2 ssse3 avx2
/sys/module/zfs/parameters/icp_aes_impl:cycle [fastest] generic x86_64 aesni
/sys/module/zfs/parameters/icp_gcm_impl:cycle [fastest] avx generic pclmulqdq
/sys/module/zfs/parameters/zfs_vdev_raidz_impl:cycle [fastest] original scalar sse2 ssse3 avx2
@amotin
raid5sas             sync  standard  local
raid5sas/test        sync  standard  inherited from raid5sas
raid5sas/test-zstd6  sync  standard  inherited from raid5sas
raid5sas/video       sync  standard  inherited from raid5sas
test                 sync  standard  default
test/gry             sync  standard  default
All pools/datasets use the standard setting.
Pool "test" is a new raidz pool on the LSI SAS2008 (3x1TB).
Reading your original post, I have no idea what you're trying to describe, other than "slow".
What performance were you seeing before? What performance are you seeing now? What are the exact models of the disks you're seeing this on, and what zpool create command and test methods did you use? What are the models of the disks this runs fine on, and how are they attached?
If it works fine with some disks in the same machine on the same controller and not others, I would investigate what makes those disks different. How are they connected to the controller?
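(As an illustration only, a hedged sketch of the kind of detail being requested; the device paths, pool name, and mountpoint below are placeholders, not the reporter's actual configuration.)

# exact pool layout used, with stable device paths
zpool create raid5sas raidz /dev/disk/by-id/scsi-DISK1 /dev/disk/by-id/scsi-DISK2 /dev/disk/by-id/scsi-DISK3
# a repeatable sequential-write test against a dataset on that pool
# (fio with incompressible data is more representative than dd from /dev/zero when compression is enabled)
fio --name=seqwrite --directory=/raid5sas/test --rw=write --bs=1M --size=8G --ioengine=psync --end_fsync=1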
(Status and Action were clear, but I was testing switching disks off and checking zpool performance.)
Only writes are degraded; reads are about 600 MB/s. All disks were checked with smartctl -t long. This SAS2008 controller with the same disks was working properly a week ago in an older machine with RHEL 8; now, after the update, the degradation is dramatic. I read all the internet examples about RAIDZ write speed degrading to ~20 MB/s, and most of those cases were damaged disks. I also checked, despite smartctl claiming the disks were OK, whether a single disk was the culprit by removing it from the array and running the array without it. Unfortunately, that does not change anything; at most it slows the transfer by another ~8 MB/s.
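(For reference, the SMART long self-test mentioned above, assuming /dev/sdX stands in for one of the pool member disks:)

# start the long self-test (runs in the background on the drive itself)
smartctl -t long /dev/sdX
# later, review overall health and the self-test log
smartctl -a /dev/sdX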
After 48 hours of painstaking research, I found that the reason for the speed degradation is a drastic drop in IOPS on all disks equally, which can be seen in the printout. Each disk is at 99% utilization with a transfer of ~8 MB/s per disk. I did not know whether to report this to the kernel or the ZFS group, because I do not know exactly what causes the degradation; I suspect the software or the kernel. For now, I am writing to you. At the same time, I also wrote to the Proxmox group so that they know about the problem.
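(The per-disk load described here can be observed with, for example:)

# per-vdev bandwidth and IOPS, refreshed every second
zpool iostat -v raid5sas 1
# per-device utilization, queue depth and latency (from the sysstat package)
iostat -x 1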
I have not yet checked how it behaves on "pure" Debian; theoretically that is possible. Theoretically it is also possible to put this SAS2008 into the old machine and check its performance there, but does that make any sense? I would not like to write this SAS2008 (TI) off as a loss, because it satisfies 100% of my needs as a SAS HBA controller.
New fact: the problem does not occur if I create a new raidz pool on the same controller with other (SATA) disks. Write transfer is close to the full rate, 110 MB/s per disk and 220 MB/s per pool, at 100% IOPS usage.
pool raid5sas - write through
pool test_on_sata - write back
When I change all disks in pool raid5sas to write-back, I get a pool write speed of about 100-150 MB/s. That is still much lower than the expected maximum write of ~450 MB/s. Can anybody help me resolve this problem?
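(If the write-back/write-through setting here refers to the per-disk cache rather than a controller setting, it can be inspected and toggled with standard tools; a sketch assuming /dev/sdX is one of the disks behind the HBA:)

# query the Write Cache Enable (WCE) bit on a SAS/SCSI disk
sdparm --get=WCE /dev/sdX
# enable the on-disk write cache (write-back); use --clear=WCE to disable it again
sdparm --set=WCE /dev/sdX
# for SATA disks, hdparm can show and toggle the drive write cache
hdparm -W /dev/sdX
hdparm -W1 /dev/sdX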
Type | Version/Name
Distribution Name | Proxmox
Distribution Version | 6.2.4
Kernel Version | 6.8.4-6.8.8 (both tested)
Architecture | AMD (Ryzen 7)
ZFS Version | 2.2.4