Closed mabod closed 4 years ago
There is optimization work underway for the AES-GCM case in #9749 which should help considerably.
@behlendorf Thanks for the link, I really hope something will happen to improve this.
@mabod I feel your pain. I moved a small, underpowered NAS from FreeNAS to Debian while keeping ZFS, and the encryption speed went from acceptable to horrendous. (I was using GELI encryption on FreeNAS.) So now I have upgraded the whole NAS to a new build with an AMD Ryzen 5 3600, and all cores are at 100 % just from copying a file to an encrypted dataset - I'm torn between laughing and crying :-/
Today I tested zfs native encryption again. I used zfs master v0.8.0-753_g6ed4391da
I am very happy with the results. Native encryption performance has significantly improved. I tested without compression on my Samsung SSD 970 EVO Plus 1TB.
The fio results are as follows:
Without encryption, 4 runs give an average read of 3430 MB/s and write of 1690 MB/s:
read: IOPS=3245, BW=3246MiB/s (3403MB/s)(32.0GiB/10096msec)
write: IOPS=1659, BW=1659MiB/s (1740MB/s)(32.0GiB/19751msec); 0 zone resets
read: IOPS=3147, BW=3148MiB/s (3301MB/s)(32.0GiB/10410msec)
write: IOPS=1577, BW=1578MiB/s (1654MB/s)(32.0GiB/20768msec); 0 zone resets
read: IOPS=3330, BW=3330MiB/s (3492MB/s)(32.0GiB/9839msec)
write: IOPS=1620, BW=1621MiB/s (1699MB/s)(32.0GiB/20220msec); 0 zone resets
read: IOPS=3361, BW=3361MiB/s (3524MB/s)(32.0GiB/9749msec)
write: IOPS=1588, BW=1589MiB/s (1666MB/s)(32.0GiB/20624msec); 0 zone resets
With encryption enabled (the default aes-256-gcm), 4 runs give an average read of 2882 MB/s and write of 1690 MB/s:
read: IOPS=2792, BW=2792MiB/s (2928MB/s)(32.0GiB/11736msec)
write: IOPS=1723, BW=1724MiB/s (1807MB/s)(32.0GiB/19010msec); 0 zone resets
read: IOPS=2750, BW=2751MiB/s (2884MB/s)(32.0GiB/11913msec)
write: IOPS=1619, BW=1619MiB/s (1698MB/s)(32.0GiB/20234msec); 0 zone resets
read: IOPS=2781, BW=2782MiB/s (2917MB/s)(32.0GiB/11780msec)
write: IOPS=1551, BW=1552MiB/s (1627MB/s)(32.0GiB/21117msec); 0 zone resets
read: IOPS=2667, BW=2668MiB/s (2797MB/s)(32.0GiB/12284msec)
write: IOPS=1553, BW=1554MiB/s (1629MB/s)(32.0GiB/21087msec); 0 zone resets
There is basically no difference in the write speed, and read performance with encryption is at 84 % of the unencrypted baseline. That is not bad! Thank you developers for your continuous efforts!
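The averages quoted above can be re-derived from the per-run MB/s figures in the fio output; a quick sketch (numbers copied from the runs above):

```python
# Re-derive the quoted averages from the per-run fio MB/s figures above.
plain_read = [3403, 3301, 3492, 3524]
plain_write = [1740, 1654, 1699, 1666]
enc_read = [2928, 2884, 2917, 2797]
enc_write = [1807, 1698, 1627, 1629]

def avg(xs):
    return sum(xs) / len(xs)

print(f"read  plain/enc: {avg(plain_read):.0f} / {avg(enc_read):.0f} MB/s")
print(f"write plain/enc: {avg(plain_write):.0f} / {avg(enc_write):.0f} MB/s")
print(f"encrypted read ratio: {avg(enc_read) / avg(plain_read):.0%}")
```

This reproduces the ~84 % read ratio and shows the write averages are essentially identical with and without encryption.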
What is your CPU and mem speed when you did this test? Thanks.
I am testing on a Ryzen 7 3700X with 64 GB of DDR4-3200 RAM.
Here are my most recent test results with:
kernel 5.8.14
Samsung SSD 970 EVO Plus 1TB (NVME)
zfs 0.8.5
fio numjobs=320 and size=200M
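The exact job files are not shown in the thread; a minimal fio job file matching the stated numjobs=320 and size=200M might look like this (rw, bs, and directory are assumptions, not taken from the original post):

```ini
; hypothetical sequential-read job; only numjobs and size come from the post
[seqread]
directory=/testpool/testds   ; assumed mountpoint of the dataset under test
rw=read                      ; switch to rw=write for the write runs
bs=1M
size=200M
numjobs=320
group_reporting
```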
The following numbers are reproducible with little variation up and down.
no encryption:
Run status group 0 (all jobs):
READ: bw=5047MiB/s (5292MB/s), 15.8MiB/s-104MiB/s (16.5MB/s-109MB/s), io=62.5GiB (67.1GB), run=1918-12680msec
--
Run status group 0 (all jobs):
WRITE: bw=1478MiB/s (1549MB/s), 4728KiB/s-5705KiB/s (4842kB/s-5842kB/s), io=62.5GiB (67.1GB), run=35896-43314msec
with encryption:
Run status group 0 (all jobs):
READ: bw=2692MiB/s (2823MB/s), 8615KiB/s-274MiB/s (8822kB/s-288MB/s), io=62.5GiB (67.1GB), run=729-23772msec
--
Run status group 0 (all jobs):
WRITE: bw=1472MiB/s (1543MB/s), 4709KiB/s-5951KiB/s (4822kB/s-6094kB/s), io=62.5GiB (67.1GB), run=34413-43491msec
That does not look too bad. In fact, I am not complaining anymore.
PS: The difference from my earlier fio benchmarks is that the older values were measured with 32 GB of RAM, numjobs=1, and size=32G.
@mabod looks like this issue may be closed then.
Yes, the performance is a lot better.
System information
Describe the problem you're observing
With native encryption enabled, the sequential read speed drops to ca. 20 % of the speed I see without encryption. The sequential write speed is not as badly impacted; it stays at ca. 80 %.
At the same time, CPU load goes up to almost 100 %.
The encryption algorithm does not make a big difference. I tested with aes-128-gcm, aes-128-ccm and aes-256-ccm.
I tested on an M.2 SSD, a Samsung 970 EVO Plus 500GB.
Describe how to reproduce the problem
Run fio benchmarks of sequential read and sequential write with these option files.
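Before running fio, an encrypted dataset is needed. A sketch of the setup; the pool, device, and dataset names here are placeholders, not taken from the original report:

```sh
# Hypothetical pool/device/dataset names; encryption properties
# must be set when the dataset is created.
zpool create testpool /dev/nvme0n1
zfs create -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           testpool/enc
# Unencrypted sibling dataset for comparison:
zfs create testpool/plain
```

Running the same fio job against testpool/enc and testpool/plain gives the encrypted vs. unencrypted comparison shown in this thread.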
This is the fio result I see without encryption (just one representative example out of many tries):
While this is the result with encryption (just one representative example out of many tries):
I also tested with kernel 4.19.91. The performance is slightly better and the CPU load slightly lower than with kernel 5.4.6, but the GUI (XFCE) stutters: if I move a window while fio is running, it freezes for a fraction of a second every other second. This does not happen with kernel 5.4.6.
fio output with kernel 4.19.91:
zfs parameters of the dataset:
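For reference, the relevant dataset properties can be listed with a command along these lines (the dataset name is a placeholder):

```sh
# List the properties most relevant to this benchmark for a given dataset.
zfs get encryption,keyformat,compression,recordsize,atime testpool/enc
```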