Closed: linuxsuite closed this issue 4 years ago.
Adding more information from OP:

Using this test (16384 B × 10,000,000 blocks, i.e. roughly 164 GB of zeros written by a single stream):

```
dd if=/dev/zero of=/data/B-203/testfile bs=16384 count=10000000 &
```

Output of `zpool iostat -v B-203 3` on updated r151014:
```
                               capacity     operations    bandwidth
pool                         alloc   free   read  write   read  write
---------------------------  -----  -----  -----  -----  -----  -----
B-203                        45.0G  72.0T      0  4.71K  1.20K   595M
  raidz1                     45.0G  72.0T      0  4.71K  1.20K   595M
    c0t5000C500A617A737d0s0      -      -      0    715      0  77.8M
    c0t5000C500A617B933d0s0      -      -      0    637  1.20K  69.2M
    c0t5000C500A6144D33d0s0      -      -      0    635      0  68.9M
    c0t5000C500A6157F73d0s0      -      -      0    637      0  69.2M
    c0t5000C500A6159A27d0s0      -      -      0    752      0  82.0M
    c0t5000C500A6142257d0s0      -      -      0    804      0  87.8M
    c0t5000C500A6145337d0s0      -      -      0    812      0  88.6M
    c0t5000C500A6159073d0s0      -      -      0    637      0  69.2M
---------------------------  -----  -----  -----  -----  -----  -----

                               capacity     operations    bandwidth
pool                         alloc   free   read  write   read  write
---------------------------  -----  -----  -----  -----  -----  -----
B-203                        48.6G  72.0T      0  4.47K  1.47K   567M
  raidz1                     48.6G  72.0T      0  4.47K  1.47K   567M
    c0t5000C500A617A737d0s0      -      -      0    769      0  84.2M
    c0t5000C500A617B933d0s0      -      -      0    770  1.47K  84.3M
    c0t5000C500A6144D33d0s0      -      -      0    771      0  84.4M
    c0t5000C500A6157F73d0s0      -      -      0    767      0  83.9M
    c0t5000C500A6159A27d0s0      -      -      0    772      0  84.6M
    c0t5000C500A6142257d0s0      -      -      0    769      0  84.2M
    c0t5000C500A6145337d0s0      -      -      0    769      0  84.2M
    c0t5000C500A6159073d0s0      -      -      0    769      0  84.2M
---------------------------  -----  -----  -----  -----  -----  -----

                               capacity     operations    bandwidth
pool                         alloc   free   read  write   read  write
---------------------------  -----  -----  -----  -----  -----  -----
B-203                        5.12G  72.0T      0    324      0  39.4M
  raidz1                     5.12G  72.0T      0    324      0  39.4M
    c0t5000C500A617A737d0s0      -      -      0    139      0  5.90M
    c0t5000C500A617B933d0s0      -      -      0    142      0  5.71M
    c0t5000C500A6144D33d0s0      -      -      0    139      0  6.01M
    c0t5000C500A6157F73d0s0      -      -      0    150      0  5.65M
    c0t5000C500A6159A27d0s0      -      -      0    128      0  5.79M
    c0t5000C500A6142257d0s0      -      -      0    142      0  5.65M
    c0t5000C500A6145337d0s0      -      -      0    144      0  5.78M
    c0t5000C500A6159073d0s0      -      -      0    143      0  5.79M
---------------------------  -----  -----  -----  -----  -----  -----
```
On OmniOS Community Edition:
```
                               capacity     operations    bandwidth
pool                         alloc   free   read  write   read  write
---------------------------  -----  -----  -----  -----  -----  -----
B-203                        5.12G  72.0T      0    288      0  36.1M
  raidz1                     5.12G  72.0T      0    288      0  36.1M
    c0t5000C500A617A737d0s0      -      -      0    151      0  5.34M
    c0t5000C500A617B933d0s0      -      -      0    135      0  5.42M
    c0t5000C500A6144D33d0s0      -      -      0    166      0  5.30M
    c0t5000C500A6157F73d0s0      -      -      0    139      0  5.40M
    c0t5000C500A6159A27d0s0      -      -      0    141      0  5.45M
    c0t5000C500A6142257d0s0      -      -      0    131      0  5.42M
    c0t5000C500A6145337d0s0      -      -      0    153      0  5.30M
    c0t5000C500A6159073d0s0      -      -      0    131      0  5.42M
---------------------------  -----  -----  -----  -----  -----  -----
```
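As a rough sanity check on these numbers (my own arithmetic, assuming the 8-wide raidz1 shown above): each block written costs about 8/7 of its size in raw disk I/O, spread over 8 disks, so per-disk write bandwidth should be roughly the pool bandwidth divided by 7:

```
# Per-disk write bandwidth on an 8-wide raidz1 ≈ pool bandwidth / 7
# (8/7 parity inflation, divided across 8 disks). Values from the samples above.
echo "567 / 7" | bc -l    # ≈ 81 MB/s per disk during the fast r151014 interval
echo "36.1 / 7" | bc -l   # ≈ 5.2 MB/s per disk at the ~36M steady state
```

That is consistent with the ~84M and ~5.4M per-disk figures iostat reports, and the per-disk numbers are uniform, so the slowdown shows up at the pool level rather than on any single misbehaving disk.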
@linuxsuite - what is the compression setting on your pool? If it is just `on`, then the meaning of that value has changed, and the difference in how much work the disks have to do when writing zeros could come down to that. Can you do some testing with fio or iozone to see what the actual throughput is from the application's perspective?
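For example, a single-stream sequential write roughly matching the dd test might look like this (the job name, file size, and paths below are illustrative, not from the thread):

```
# fio: one sequential writer, 16 KiB blocks; fsync at the end so the
# reported bandwidth reflects data actually flushed to the pool.
fio --name=seqwrite --directory=/data/B-203 --rw=write --bs=16k \
    --size=8g --numjobs=1 --ioengine=psync --end_fsync=1 --group_reporting

# iozone equivalent: sequential write/rewrite test with 16 KiB records.
iozone -i 0 -r 16k -s 8g -f /data/B-203/iozone.tmp
```

Re-running the fio job with a higher `--numjobs` would also show whether multiple streams behave differently from one.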
The compression ratio is 1.00x (compression is off); see the full `zfs get all` output below. Someone suggested that the new kernel is optimized for multiple write streams and that a single thread may not perform as well...
```
steve@live-dfs-110:/home/steve$ zfs get all B-203
NAME   PROPERTY              VALUE                  SOURCE
B-203  type                  filesystem             -
B-203  creation              Wed May 30 11:51 2018  -
B-203  used                  15.1T                  -
B-203  available             43.6T                  -
B-203  referenced            15.1T                  -
B-203  compressratio         1.00x                  -
B-203  mounted               yes                    -
B-203  quota                 none                   default
B-203  reservation           none                   default
B-203  recordsize            128K                   default
B-203  mountpoint            /data/B-203            local
B-203  sharenfs              off                    default
B-203  checksum              on                     default
B-203  compression           off                    default
B-203  atime                 off                    local
B-203  devices               off                    local
B-203  exec                  off                    local
B-203  setuid                off                    local
B-203  readonly              off                    local
B-203  zoned                 off                    default
B-203  snapdir               hidden                 default
B-203  aclmode               discard                default
B-203  aclinherit            restricted             default
B-203  canmount              on                     default
B-203  xattr                 on                     default
B-203  copies                1                      default
B-203  version               5                      -
B-203  utf8only              off                    -
B-203  normalization         none                   -
B-203  casesensitivity       sensitive              -
B-203  vscan                 off                    default
B-203  nbmand                off                    default
B-203  sharesmb              off                    default
B-203  refquota              none                   default
B-203  refreservation        none                   default
B-203  primarycache          all                    default
B-203  secondarycache        all                    default
B-203  usedbysnapshots       0                      -
B-203  usedbydataset         15.1T                  -
B-203  usedbychildren        166M                   -
B-203  usedbyrefreservation  0                      -
B-203  logbias               latency                default
B-203  dedup                 off                    default
B-203  mlslabel              none                   default
B-203  sync                  standard               local
B-203  refcompressratio      1.00x                  -
B-203  written               15.1T                  -
B-203  logicalused           15.1T                  -
B-203  logicalreferenced     15.1T                  -
B-203  filesystem_limit      none                   default
B-203  snapshot_limit        none                   default
B-203  filesystem_count      none                   default
B-203  snapshot_count        none                   default
B-203  redundant_metadata    all                    default
```
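One way to check the multi-stream theory would be to run several writers concurrently and compare the aggregate `zpool iostat` bandwidth against the single-stream run. A sketch along the lines of the original dd test (file names and block counts are illustrative):

```
# Four concurrent dd streams totalling the same ~164 GB as the single-stream test.
for i in 1 2 3 4; do
  dd if=/dev/zero of=/data/B-203/testfile.$i bs=16384 count=2500000 &
done
wait
```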
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Howdy!