omniosorg / illumos-omnios

Community developed and maintained version of the OS/Net consolidation
https://omnios.org

Write performance regression compared to r151014? #225

Closed by linuxsuite 4 years ago

linuxsuite commented 6 years ago

Howdy!

I have about 20 machines built on this image, circa 2015:

    OmniOS_Text_r151014.usb-dd

I just noticed that OmniOS is being continued as omniosce.org. I downloaded the latest install image and did some testing, and I got about 1/3 - 1/2 the write performance I was expecting on a simple RAIDZ setup.

I installed fresh images from omnios.omniti.com and omniosce.org based on r22:

    r151022.usb-dd

as well as the r14 image above.

I get about 1/3 - 1/2 the write performance with r22 compared to r14. It is a simple write test using dd, measuring performance with zpool iostat.

Hardware and zpool are identical for each test. I simply swapped out the boot disks and booted a different image.

The hardware is simple: a DELL R710 with an LSI SAS9201-16e HBA.

Thoughts?

I would like to help resolve this if it interests anyone.

r14 suits my purpose well enough, but this issue must affect others.

    -steve
citrus-it commented 6 years ago

Adding more information from OP:

Using this test:

dd if=/dev/zero of=/data/B-203/testfile bs=16384 count=10000000 &

the output of

zpool iostat -v B-203 3

on r151014 (updated):

                                 capacity     operations    bandwidth
pool                         alloc   free   read  write   read  write
---------------------------  -----  -----  -----  -----  -----  -----
B-203                        45.0G  72.0T      0  4.71K  1.20K   595M
  raidz1                     45.0G  72.0T      0  4.71K  1.20K   595M
    c0t5000C500A617A737d0s0      -      -      0    715      0  77.8M
    c0t5000C500A617B933d0s0      -      -      0    637  1.20K  69.2M
    c0t5000C500A6144D33d0s0      -      -      0    635      0  68.9M
    c0t5000C500A6157F73d0s0      -      -      0    637      0  69.2M
    c0t5000C500A6159A27d0s0      -      -      0    752      0  82.0M
    c0t5000C500A6142257d0s0      -      -      0    804      0  87.8M
    c0t5000C500A6145337d0s0      -      -      0    812      0  88.6M
    c0t5000C500A6159073d0s0      -      -      0    637      0  69.2M
---------------------------  -----  -----  -----  -----  -----  -----

                                capacity     operations    bandwidth
pool                         alloc   free   read  write   read  write
---------------------------  -----  -----  -----  -----  -----  -----
B-203                        48.6G  72.0T      0  4.47K  1.47K   567M
  raidz1                     48.6G  72.0T      0  4.47K  1.47K   567M
    c0t5000C500A617A737d0s0      -      -      0    769      0  84.2M
    c0t5000C500A617B933d0s0      -      -      0    770  1.47K  84.3M
    c0t5000C500A6144D33d0s0      -      -      0    771      0  84.4M
    c0t5000C500A6157F73d0s0      -      -      0    767      0  83.9M
    c0t5000C500A6159A27d0s0      -      -      0    772      0  84.6M
    c0t5000C500A6142257d0s0      -      -      0    769      0  84.2M
    c0t5000C500A6145337d0s0      -      -      0    769      0  84.2M
    c0t5000C500A6159073d0s0      -      -      0    769      0  84.2M
---------------------------  -----  -----  -----  -----  -----  -----

                                capacity     operations    bandwidth
pool                         alloc   free   read  write   read  write
---------------------------  -----  -----  -----  -----  -----  -----
B-203                        5.12G  72.0T      0    324      0  39.4M
  raidz1                     5.12G  72.0T      0    324      0  39.4M
    c0t5000C500A617A737d0s0      -      -      0    139      0  5.90M
    c0t5000C500A617B933d0s0      -      -      0    142      0  5.71M
    c0t5000C500A6144D33d0s0      -      -      0    139      0  6.01M
    c0t5000C500A6157F73d0s0      -      -      0    150      0  5.65M
    c0t5000C500A6159A27d0s0      -      -      0    128      0  5.79M
    c0t5000C500A6142257d0s0      -      -      0    142      0  5.65M
    c0t5000C500A6145337d0s0      -      -      0    144      0  5.78M
    c0t5000C500A6159073d0s0      -      -      0    143      0  5.79M
---------------------------  -----  -----  -----  -----  -----  -----

On Community Edition:

                                capacity     operations    bandwidth
pool                         alloc   free   read  write   read  write
---------------------------  -----  -----  -----  -----  -----  -----
B-203                        5.12G  72.0T      0    288      0  36.1M
  raidz1                     5.12G  72.0T      0    288      0  36.1M
    c0t5000C500A617A737d0s0      -      -      0    151      0  5.34M
    c0t5000C500A617B933d0s0      -      -      0    135      0  5.42M
    c0t5000C500A6144D33d0s0      -      -      0    166      0  5.30M
    c0t5000C500A6157F73d0s0      -      -      0    139      0  5.40M
    c0t5000C500A6159A27d0s0      -      -      0    141      0  5.45M
    c0t5000C500A6142257d0s0      -      -      0    131      0  5.42M
    c0t5000C500A6145337d0s0      -      -      0    153      0  5.30M
    c0t5000C500A6159073d0s0      -      -      0    131      0  5.42M
---------------------------  -----  -----  -----  -----  -----  -----
citrus-it commented 6 years ago

@linuxsuite - what is the compression setting on your pool? If it is just "on", then the meaning of that setting changed between releases, and the difference in how much work the disks have to do when writing zeros could come down to that.
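A quick way to confirm the effective setting and the achieved ratio (the property list here is just a suggestion):

# show the compression setting and the achieved ratio for the pool's root dataset
zfs get compression,compressratio B-203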

Can you do some testing with fio or iozone to see what the actual throughput from the application perspective is?
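For example, a single-stream sequential write with fio that roughly matches the dd test might look like this (a sketch: the directory and block size come from the earlier test, the other values are illustrative):

# 16k sequential writes, one writer, with an fsync at the end so buffered data is flushed
fio --name=seqwrite --directory=/data/B-203 --rw=write --bs=16k \
    --size=16g --numjobs=1 --ioengine=psync --end_fsync=1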

linuxsuite commented 6 years ago
Compression ratio is 1.00x.

See below. Someone suggested that the new kernel is optimized for multiple write streams, so a single writer thread may not perform as well...

steve@live-dfs-110:/home/steve$ zfs get all B-203
NAME   PROPERTY              VALUE                  SOURCE
B-203  type                  filesystem             -
B-203  creation              Wed May 30 11:51 2018  -
B-203  used                  15.1T                  -
B-203  available             43.6T                  -
B-203  referenced            15.1T                  -
B-203  compressratio         1.00x                  -
B-203  mounted               yes                    -
B-203  quota                 none                   default
B-203  reservation           none                   default
B-203  recordsize            128K                   default
B-203  mountpoint            /data/B-203            local
B-203  sharenfs              off                    default
B-203  checksum              on                     default
B-203  compression           off                    default
B-203  atime                 off                    local
B-203  devices               off                    local
B-203  exec                  off                    local
B-203  setuid                off                    local
B-203  readonly              off                    local
B-203  zoned                 off                    default
B-203  snapdir               hidden                 default
B-203  aclmode               discard                default
B-203  aclinherit            restricted             default
B-203  canmount              on                     default
B-203  xattr                 on                     default
B-203  copies                1                      default
B-203  version               5                      -
B-203  utf8only              off                    -
B-203  normalization         none                   -
B-203  casesensitivity       sensitive              -
B-203  vscan                 off                    default
B-203  nbmand                off                    default
B-203  sharesmb              off                    default
B-203  refquota              none                   default
B-203  refreservation        none                   default
B-203  primarycache          all                    default
B-203  secondarycache        all                    default
B-203  usedbysnapshots       0                      -
B-203  usedbydataset         15.1T                  -
B-203  usedbychildren        166M                   -
B-203  usedbyrefreservation  0                      -
B-203  logbias               latency                default
B-203  dedup                 off                    default
B-203  mlslabel              none                   default
B-203  sync                  standard               local
B-203  refcompressratio      1.00x                  -
B-203  written               15.1T                  -
B-203  logicalused           15.1T                  -
B-203  logicalreferenced     15.1T                  -
B-203  filesystem_limit      none                   default
B-203  snapshot_limit        none                   default
B-203  filesystem_count      none                   default
B-203  snapshot_count        none                   default
B-203  redundant_metadata    all                    default
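To test the multiple-write-streams theory above, the same test could be run with several dd writers in parallel (a sketch; the file names and the per-writer count are illustrative, chosen so the total data volume matches the original single-stream test):

# hypothetical four-writer variant of the original test
for i in 1 2 3 4; do
    dd if=/dev/zero of=/data/B-203/testfile.$i bs=16384 count=2500000 &
done
wait
# then compare aggregate write bandwidth with: zpool iostat -v B-203 3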

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.