zfs-linux / zfs

Native ZFS for Linux
http://wiki.github.com/behlendorf/zfs/

high value of load average #106

Open witalis opened 13 years ago

witalis commented 13 years ago

Hello,

In my test environment, after loading the ZFS modules I get a high load average:

18:10:48 up 16 min, 2 users, load average: 4.06, 3.89, 2.61

This machine is not really loaded, and vmstat constantly shows:

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 1483848  22540 246936    0    0   133    10   49   66  0  1 97  1  0
 0  0      0 1483848  22540 246936    0    0     0     0  254  492  0  0 100 0  0

If I put some I/O load on ZFS, vmstat does not reflect it in the 'wa' field.

OS: Fedora 14 x86_64, kernel 2.6.35.10-74.fc14.x86_64
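On Linux the load average counts tasks in uninterruptible (D) sleep as well as runnable ones, so blocked ZFS/SPL kernel threads can raise the load without any CPU time or visible 'wa' time. One way to check is to filter `ps -eLo state,comm` output for D-state threads; the sketch below runs that filter over a canned sample (the thread names are illustrative, not taken from this report):

```shell
# Tasks in state "D" (uninterruptible sleep) count toward the Linux
# load average. The sample here stands in for `ps -eLo state,comm`
# output; the ZFS thread names are made up for illustration.
sample='S bash
D txg_sync
D z_wr_iss
R ps'
echo "$sample" | awk '$1 == "D" { print $2 }'
# prints: txg_sync and z_wr_iss
```

Running the same filter against the live `ps` output on an affected box would show which kernel threads are being counted.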

dward commented 13 years ago

I am experiencing the same issue. The load on an idle machine with the tagged version 'GA-01.v02' is around 2. It almost feels artificial, as the performance of the machine does not seem to be affected. If I unload ZFS, the load goes away:

Kernel: Red Hat Enterprise Linux 5 (2.6.32-100.24.1, Oracle kernel)

ZFS modules loaded and storage mounted (load average: 2.12, 2.13, 2.09):
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 26217352 240788 396068    0    0     1    17   22   25  2  1 97  0  0
 1  0      0 26218400 240788 396076    0    0     0     0 15806 14916  1  1 99  0  0
 2  0      0 26218932 240788 396080    0    0     0   392 14973 12501  0  0 100  0  0
 0  0      0 26218396 240788 396088    0    0     0    24 14391 13497  1  0 99  0  0
 2  0      0 26219056 240788 396088    0    0     0     0 10794 11652  1  1 99  0  0
 0  0      0 26219064 240788 396100    0    0     0     0 12917 11871  2  1 97  0  0
 0  0      0 26220328 240788 396108    0    0     0     0 14227 13338  1  1 98  0  0
 1  0      0 26220344 240792 396116    0    0     0   368 16196 13159  1  0 99  0  

ZFS unloaded (load average: 0.58, 1.65, 1.92):
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 26388424 241092 396664    0    0     1    17   26   30  2  1 97  0  0
 1  0      0 26388288 241092 396664    0    0     0    32 18796 20148  1  1 98  0  0
 0  0      0 26388288 241092 396664    0    0     0    12 17784 20190  1  1 98  0  0
 1  0      0 26388288 241092 396664    0    0     0    20 17998 20335  0  1 99  0  0
 1  0      0 26388288 241092 396664    0    0     0     0 17142 20215  1  1 98  0  0
 0  0      0 26388292 241092 396664    0    0     0     0 17340 20185  0  0 99  0  0
 0  0      0 26388360 241092 396664    0    0     0    64 17535 19758  1  1 99  0  0

CPU usage and context switching seem to be similar, so I am unsure what is creating the load. Let me know if there is any more information I can provide.
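A steady load of about 2 alongside normal CPU figures is consistent with a fixed number of threads being counted as active at every sample. A minimal sketch of the 1-minute load-average recurrence (sampled every 5 seconds, as the kernel does) shows the average converging to the number of such threads; the n = 2 here is an assumption chosen to match the observed load:

```shell
# Sketch of the kernel's 1-minute load-average update:
#   load = load * exp(-5/60) + n * (1 - exp(-5/60))
# With n threads permanently counted as active (runnable or in D
# state), the average converges to n even with zero CPU use.
awk 'BEGIN {
    e = exp(-5/60)            # decay factor per 5-second tick
    n = 2                     # assumed number of stuck threads
    load = 0
    for (t = 0; t < 120; t++) # simulate 10 minutes of ticks
        load = load * e + n * (1 - e)
    printf "%.2f\n", load     # prints 2.00
}'
```

The kernel uses fixed-point arithmetic rather than floating point, but the behavior is the same: the threads need only be counted, not actually consuming CPU.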

dapperfu commented 13 years ago

Same here:

server:~# uptime
 11:48:27 up 18:39, 6 users, load average: 21.91, 20.68, 16.06

This is just from moving some files from one pool to another. (I don't hear the hard drives making any noise.) For some reason I/O seems VERY slow, even though a 1G dd write blazed along at 62 MB/s.

The machine idles at a load of around 8-9.