srikanth007m / stressapptest

Automatically exported from code.google.com/p/stressapptest
Apache License 2.0

disk stats are not printed #29

Open GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. I have tried various options for running the disk tests, such as the ones 
below, but the stats do not include disk or data-check performance numbers 
after the test runs.

./stressapptest -d /dev/sdb -s 10 -v 20 --destructive

./stressapptest -d /dev/sdb -s 10 -v 20 --destructive --findfiles
./stressapptest -d /dev/sda1 --write-block-size 4096 -v 20 -s 10 --destructive 
--cache-size 54mb --write-threshold 200000
./stressapptest -d /dev/sda1 --write-block-size 4096 -v 20 -s 10 --destructive 
--cache-size 64mb --write-threshold 200000 --findfiles
./stressapptest -d /dev/sda1 --write-block-size 4096 -s 10 --destructive 
--cache-size 64mb --write-threshold 200000
./stressapptest -f /mnt/sda1/file2 -f /mnt/sda2/file1
./stressapptest -f /mnt/sda1/file2 -f /mnt/sda2/file1 --filesize 20m 
--read-block-size 1024
./stressapptest -d /dev/sda1 -d /dev/sda2
./stressapptest -d /dev/sda
./stressapptest -d /mnt/sda1 --read-block-size 1024
./stressapptest -d /dev/sda1 --read-block-size 1024
./stressapptest -d /dev/sda1 --read-block-size 1024 --verbose 20
./stressapptest -d /dev/sda1 --read-block-size 1024 -v 20
./stressapptest -d /dev/sda1 -f /mnt/sda1/file3 --read-block-size 1024 -v 20
./stressapptest -d /dev/sda1 -f /mnt/sda1/file3 --write-block-size 4096 -v 20 
-s 10
./stressapptest -d /dev/sda1 -f /mnt/sda1/file3 --write-block-size 4096 -v 20 
-s 10 --destructive --cache-size 54mb
./stressapptest -d /dev/sda1 -f /mnt/sda1/file3 --write-block-size 4096 -v 20 
-s 10 --destructive --cache-size 54mb --write-threshold 200000

What is the expected output? What do you see instead?
The Disk and Data Check stats should show nonzero numbers. Instead I see:

Stats: Found 0 hardware incidents
Stats: Completed: 72356.00M in 10.00s 7235.11MB/s, with 0 hardware incidents, 0 
errors
Stats: Memory Copy: 72356.00M at 7235.34MB/s
Stats: File Copy: 0.00M at 0.00MB/s
Stats: Net Copy: 0.00M at 0.00MB/s
Stats: Data Check: 0.00M at 0.00MB/s
Stats: Invert Data: 0.00M at 0.00MB/s
Stats: Disk: 0.00M at 0.00MB/s

What version of the product are you using? On what operating system?
stressapptest-1.0.6_autoconf on Ubuntu
Linux host2 2.6.32-24-generic #39-Ubuntu SMP Wed Jul 28 05:14:15 UTC 2010 
x86_64 GNU/Linux
Ubuntu 10.04.1 LTS

Please provide any additional information below.

Please let me know how we can get the disk and data-check numbers. I have 
tried many options in the 1.0.6 build.

Original issue reported on code.google.com by san.pand...@gmail.com on 1 Oct 2013 at 7:00

GoogleCodeExporter commented 8 years ago
There are a few issues here: 

"Data check" refers to read-only check threads, from the "-c" argument. 
stressapptest checks all data so it's not necessary to use this argument.

You must not use "-d /dev/sda1" and "-f /mnt/sda1/file" at the same time, as 
each requires exclusive access to the referenced disk region: "-d" will 
overwrite the filesystem on sda1 and conflict with the file used by "-f".

However, "-f" should cause a nonzero number in "File copy" and "-d" should 
cause a nonzero number in "Disk". I'll run some of your command lines and see 
if I can reproduce.
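
For example, a sketch of running the two tests separately (device and file 
paths here are placeholders, not your exact setup; note that "-d" with 
--destructive will wipe the target device):

```shell
# Disk test alone against a raw device: should populate the "Disk" stat.
# WARNING: --destructive overwrites everything on /dev/sdb (placeholder path).
./stressapptest -d /dev/sdb -s 10 --destructive

# File test alone, on a mounted filesystem: should populate "File Copy".
# /mnt/sda1/testfile is a placeholder; any writable path on a mounted fs works.
./stressapptest -f /mnt/sda1/testfile -s 10
```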

Original comment by nsand...@chromium.org on 1 Oct 2013 at 7:22

GoogleCodeExporter commented 8 years ago
Hi Nick,
Thanks for the quick replies :).

In the example below I am using the -d option together with -c. No Data Check 
or Disk numbers are printed.

root@host2:/home/strm/stressapptest-1.0.6_autoconf/src# ./stressapptest -d 
/dev/sdb -s 10 -c
Log: Commandline - ./stressapptest -d /dev/sdb -s 10 -c
Stats: SAT revision 1.0.6_autoconf, 64 bit binary
Log: root @ pnqlab032 on Fri Aug 16 11:39:09 IST 2013 from open source release
Log: 1 nodes, 2 cpus.
Log: Defaulting to 2 copy threads
Log: Total 3955 MB. Free 3410 MB. Hugepages 0 MB. Targeting 3566 MB (90%)
Log: Prefer plain malloc memory allocation.
Log: Using memaligned allocation at 0x7f828e66b000.
Stats: Starting SAT, 3566M, 10 seconds
Log: region number 6 exceeds region count 6
Log: Region mask: 0x3f
Stats: Found 0 hardware incidents
Stats: Completed: 17470.00M in 10.04s 1739.53MB/s, with 0 hardware incidents, 0 
errors
Stats: Memory Copy: 17470.00M at 1743.36MB/s
Stats: File Copy: 0.00M at 0.00MB/s
Stats: Net Copy: 0.00M at 0.00MB/s
Stats: Data Check: 0.00M at 0.00MB/s
Stats: Invert Data: 0.00M at 0.00MB/s
Stats: Disk: 0.00M at 0.00MB/s

Status: PASS - please verify no corrected errors

root@host2:/home/strm/stressapptest-1.0.6_autoconf/src#

Original comment by san.pand...@gmail.com on 1 Oct 2013 at 9:55

GoogleCodeExporter commented 8 years ago
"-c" requires an integer number of threads to run, such as "-c 4"

Original comment by nick.j.s...@gmail.com on 1 Oct 2013 at 5:53

GoogleCodeExporter commented 8 years ago
Thanks Nick, the Data Check numbers are populated now. Only the Disk ones 
remain at zero.

Stats: Completed: 99447.00M in 10.20s 9747.40MB/s, with 0 hardware incidents, 0 
errors
Stats: Memory Copy: 31424.00M at 3144.77MB/s
Stats: File Copy: 0.00M at 0.00MB/s
Stats: Net Copy: 0.00M at 0.00MB/s
Stats: Data Check: 68023.00M at 6672.19MB/s
Stats: Invert Data: 0.00M at 0.00MB/s
Stats: Disk: 0.00M at 0.00MB/s

Original comment by san.pand...@gmail.com on 3 Oct 2013 at 7:34

GoogleCodeExporter commented 8 years ago
Hello,
I have a question. I tried to run this command:
#stressapptest -s 172800 -M 1024 -m 8 -i 8 -C 8 -W -n 127.0.0.1 --listen  -f 
/tmp/file1 -f /tmp/file2 -d /dev/sdb -c 4 -l  /var/log/stressapptest.log

But I still cannot get a nonzero "Stats: Disk" result.
How can I get the Disk result?

Original comment by lisou...@gmail.com on 14 Nov 2013 at 9:54
