Closed zhucan closed 1 year ago
@louwrentius Please take a look.
Hi, thanks for your contribution, I'll take a look & test this week.
Test cmd and output:
[root@k8s-2 deeproute]# ps -ef | grep fio
root 2567542 2544066 4 10:35 pts/0 00:00:01 /bin/python3 /usr/local/bin/bench-fio --target /dev/vdb /dev/vdc --type device --iodepth 1 --numjobs 1 8 32 --mode randwrite --output TEST --b 4k 4M --destructive --direct 1
root 2567567 2567542 11 10:35 pts/0 00:00:02 fio --output-format=json --output=TEST/vdb/4k/randwrite-1-1.json /tmp/vdb-tmpjobfile.fio
root 2567568 2567542 11 10:35 pts/0 00:00:02 fio --output-format=json --output=TEST/vdc/4k/randwrite-1-1.json /tmp/vdc-tmpjobfile.fio
root 2567625 2567567 0 10:35 ? 00:00:00 fio --output-format=json --output=TEST/vdb/4k/randwrite-1-1.json /tmp/vdb-tmpjobfile.fio
root 2567628 2567568 0 10:35 ? 00:00:00 fio --output-format=json --output=TEST/vdc/4k/randwrite-1-1.json /tmp/vdc-tmpjobfile.fio
root 2570296 2569628 0 10:35 pts/1 00:00:00 grep --color=auto fio
Hello, sorry for testing this later than promised. I'm testing your example right now on an HP Microserver with an SSD and an HDD as test targets. It seems to be running OK and the graphs render as usual!
I found two issues:
I think the behaviour (testing devices sequentially or in parallel) should be controlled by a boolean command-line option called "--parallel", keeping sequential testing the default. Sequential benchmarking is the 'safe' option: benchmarking high-performance devices in parallel can skew results through CPU load, depending on hardware capability. I'm willing to update the documentation to make this point clear, while also promoting the parallel option for cases where this risk is of no concern. So I'm only asking for the technical implementation of this option.
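The requested flag could look something like the following minimal sketch. This is not the actual bench-fio code; the parser setup and help text are illustrative assumptions, showing only how a boolean `--parallel` option with a sequential default could be declared with argparse:

```python
import argparse

# Hypothetical sketch (not the real bench-fio parser): a boolean "--parallel"
# flag that defaults to False, so sequential testing remains the default.
parser = argparse.ArgumentParser(description="bench-fio sketch")
parser.add_argument(
    "--parallel",
    action="store_true",  # absent -> False (sequential), present -> True (parallel)
    help="Benchmark all target devices in parallel (sequential is the safe default)",
)

args = parser.parse_args(["--parallel"])
print(args.parallel)  # True when the flag is supplied
```

With `action="store_true"` the option takes no value, so existing command lines without the flag keep the current sequential behaviour unchanged.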
Your example benchmark runs six tests per device: three numjobs values times two block-size values. With the default runtime of 1 minute, that should yield an estimate of 6 minutes, but the progress bar and duration estimate show 12 minutes (I test two devices in parallel). The progress indicator is based on the total number of benchmarks; that number should be divided by the number of devices when those benchmarks are executed in parallel. Maybe the "--parallel" variable can determine whether the number of benchmarks is used as-is or divided by the number of devices being tested in parallel.
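The estimate correction described above can be sketched as follows. The function name and signature are hypothetical, not bench-fio's actual internals; it only illustrates dividing the benchmark count by the device count when running in parallel:

```python
# Hypothetical helper (illustrative only): when devices are benchmarked in
# parallel, each device runs its share of the benchmarks concurrently, so the
# wall-clock estimate divides the total benchmark count by the device count.
def estimated_duration_seconds(num_benchmarks, runtime_per_test, num_devices, parallel):
    runs = num_benchmarks // num_devices if parallel else num_benchmarks
    return runs * runtime_per_test

# Numbers from this thread: 12 benchmarks (2 devices x 2 block sizes x
# 3 numjobs values), 60 s per test, 2 devices.
print(estimated_duration_seconds(12, 60, 2, parallel=True))   # 360 s -> 0:06:00
print(estimated_duration_seconds(12, 60, 2, parallel=False))  # 720 s -> 0:12:00
```

This matches the output shown later in the thread: 12 benchmarks with `--parallel` produce an estimated duration of 0:06:00.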
Would you be OK to make these changes when convenient for you and update the PR?
Thanks for your response, you are right. I will update the PR later.
[root@k8s-2 fio-plot]# /usr/local/bin/bench-fio --target /dev/vdb /dev/vdc --type device --iodepth 1 --numjobs 1 8 32 --mode randwrite --output TEST --b 4k 4M --destructive --direct 1 --parallel
█████████████████████████████████████████████████
+++ FIO BENCHMARK SCRIPT +++
-------------------------------------------------
Estimated duration : 0:06:00
Number of benchmarks : 12
Test target(s) : /dev/vdb /dev/vdc
Target type : device
I/O Engine : libaio
Test mode (read/write) : randwrite
Block size : 4k 4M
IOdepth to be tested : 1
NumJobs to be tested : 1 8 32
Time duration per test (s) : 60
Benchmark loops : 1
Direct I/O : 1
Output folder : TEST
Log interval of perf data (ms): 1000
Invalidate buffer cache : 1
Allow destructive writes : True
Check remote timeout (s) : 2
Testing devices in parallel : True
-------------------------------------------------
█████████████████████████████████████████████████
100% |█████████████████████████| [0:06:14, 0:00:00]
@louwrentius Please review again.
Thank you for this update! I'm testing it right now and will merge it into master later today and update the PyPI package.
fix: #61