louwrentius / fio-plot

Create charts from FIO storage benchmark tool output
BSD 3-Clause "New" or "Revised" License

bench-fio `--size` is overridden by the implicit `--runtime=60` #137

Open nobuto-m opened 9 months ago

nobuto-m commented 9 months ago

If `--size 128G` is specified, the test is supposed to take some time to complete. However, bench-fio ends after 60 seconds because of the implicit `--runtime=60` (the default).

# time bench-fio \
    --type device --target /dev/sda \
    --size 128G \
    --iodepth 1 --numjobs 1 --mode write -b 4M \
    --output output/test \
    --destructive
 
                   Bench-fio                    
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┓
┃ Setting                        ┃ value       ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━┩
│ Estimated Duration             │ 0:01:00     │
│ Number of benchmarks           │ 1           │
│ Test target(s)                 │ /dev/sda    │
│ Target type                    │ device      │
│ I/O Engine                     │ libaio      │
│ Test mode (read/write)         │ write       │
│ Specified test data size       │ 128G        │
│ Block size                     │ 4M          │
│ IOdepth to be tested           │ 1           │
│ NumJobs to be tested           │ 1           │
│ Time duration per test (s)     │ 60          │
│ Benchmark loops                │ 1           │
│ Direct I/O                     │ 1           │
│ Output folder                  │ output/test │
│ Log interval of perf data (ms) │ 1000        │
│ Invalidate buffer cache        │ 1           │
│ Allow destructive writes       │ True        │
│ Check remote timeout (s)       │ 2           │
└────────────────────────────────┴─────────────┘
/dev/sda ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00

real    1m0.610s
user    0m1.838s
sys 0m5.033s

A workaround is to pass --runtime 0 explicitly.
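Applied to the invocation above, the workaround looks like this (same destructive benchmark as before; `--runtime 0` removes the 60-second cap so the run is bounded by `--size` alone):

```shell
# Size-bound variant of the benchmark above: --runtime 0 disables the
# implicit 60-second limit, so the full 128G is actually written.
time bench-fio \
    --type device --target /dev/sda \
    --size 128G \
    --runtime 0 \
    --iodepth 1 --numjobs 1 --mode write -b 4M \
    --output output/test \
    --destructive
```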

louwrentius commented 9 months ago

Thanks for your report!

You specify a device but also a size; I'm curious why you would do that. Not to criticize, just so I can learn.

The size parameter works, but it is mostly intended for file targets. Maybe I should find a more elegant solution than runtime=0; at the least, I should document this behavior for now.

You may also be interested in the --entire-device parameter, depending on your needs.
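For reference, this matches fio's own semantics: `runtime` caps the job's wall-clock time, and without `time_based` the job ends at whichever limit is hit first, the runtime or the requested `size`. A sketch of a roughly equivalent raw fio invocation (assumed for illustration; the exact job bench-fio generates may differ):

```shell
# With both size and runtime set, fio stops at whichever comes first:
# 60 seconds elapsed or 128G written. Dropping --runtime makes the job
# size-bound, which is the behavior the reporter expected.
fio --name=write-test \
    --filename=/dev/sda \
    --rw=write --bs=4M \
    --size=128G \
    --runtime=60 \
    --ioengine=libaio --iodepth=1 --numjobs=1 \
    --direct=1
```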

nobuto-m commented 9 months ago

> The size parameter works, but it is mostly intended for file targets. Maybe I should find a more elegant solution than runtime=0; at the least, I should document this behavior for now.
>
> You may also be interested in the --entire-device parameter, depending on your needs.

Yes, --entire-device worked for me as I used it in #136.

> You specify a device but also a size; I'm curious why you would do that. Not to criticize, just so I can learn.

After learning the characteristics of this SSD's peaky behavior by running it with --entire-device, I wanted to run multiple benchmarks with different scenarios. --entire-device took two hours to complete on this specific SSD, even with a sequential write scenario. To try several different parameter combinations before applying them to the entire device, size-bound testing let me iterate quickly.

[chart: entire device]

[chart: size bound, 128GB out of 256GB]

nobuto-m commented 9 months ago

One advantage of size-bound over time-bound testing is that the graph makes it clear which device completed the same amount of writes sooner.

In the following example, the chart shows the different characteristics of the two SSDs, and it is clear that the orange one was far faster than the blue one at completing the same 128GB of sequential writes.

[chart: multiple devices]