Closed · opened by yygcode, closed 3 months ago
Please acknowledge the following before creating a ticket
Description of the bug: fio iolog replay does not report disk stats correctly; `util` is always zero. It can be fixed with the patch below:
```diff
$ git diff
diff --git a/iolog.c b/iolog.c
index f52a9a80..c065e421 100644
--- a/iolog.c
+++ b/iolog.c
@@ -812,6 +812,8 @@ bool init_iolog(struct thread_data *td)
 	if (!ret)
 		td_verror(td, EINVAL, "failed initializing iolog");
 
+	init_disk_util(td);
+
 	return ret;
 }
```
Environment:
```
$ uname -a
Linux x1 6.3.0 #1 SMP PREEMPT_DYNAMIC Mon May 1 13:09:40 CST 2023 x86_64 GNU/Linux
```
fio version:
```
$ ./build/fio --version
fio-3.36
$ git log --oneline -1
7d6c99e9 (HEAD -> master, origin/master, origin/HEAD) docs: fix documentation for rate_cycle
```
Reproduction steps
Step 1: Generate the iolog
```bash
#!/bin/bash

gen_iolog() {
	local dev="./test.file"
	local val=""

	echo "fio version 2 iolog"
	echo "$dev add"
	echo "$dev open"

	SIZE=$((1024 * 1024))
	val=$(for ((x = 0; x < SIZE; x += 512)); do
		echo "$dev write $x 512"
	done)
	# Repeat the 1 MiB write pattern four times (newline-joined so each
	# iolog entry stays on its own line).
	val="$val"$'\n'"$val"$'\n'"$val"$'\n'"$val"
	echo "$val"
	echo "$dev close"
}

gen_iolog > test.iolog
```
Save the script above as gen_iolog.sh and run it.
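For a quick sanity check of the generated file, the version-2 iolog header and entry count can be verified with standard tools. The snippet below inlines a small sample rather than re-running the generator, so the exact lines are illustrative:

```shell
# Minimal sanity check for a version-2 iolog (sample inlined as a here-doc).
cat > /tmp/sample.iolog <<'EOF'
fio version 2 iolog
./test.file add
./test.file open
./test.file write 0 512
./test.file write 512 512
./test.file close
EOF

# The first line must announce the format version.
head -1 /tmp/sample.iolog | grep -q '^fio version 2 iolog$' && echo "header ok"

# Count replayable I/O entries (second field is the action).
awk '$2 == "read" || $2 == "write" { n++ } END { print n " io entries" }' /tmp/sample.iolog
```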
Step 2: Write the config
```
$ cat test.cfg
; TEST IOLOG
[global]
direct=1
filename=./test.file
filesize=128M
ioengine=libaio

[job1]
rw=write
iodepth=1
runtime=10
read_iolog=./test.iolog
```
Step 3: Run fio
```
$ sudo ./build/fio_orig test.cfg
job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
job1: (groupid=0, jobs=1): err= 0: pid=26427: Wed Mar 20 00:00:51 2024
  write: IOPS=54.3k, BW=26.5MiB/s (27.8MB/s)(4096KiB/151msec); 0 zone resets
    slat (usec): min=12, max=1831, avg=16.55, stdev=23.09
    clat (nsec): min=936, max=64511, avg=1402.97, stdev=1053.29
     lat (usec): min=13, max=1842, avg=17.96, stdev=23.34
    clat percentiles (nsec):
     |  1.00th=[  948],  5.00th=[  956], 10.00th=[  956], 20.00th=[ 1012],
     | 30.00th=[ 1020], 40.00th=[ 1020], 50.00th=[ 1032], 60.00th=[ 1080],
     | 70.00th=[ 1144], 80.00th=[ 1896], 90.00th=[ 2512], 95.00th=[ 2800],
     | 99.00th=[ 2928], 99.50th=[ 3600], 99.90th=[ 6560], 99.95th=[14272],
     | 99.99th=[64256]
  lat (nsec)   : 1000=15.58%
  lat (usec)   : 2=65.34%, 4=18.75%, 10=0.26%, 20=0.04%, 50=0.02%
  lat (usec)   : 100=0.01%
  cpu          : usr=18.00%, sys=26.67%, ctx=8198, majf=0, minf=17
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,8192,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=26.5MiB/s (27.8MB/s), 26.5MiB/s-26.5MiB/s (27.8MB/s-27.8MB/s), io=4096KiB (4194kB), run=151-151msec

Disk stats (read/write):
  nvme0n1: ios=0/0, sectors=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
```
After applying the patch from the bug description:
```
$ sudo ./build/fio_fixed test.cfg
job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
job1: (groupid=0, jobs=1): err= 0: pid=27111: Wed Mar 20 00:01:44 2024
  write: IOPS=58.5k, BW=28.6MiB/s (30.0MB/s)(4096KiB/140msec); 0 zone resets
    slat (usec): min=12, max=1764, avg=15.54, stdev=19.98
    clat (nsec): min=936, max=20875, avg=1206.88, stdev=441.82
     lat (usec): min=13, max=1774, avg=16.74, stdev=20.11
    clat percentiles (nsec):
     |  1.00th=[  948],  5.00th=[  964], 10.00th=[  996], 20.00th=[ 1012],
     | 30.00th=[ 1048], 40.00th=[ 1064], 50.00th=[ 1064], 60.00th=[ 1080],
     | 70.00th=[ 1176], 80.00th=[ 1352], 90.00th=[ 1512], 95.00th=[ 1816],
     | 99.00th=[ 2736], 99.50th=[ 3120], 99.90th=[ 5024], 99.95th=[ 7072],
     | 99.99th=[20864]
  lat (nsec)   : 1000=10.99%
  lat (usec)   : 2=84.46%, 4=4.35%, 10=0.20%, 50=0.01%
  cpu          : usr=12.95%, sys=29.50%, ctx=8197, majf=0, minf=18
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,8192,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=28.6MiB/s (30.0MB/s), 28.6MiB/s-28.6MiB/s (30.0MB/s-30.0MB/s), io=4096KiB (4194kB), run=140-140msec

Disk stats (read/write):
  nvme0n1: ios=0/7795, sectors=0/7795, merge=0/0, ticks=0/81, in_queue=81, util=54.62%
```
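The "Disk stats" section is derived from /proc/diskstats samples taken around the run, which is why skipping `init_disk_util()` leaves every counter at zero. A simplified sketch of the same parse, run on a hand-constructed sample line whose values mirror the fixed run (field positions per the kernel's iostats documentation; `in_queue` and `util` come from later fields and are omitted here):

```shell
# /proc/diskstats fields: major minor name, reads completed, reads merged,
# sectors read, ms reading, writes completed, writes merged, sectors written,
# ms writing, ... The sample values below are illustrative, not captured.
sample='259 0 nvme0n1 0 0 0 0 7795 0 7795 81 0 81 81'
echo "$sample" | awk '{ print "ios=" $4 "/" $8, "sectors=" $6 "/" $10, "ticks=" $7 "/" $11 }'
# → ios=0/7795 sectors=0/7795 ticks=0/81
```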