jvinolas opened this issue 1 year ago
This is our setup:
And versions:
```
dkms status
dm-writeboost/2.2.13, 5.15.0-1029-oracle, x86_64: installed
dm-writeboost/2.2.13, 5.15.0-1030-oracle, x86_64: installed

writeboost-tools: VERSION 1.20160718
```
We are using a random I/O fio workload:
```
fio --filename=/mnt/test.fio --size=4GB --direct=1 --rw=randrw --bs=4k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=iops-test-job --eta-newline=1
```
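For reference, the same job could also be pointed straight at the mapped block device instead of a file on the mounted filesystem, which would take the filesystem out of the measurement. This is only a sketch of a variant we have not run here (the /dev/mapper/cached path is the writeboost mapping described further down), and it would clobber whatever is on that device:

```bash
# Hypothetical variant: same 4k random read/write job, but issued directly
# against the device-mapper target rather than against a file on the mounted fs.
# Destructive to any data/filesystem on the target -- scratch setups only.
fio --filename=/dev/mapper/cached --size=4GB --direct=1 --rw=randrw --bs=4k \
    --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based \
    --group_reporting --name=iops-test-job-raw --eta-newline=1
```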
Raw backing device performance mounted in /mnt:
```
iops-test-job: (groupid=0, jobs=4): err= 0: pid=17716: Mon Mar 27 09:59:55 2023
  read: IOPS=50.8k, BW=198MiB/s (208MB/s)(23.3GiB/120506msec)
    slat (nsec): min=1823, max=18629k, avg=37180.46, stdev=177245.02
    clat (usec): min=329, max=1099.8k, avg=9104.75, stdev=22438.53
     lat (usec): min=408, max=1099.8k, avg=9142.10, stdev=22442.78
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    5], 20.00th=[    6],
     | 30.00th=[    6], 40.00th=[    7], 50.00th=[    7], 60.00th=[    8],
     | 70.00th=[    9], 80.00th=[   10], 90.00th=[   11], 95.00th=[   13],
     | 99.00th=[   65], 99.50th=[  144], 99.90th=[  338], 99.95th=[  451],
     | 99.99th=[  751]
   bw (  KiB/s): min=159336, max=479936, per=100.00%, avg=203938.77, stdev=7146.38, samples=960
   iops        : min=39834, max=119984, avg=50984.49, stdev=1786.59, samples=960
  write: IOPS=50.7k, BW=198MiB/s (208MB/s)(23.3GiB/120506msec); 0 zone resets
    slat (usec): min=2, max=18722, avg=37.48, stdev=176.24
    clat (usec): min=398, max=1248.4k, avg=10991.29, stdev=22484.09
     lat (usec): min=403, max=1248.4k, avg=11028.95, stdev=22494.92
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    5], 20.00th=[    6],
     | 30.00th=[    7], 40.00th=[    7], 50.00th=[    8], 60.00th=[    9],
     | 70.00th=[   10], 80.00th=[   12], 90.00th=[   17], 95.00th=[   22],
     | 99.00th=[   71], 99.50th=[  148], 99.90th=[  347], 99.95th=[  456],
     | 99.99th=[  651]
   bw (  KiB/s): min=157520, max=481760, per=100.00%, avg=203752.33, stdev=7191.61, samples=960
   iops        : min=39380, max=120440, avg=50937.85, stdev=1797.89, samples=960
  lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.24%, 4=6.76%, 10=71.50%, 20=17.74%, 50=2.57%
  lat (msec)   : 100=0.44%, 250=0.54%, 500=0.18%, 750=0.03%, 1000=0.01%
  lat (msec)   : 2000=0.01%
  cpu          : usr=3.18%, sys=22.57%, ctx=1521822, majf=0, minf=69
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=6119173,6113707,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
   READ: bw=198MiB/s (208MB/s), 198MiB/s-198MiB/s (208MB/s-208MB/s), io=23.3GiB (25.1GB), run=120506-120506msec
  WRITE: bw=198MiB/s (208MB/s), 198MiB/s-198MiB/s (208MB/s-208MB/s), io=23.3GiB (25.0GB), run=120506-120506msec

Disk stats (read/write):
    dm-1: ios=6119173/6113730, merge=0/0, ticks=17572048/28593512, in_queue=46165560, util=100.00%, aggrios=1529793/1528430, aggrmerge=0/0, aggrticks=4391872/7149452, aggrin_queue=11541324, aggrutil=100.00%
  md4: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
  sdu: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdb: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sds: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdh: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  md2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
  sdf: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdm: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdl: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdp: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  md3: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
  sdk: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdc: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdj: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdr: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  md1: ios=6119173/6113722, merge=0/0, ticks=17567488/28597808, in_queue=46165296, util=100.00%, aggrios=1529242/1526902, aggrmerge=551/1527, aggrticks=4361038/7104926, aggrin_queue=11465964, aggrutil=100.00%
  sdd: ios=1527969/1527004, merge=702/1841, ticks=3501939/4870269, in_queue=8372207, util=99.66%
  sdi: ios=1530579/1526535, merge=276/1336, ticks=6226262/12028518, in_queue=18254780, util=100.00%
  sdq: ios=1528970/1525936, merge=894/2315, ticks=4930619/7592824, in_queue=12523443, util=99.86%
  sdg: ios=1529451/1528136, merge=332/619, ticks=2785333/3928094, in_queue=6713427, util=99.66%
```
Raw caching device performance mounted in /mnt:
```
iops-test-job: (groupid=0, jobs=4): err= 0: pid=17619: Mon Mar 27 09:54:17 2023
  read: IOPS=101k, BW=396MiB/s (415MB/s)(46.4GiB/120024msec)
    slat (nsec): min=1503, max=6327.8k, avg=17312.56, stdev=75036.85
    clat (usec): min=351, max=59929, avg=4613.67, stdev=2826.55
     lat (usec): min=497, max=59935, avg=4631.16, stdev=2831.15
    clat percentiles (usec):
     |  1.00th=[ 1614],  5.00th=[ 2147], 10.00th=[ 2507], 20.00th=[ 2999],
     | 30.00th=[ 3392], 40.00th=[ 3720], 50.00th=[ 4080], 60.00th=[ 4490],
     | 70.00th=[ 4948], 80.00th=[ 5604], 90.00th=[ 6652], 95.00th=[ 7898],
     | 99.00th=[18744], 99.50th=[23462], 99.90th=[31327], 99.95th=[33817],
     | 99.99th=[38011]
   bw (  KiB/s): min=345256, max=512080, per=100.00%, avg=405746.00, stdev=6716.15, samples=956
   iops        : min=86314, max=128020, avg=101436.48, stdev=1679.03, samples=956
  write: IOPS=101k, BW=396MiB/s (415MB/s)(46.4GiB/120024msec); 0 zone resets
    slat (nsec): min=1664, max=17004k, avg=17960.55, stdev=75999.86
    clat (usec): min=377, max=83902, avg=5458.70, stdev=3618.51
     lat (usec): min=508, max=83907, avg=5476.84, stdev=3625.82
    clat percentiles (usec):
     |  1.00th=[ 1729],  5.00th=[ 2311], 10.00th=[ 2704], 20.00th=[ 3228],
     | 30.00th=[ 3654], 40.00th=[ 4080], 50.00th=[ 4555], 60.00th=[ 5080],
     | 70.00th=[ 5800], 80.00th=[ 6915], 90.00th=[ 8848], 95.00th=[10945],
     | 99.00th=[22152], 99.50th=[27395], 99.90th=[35914], 99.95th=[38536],
     | 99.99th=[43779]
   bw (  KiB/s): min=346328, max=515721, per=100.00%, avg=405601.75, stdev=6701.52, samples=956
   iops        : min=86582, max=128930, avg=101400.41, stdev=1675.38, samples=956
  lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.03%
  lat (msec)   : 2=2.82%, 4=39.97%, 10=52.45%, 20=3.66%, 50=1.06%
  lat (msec)   : 100=0.01%
  cpu          : usr=6.26%, sys=43.45%, ctx=3589695, majf=0, minf=75
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=12158299,12154051,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
   READ: bw=396MiB/s (415MB/s), 396MiB/s-396MiB/s (415MB/s-415MB/s), io=46.4GiB (49.8GB), run=120024-120024msec
  WRITE: bw=396MiB/s (415MB/s), 396MiB/s-396MiB/s (415MB/s-415MB/s), io=46.4GiB (49.8GB), run=120024-120024msec

Disk stats (read/write):
  md100: ios=12150696/12146334, merge=0/0, ticks=20782904/29676060, in_queue=50458964, util=100.00%, aggrios=3038038/3035159, aggrmerge=1536/3357, aggrticks=5215879/7423800, aggrin_queue=12639679, aggrutil=99.96%
  sdo: ios=3037644/3034162, merge=1793/4015, ticks=4922022/6648328, in_queue=11570351, util=99.94%
  sde: ios=3037544/3035054, merge=1858/4274, ticks=6311280/9757117, in_queue=16068397, util=99.96%
  sdn: ios=3038549/3037893, merge=458/630, ticks=3742331/3876633, in_queue=7618965, util=99.94%
  sdt: ios=3038417/3033527, merge=2036/4511, ticks=5887883/9413123, in_queue=15301006, util=99.96%
```
And now with the writeboost cache mounted. This is /etc/writeboosttab:
```
cached /dev/mapper/vg_nfs-lv_nfs /dev/md100 writeback_threshold=70,sync_data_interval=3600
```
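For reference, the tunables that appear in writeboosttab (and in the dmsetup status output below) can, as far as we understand, also be changed on a live mapping through the device-mapper message interface; this is just a sketch using the values from our config, not something we have exercised on this box:

```bash
# Sketch: adjust dm-writeboost tunables on the live "cached" mapping.
# The keys mirror the writeboosttab entry above; values are examples.
dmsetup message cached 0 writeback_threshold 70   # begin writeback at 70% dirty
dmsetup message cached 0 sync_data_interval 3600  # sync dirty data every 3600 s

# The target reports the current tunables back at the end of its status line.
dmsetup status cached
```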
```
# dd if=/dev/zero of=/dev/md100 bs=512 count=1
1+0 records in
1+0 records out
512 bytes copied, 0.00332719 s, 154 kB/s

# writeboost
<13>Mar 27 10:01:54 writeboost: mapping cached
<13>Mar 27 10:02:41 writeboost: cached mapped.

# dmsetup status
cached: 0 342725468160 writeboost 127 693804810 5463030 2 1 1 0 0 261 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 writeback_threshold 70 nr_cur_batched_writeback 1 sync_data_interval 3600 update_sb_record_interval 0 read_cache_threshold 0
vg_nfs-lv_nfs: 0 85898280960 linear
vg_nfs-lv_nfs: 85898280960 85898280960 linear
vg_nfs-lv_nfs: 171796561920 85898280960 linear
vg_nfs-lv_nfs: 257694842880 85030625280 linear
```
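To correlate those counters with the benchmark, the status line can simply be sampled once a second while fio is running (a trivial sketch; the field meanings are whatever the loaded dm-writeboost module reports):

```bash
# Timestamped 1-second samples of the writeboost status line, so the cache
# counters can be lined up against the fio run afterwards.
while true; do
    printf '%s %s\n' "$(date +%T)" "$(dmsetup status cached)"
    sleep 1
done | tee /tmp/writeboost-status.log
```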
And these are the fio results with the cached device mounted at /mnt:
```
iops-test-job: (groupid=0, jobs=4): err= 0: pid=18203: Mon Mar 27 10:07:30 2023
  read: IOPS=46.7k, BW=182MiB/s (191MB/s)(21.4GiB/120004msec)
    slat (usec): min=3, max=78504, avg=36.47, stdev=384.67
    clat (usec): min=716, max=95759, avg=11724.95, stdev=5307.31
     lat (usec): min=722, max=95763, avg=11761.54, stdev=5324.95
    clat percentiles (usec):
     |  1.00th=[ 4293],  5.00th=[ 5800], 10.00th=[ 6718], 20.00th=[ 7701],
     | 30.00th=[ 8717], 40.00th=[ 9634], 50.00th=[10552], 60.00th=[11600],
     | 70.00th=[12911], 80.00th=[14746], 90.00th=[18220], 95.00th=[21890],
     | 99.00th=[30802], 99.50th=[34341], 99.90th=[43254], 99.95th=[47449],
     | 99.99th=[67634]
   bw (  KiB/s): min=133512, max=235829, per=100.00%, avg=187086.92, stdev=4402.79, samples=956
   iops        : min=33378, max=58957, avg=46771.65, stdev=1100.69, samples=956
  write: IOPS=46.7k, BW=182MiB/s (191MB/s)(21.4GiB/120004msec); 0 zone resets
    slat (usec): min=2, max=65011, avg=46.33, stdev=316.61
    clat (usec): min=723, max=92986, avg=10118.39, stdev=4986.29
     lat (usec): min=728, max=92992, avg=10164.85, stdev=5000.87
    clat percentiles (usec):
     |  1.00th=[ 3163],  5.00th=[ 4621], 10.00th=[ 5276], 20.00th=[ 6587],
     | 30.00th=[ 7308], 40.00th=[ 8094], 50.00th=[ 8979], 60.00th=[ 9896],
     | 70.00th=[11207], 80.00th=[12911], 90.00th=[16057], 95.00th=[19792],
     | 99.00th=[28443], 99.50th=[31851], 99.90th=[40109], 99.95th=[44303],
     | 99.99th=[62653]
   bw (  KiB/s): min=135032, max=235118, per=100.00%, avg=186988.64, stdev=4375.59, samples=956
   iops        : min=33758, max=58779, avg=46747.09, stdev=1093.90, samples=956
  lat (usec)   : 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.04%, 4=1.38%, 10=51.01%, 20=41.62%, 50=5.93%
  lat (msec)   : 100=0.02%
  cpu          : usr=2.72%, sys=24.73%, ctx=382240, majf=0, minf=67
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=5605193,5602096,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
   READ: bw=182MiB/s (191MB/s), 182MiB/s-182MiB/s (191MB/s-191MB/s), io=21.4GiB (23.0GB), run=120004-120004msec
  WRITE: bw=182MiB/s (191MB/s), 182MiB/s-182MiB/s (191MB/s-191MB/s), io=21.4GiB (22.9GB), run=120004-120004msec

Disk stats (read/write):
    dm-0: ios=5604887/5601840, merge=0/0, ticks=5443692/241780, in_queue=5685472, util=100.00%, aggrios=2820929/1252730, aggrmerge=0/0, aggrticks=2688984/1384044, aggrin_queue=4073028, aggrutil=100.00%
    dm-1: ios=224/2461322, merge=0/0, ticks=156/2649680, in_queue=2649836, util=24.16%, aggrios=56/615044, aggrmerge=0/0, aggrticks=39/662004, aggrin_queue=662043, aggrutil=24.15%
  md4: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
  sdu: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdb: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sds: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdh: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  md2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
  sdf: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdm: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdl: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdp: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  md3: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
  sdk: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdc: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdj: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  sdr: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
  md1: ios=224/2460176, merge=0/0, ticks=156/2648016, in_queue=2648172, util=24.15%, aggrios=56/563970, aggrmerge=0/51073, aggrticks=37/533277, aggrin_queue=533314, aggrutil=22.54%
  sdd: ios=60/563518, merge=0/50970, ticks=37/486542, in_queue=486580, util=22.32%
  sdi: ios=57/562854, merge=0/51075, ticks=46/568030, in_queue=568076, util=22.42%
  sdq: ios=49/565027, merge=0/51035, ticks=29/553755, in_queue=553783, util=22.27%
  sdg: ios=58/564483, merge=0/51214, ticks=37/524783, in_queue=524820, util=22.54%
  md100: ios=5641635/44139, merge=0/0, ticks=5377812/118408, in_queue=5496220, util=100.00%, aggrios=1410408/11034, aggrmerge=0/0, aggrticks=1353057/29697, aggrin_queue=1382755, aggrutil=99.43%
  sdo: ios=1412121/11035, merge=0/0, ticks=1309690/28852, in_queue=1338543, util=99.20%
  sde: ios=1410073/11034, merge=1/0, ticks=1403988/30899, in_queue=1434886, util=99.43%
  sdn: ios=1409218/11035, merge=0/0, ticks=1345555/29098, in_queue=1374654, util=99.35%
  sdt: ios=1410222/11035, merge=0/0, ticks=1352998/29941, in_queue=1382938, util=99.28%
```
And this is `sar -d 1` while executing fio on the cached mount:
```
sar -d 1

Average:    DEV        tps       rkB/s      wkB/s   dkB/s  areq-sz  aqu-sz  await  %util
Average:    loop0      0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    loop1      0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    loop2      0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    loop3      0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    loop4      0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    loop5      0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    loop6      0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    loop7      0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sda        7.46      0.00      34.77    0.00     4.66     0.00   0.66   0.17
Average:    loop8      0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sdb        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    md4        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sdc        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sdd     2538.42      0.46   37036.46    0.00    14.59     2.49   0.98  18.83
Average:    sde     6823.08  67416.31   49841.85    0.00    17.19     7.05   1.03  90.65
Average:    sdf        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sdg     2539.12      1.38   37055.69    0.00    14.59     2.72   1.07  19.77
Average:    sdh        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    md1    37073.23      3.54  148289.38    0.00     4.00   108.35   2.92  21.46
Average:    md3        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    md2        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sdi     2540.58      1.08   37032.15    0.00    14.58     2.99   1.18  20.02
Average:    sdj        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sdk        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sdl        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sdm        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sdn     6793.19  67280.62   49845.23    0.00    17.24     6.69   0.99  92.05
Average:    sdo     6810.27  67355.23   49871.23    0.00    17.21     6.55   0.96  90.75
Average:    sdp        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sdq     2569.96      0.62   37147.54    0.00    14.45     2.90   1.13  19.65
Average:    sdr        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sds        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sdt     6811.15  67371.69   49845.08    0.00    17.21     6.74   0.99  91.23
Average:    loop9      0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    sdu        0.00      0.00       0.00    0.00     0.00     0.00   0.00   0.00
Average:    dm-1   37085.73      3.54  148299.23    0.00     4.00   108.37   2.92  21.49
Average:    md100  27238.46 269407.54  199403.38    0.00    17.21    26.70   0.98  99.83
Average:    dm-0   75989.85 106111.23  197847.69    0.00     4.00    27.33   0.36  98.78
```
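The same view can be narrowed to just the devices that matter with iostat from sysstat; this is only a convenience sketch, not output we captured:

```bash
# Extended per-device stats at a 1-second interval, limited to the cache
# array (md100), the backing array (md1) and the two dm nodes seen in sar.
iostat -xd 1 md100 md1 dm-0 dm-1
```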
It looks like we are not getting the caching device's performance; instead we are seeing more or less the same performance as the backing device. This is the lsblk output:
```
sdb                   8:16   0    10T  0 disk
└─md4                 9:4    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sdc                   8:32   0    10T  0 disk
└─md3                 9:3    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sdd                   8:48   0    10T  0 disk
└─md1                 9:1    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sde                   8:64   0   667G  0 disk
└─md100               9:100  0   2.6T  0 raid0
  └─cached          253:0    0 159.6T  0 dm    /mnt
sdf                   8:80   0    10T  0 disk
└─md2                 9:2    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sdg                   8:96   0    10T  0 disk
└─md1                 9:1    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sdh                   8:112  0    10T  0 disk
└─md4                 9:4    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sdi                   8:128  0    10T  0 disk
└─md1                 9:1    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sdj                   8:144  0    10T  0 disk
└─md3                 9:3    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sdk                   8:160  0    10T  0 disk
└─md3                 9:3    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sdl                   8:176  0    10T  0 disk
└─md2                 9:2    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sdm                   8:192  0    10T  0 disk
└─md2                 9:2    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sdn                   8:208  0   667G  0 disk
└─md100               9:100  0   2.6T  0 raid0
  └─cached          253:0    0 159.6T  0 dm    /mnt
sdo                   8:224  0   667G  0 disk
└─md100               9:100  0   2.6T  0 raid0
  └─cached          253:0    0 159.6T  0 dm    /mnt
sdp                   8:240  0    10T  0 disk
└─md2                 9:2    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sdq                  65:0    0    10T  0 disk
└─md1                 9:1    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sdr                  65:16   0    10T  0 disk
└─md3                 9:3    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sds                  65:32   0    10T  0 disk
└─md4                 9:4    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
sdt                  65:48   0   667G  0 disk
└─md100               9:100  0   2.6T  0 raid0
  └─cached          253:0    0 159.6T  0 dm    /mnt
sdu                  65:64   0    10T  0 disk
└─md4                 9:4    0    40T  0 raid0
  └─vg_nfs-lv_nfs   253:1    0 159.6T  0 lvm
    └─cached        253:0    0 159.6T  0 dm    /mnt
```
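To double-check which devices actually back the mapping, the device-mapper table can be dumped and cross-referenced against lsblk (a sketch; the major:minor pairs are the ones from the lsblk output above):

```bash
# The writeboost table line lists the backing and caching devices as
# major:minor pairs.
dmsetup table cached

# Walk the dependency chain upwards from the mapped device.
lsblk -s -o NAME,MAJ:MIN,SIZE,TYPE /dev/mapper/cached
```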
Any ideas why this could be happening?
Thanks!