Hello everyone,
While using NVMeVirt for comparison tests on ZNS, I've noticed that certain sequential read workloads on ZNS are much slower than the conventional SSD configuration, despite having the same NAND configuration in ssd_config.h.
Used fio workload:
Using `local_clock()`, I've narrowed the cause down to the `zns_read()` function consuming too many CPU cycles. Under the specified fio workload, the 99.9th percentile of `zns_read()` takes 80 µs, while `conv_read()` takes only 681 ns (a 115x difference). This is enough of a delay to distort the actual I/O performance characteristics.

Adding an artificial `udelay(80)` inside `conv_read()` does indeed bring the performance of the two in line with each other.

Is this a design fault in NVMeVirt, or would it be possible to fix this behavior?
Thanks in advance.