Closed: geoffxy closed this issue 2 years ago
Here are my results from earlier in the project. I looked at random R/W workloads running on top of ext4. Unfortunately I did not record the block size used, so these results should only guide our investigation.
Running a 50/50 R/W workload:
```
Run status group 3 (all jobs):
   READ: bw=685MiB/s (718MB/s), 685MiB/s-685MiB/s (718MB/s-718MB/s), io=1021MiB (1070MB), run=1491-1491msec
  WRITE: bw=689MiB/s (722MB/s), 689MiB/s-689MiB/s (722MB/s-722MB/s), io=1027MiB (1077MB), run=1491-1491msec
```
Note: Because the mix is 50/50, the achieved read and write bandwidths are expected to be roughly equal. fio issues equal numbers of reads and writes, so both streams progress at the rate of the slower operation, and the two bandwidths converge.
Running an independent read/write workload (reader just doing random reads, writer doing random writes):
```
Run status group 0 (all jobs):
   READ: bw=1085MiB/s (1138MB/s), 1085MiB/s-1085MiB/s (1138MB/s-1138MB/s), io=31.8GiB (34.1GB), run=30005-30005msec
  WRITE: bw=552MiB/s (579MB/s), 552MiB/s-552MiB/s (579MB/s-579MB/s), io=16.2GiB (17.4GB), run=30004-30004msec
```
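For reference, the reported bandwidths can be cross-checked against the `io` and `run` fields from the two summaries above (a quick consistency check on the numbers, not a new measurement):

```python
# Cross-check the fio summaries: bandwidth should equal total io / runtime.
# (io in MiB, runtime in seconds, reported bw in MiB/s, all taken from above)
groups = [
    ("50/50 READ",        1021,         1.491,  685),
    ("50/50 WRITE",       1027,         1.491,  689),
    ("independent READ",  31.8 * 1024, 30.005, 1085),  # 31.8 GiB
    ("independent WRITE", 16.2 * 1024, 30.004,  552),  # 16.2 GiB
]
for name, io_mib, runtime_s, reported in groups:
    bw = io_mib / runtime_s
    # The reported values agree to well under 1%.
    assert abs(bw - reported) / reported < 0.01, name
print("all groups consistent")
```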
Closing this for now - we have enough conclusions for the P4510.
We need to get a good understanding of how NVMe SSDs behave under read/write workloads. For example, is it possible to achieve the SSD's advertised peak sequential read and write bandwidths simultaneously? The answer to this question will influence TreeLine's design and/or implementation.
We should use `fio` for our benchmarking. We should try the following experiments and see what read/write bandwidth we can achieve:
We may also want to consider the effect of different I/O paths:

- `pwrite()`/`pread()`
- `io_uring`
- `libaio`
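A minimal fio job file along these lines could serve as a starting point for the independent read/write experiment; the device path, block size, and queue depth are placeholders to tune, and swapping the `ioengine` value lets us compare the I/O paths listed above:

```ini
; Sketch of a job file -- all values below are placeholders to tune.
[global]
; Swap to io_uring, or psync for the pread()/pwrite() path.
ioengine=libaio
; Bypass the page cache so we measure the device, not memory.
direct=1
bs=4k
iodepth=32
runtime=30
time_based=1
; Device under test (placeholder).
filename=/dev/nvme0n1

; Two jobs in the same run: an independent reader and writer.
[randread]
rw=randread

[randwrite]
rw=randwrite
```

Running both jobs in one invocation reproduces the independent reader/writer setup from the results above; dropping one job (or using `rw=randrw` with `rwmixread=50`) gives the 50/50 mixed workload.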