Open gaowayne opened 1 year ago
Seems you were corrupting memory. You must ensure the `service.read_fixed`s are finished before `std::vector<char> buf` gets destroyed.
Got it. :) Do you have email or WeChat, so that we can connect offline? :)
Just use Github please.
Hello, I read your comment and code again. The buf is outside the for loop, so it will not be freed. I think you mean that multiple I/Os writing to the same memory will corrupt the data; actually that is fine for me. I just want to measure the performance of this framework, so currently I don't need to worry about data consistency.
I see around 2000MB/s when I run it over one NVMe SSD that has 6GB/s bandwidth. Could you please shed some light on how to tune this? :)
```cpp
service.read_fixed(0, buf.data(), buf.size(), offset, 0, IOSQE_FIXED_FILE) | panic_on_err("read_fixed(1)", false);
```
This is an async operation, which returns immediately without waiting for the I/O to finish. That is to say, when `readnvme` returns and `buf` gets destroyed, there are still I/O operations running (or pending in the I/O queue) in the background. Thus a use-after-free will occur.
But why does `link_cp` work?

```cpp
off_t offset = 0;
for (; offset < insize - BS; offset += BS) {
    service.read_fixed(0, buf.data(), buf.size(), offset, 0, IOSQE_FIXED_FILE | IOSQE_IO_LINK) | panic_on_err("read_fixed(1)", false);
    service.write_fixed(1, buf.data(), buf.size(), offset, 0, IOSQE_FIXED_FILE | IOSQE_IO_LINK) | panic_on_err("write_fixed(1)", false);
}
int left = insize - offset;
if (left) {
    service.read_fixed(0, buf.data(), left, offset, 0, IOSQE_FIXED_FILE | IOSQE_IO_LINK) | panic_on_err("read_fixed(2)", false);
    service.write_fixed(1, buf.data(), left, offset, 0, IOSQE_FIXED_FILE | IOSQE_IO_LINK) | panic_on_err("write_fixed(2)", false);
}
co_await service.fsync(1, 0, IOSQE_FIXED_FILE);
```
`link_cp` queues every `read`/`write` operation with `IOSQE_IO_LINK`, which ensures all I/O operations run in sequence.

For example: READ (1) -> WRITE (2) -> READ (3) -> WRITE (4) -> FSYNC (5)

5 won't start before 4 finishes; 4 won't start before 3 finishes; ...; 2 won't start before 1 finishes.

At the end, we wait for 5 to finish with `co_await service.fsync(1, 0, IOSQE_FIXED_FILE);`, so we can ensure all queued I/O operations have correctly finished before the function returns.
Don't talk about performance before you get things correct.
Actually, I already put buf into a global variable, so it will not get freed while the process is running. Double-free and use-after-free bugs would cause a process crash; I feel they will not impact the performance.
OK, I will pre-allocate the memory buffer for this experiment, but I feel one global buf does not impact the performance result.
Hello expert,
I changed the link_cp code a little to read from one fast NVMe whose read capability is 6000MB/s.
With the code below I can only reach 2400MB/s; what could be the bottleneck?
Just run the above code with `link_cp /dev/nvme0n1`, then run `iostat` and you will see the bandwidth.