OK, I made a PR that seems to do the job and also refines the way mbuffer pipes are added.
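To illustrate the idea (just a sketch, not the actual PR code; the function and dataset names here are made up), the point is simply to thread a configurable mbuffer block size (`-s`) into the send/receive pipe instead of hard-coding 128k:

```python
# Sketch only (not the actual PR code): thread a configurable mbuffer
# block size (mbuffer's -s option) into the send/receive pipe.
import shlex
import subprocess

def build_pipe(source_snapshot, target_dataset, chunk_size="2k", buffer_mem="256M"):
    """Build the zfs send | mbuffer | zfs receive shell pipeline."""
    send = "zfs send {}".format(shlex.quote(source_snapshot))
    # -s sets the block ("chunk") size, -m the total buffer memory, -q quiets output.
    buf = "mbuffer -q -s {} -m {}".format(chunk_size, buffer_mem)
    recv = "zfs receive {}".format(shlex.quote(target_dataset))
    return " | ".join([send, buf, recv])

if __name__ == "__main__":
    cmd = build_pipe("tank/data@snap1", "backup/data", chunk_size="2k")
    print(cmd)
    # subprocess.run(cmd, shell=True, check=True)  # uncomment to actually run it
```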
Why isn't it always ideal? Did you run tests with different buffer sizes, and can I see the results?
I have to redo my graphs just to be sure, because someone messed with the server today during my test session and I want to rule out external interference. For context: I did a lot of manual poking with zfs send/receive and mbuffer, and counter-intuitively found a 2k chunk size to work best in my case.
In my case the sending and receiving points are separated by about 500 km and by multiple network layers I can't change.
I'll post fresh graphs tomorrow, after making sure no one logs in and messes with my servers during the tests.
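For context, my manual poking was roughly along these lines (just a sketch: dataset names are placeholders, the target is destroyed between runs so each receive starts clean, and the real runs went over the 500 km link rather than a local pool):

```python
# Rough sketch of the manual chunk-size comparison (placeholder dataset names).
import subprocess
import time

CHUNK_SIZES = ["2k", "4k", "64k", "128k"]
SNAPSHOT = "tank/data@bench"   # placeholder source snapshot
TARGET = "backup/data"         # placeholder receive target

for size in CHUNK_SIZES:
    # Start from a clean target so each full receive is comparable.
    subprocess.run("zfs destroy -r {}".format(TARGET), shell=True)
    cmd = ("zfs send {snap} | mbuffer -q -s {size} -m 1G | zfs receive {target}"
           .format(snap=SNAPSHOT, size=size, target=TARGET))
    start = time.monotonic()
    subprocess.run(cmd, shell=True, check=True)
    print("chunk size {:>5}: {:.1f} s".format(size, time.monotonic() - start))
```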
Ah cool. I think I just arbitrarily decided it should be 128k, because the record size is also 128k or something like that.
Maybe I should omit the default, since mbuffer will then automatically choose the chunk size based on the page size, which is usually 4k? Maybe you can try that as well?
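If you want to check what page size mbuffer would fall back to on your box, something like this should do (quick sketch):

```python
# Print the system page size (usually 4k on Linux), which is what mbuffer
# reportedly falls back to when no explicit block size is given.
import os
print("page size: {} bytes".format(os.sysconf("SC_PAGE_SIZE")))
```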
4k performed worse than 2k in my case. I think your 128k rationale is a good one in general; the flexibility of the argument just seems worthwhile for some specific cases.
I'm rerunning my tests; I'll try to get you some graphs ASAP.
@psy0rz So, here they are.
Now that I've tweaked ZFS on the receiving side, the performance delta is actually quite a bit smaller (only 8.5%; before it was more like 25 to 30%).
[Graph: 2k chunk size]
[Graph: 128k chunk size]
It's of course reproducible.
Ah nice, thanks for the info!
Hey,
The default chunk size of 128k for mbuffer is not always ideal. Would it be possible either to make it configurable or to accept a PR with the code modifications?
Cheers! PEB