The current implementation does not cope well with very large data sizes, since the client side does both Rx and Tx into static local buffers, meaning that testing with e.g. 4G of data takes ~8G of memory (and/or swaps like mad).
This PR improves things in this use case by:
- Adding a lower bound to the randomly selected data size, so you can constrain to e.g. 4-5G rather than just 0-5G, meaning you can concentrate testing on the larger sizes if you want to.
- Doing both Tx and Rx on the client side in randomly sized batches (also with configurable min and max); a rough sketch of this batching follows the list. Note that the server side still uses `io.Copy` and hence a buffer of implementation-defined (and likely static) size.
- Measuring and reporting the duration (really just for interest).
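For illustration, here is a minimal sketch of what the client-side changes amount to. The names (`randomSize`, `runClient`, `minBatch`, `maxBatch`) and structure are assumptions for this example, not the actual code or flags in the PR:

```go
// Sketch only: helper and parameter names are illustrative assumptions,
// not the code in this PR.
package client

import (
	"fmt"
	"io"
	"math/rand"
	"net"
	"time"
)

// randomSize picks a value in [min, max], so the data size can be
// constrained to e.g. 4-5G rather than 0-5G.
func randomSize(min, max int64) int64 {
	return min + rand.Int63n(max-min+1)
}

// runClient writes total bytes to conn and reads them back, both in
// randomly sized batches, reusing one batch-sized buffer rather than
// holding the whole payload in memory twice. It also reports the duration.
func runClient(conn net.Conn, total, minBatch, maxBatch int64) error {
	start := time.Now()
	buf := make([]byte, maxBatch)

	// Tx in randomly sized batches.
	for sent := int64(0); sent < total; {
		n := randomSize(minBatch, maxBatch)
		if sent+n > total {
			n = total - sent
		}
		rand.Read(buf[:n]) // pseudo-random payload for this batch
		if _, err := conn.Write(buf[:n]); err != nil {
			return fmt.Errorf("tx: %w", err)
		}
		sent += n
	}

	// Rx in randomly sized batches.
	for recvd := int64(0); recvd < total; {
		n := randomSize(minBatch, maxBatch)
		if recvd+n > total {
			n = total - recvd
		}
		if _, err := io.ReadFull(conn, buf[:n]); err != nil {
			return fmt.Errorf("rx: %w", err)
		}
		recvd += n
	}

	fmt.Printf("transferred %d bytes each way in %v\n", total, time.Since(start))
	return nil
}
```

The point of the batching is that peak memory is bounded by the batch size rather than by the data size, so a 4G test no longer needs ~8G of buffers.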
It also detects Tx and Rx hangs on the client side (which doesn't really relate to using larger buffers, but I had an itch).
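One plausible way to surface such hangs (again just a sketch, not necessarily the mechanism this PR uses) is to set a per-batch deadline on the connection, so a stalled Tx or Rx turns into a timeout error instead of blocking forever; the `batchTimeout` value below is an assumption:

```go
// Sketch of client-side hang detection via per-batch deadlines; the timeout
// and structure are assumptions, not necessarily what the PR does.
package client

import (
	"fmt"
	"net"
	"os"
	"time"
)

const batchTimeout = 30 * time.Second // illustrative value

// writeBatch writes one batch with a write deadline so a hung Tx fails fast
// instead of blocking indefinitely. Rx can use SetReadDeadline the same way.
func writeBatch(conn net.Conn, batch []byte) error {
	if err := conn.SetWriteDeadline(time.Now().Add(batchTimeout)); err != nil {
		return err
	}
	if _, err := conn.Write(batch); err != nil {
		if ne, ok := err.(net.Error); ok && ne.Timeout() {
			fmt.Fprintln(os.Stderr, "tx appears hung: no progress within", batchTimeout)
		}
		return err
	}
	return nil
}
```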