Closed HKalbasi closed 5 months ago
I don't think such a large buffer is needed in the reader mode to use the full bandwidth of the link, since packets arrive in order and we can deliver received packets to the application immediately.
A big buffer is needed in the reader because it controls the advertised window size. The writer will not send data unless it knows the reader has buffer space for it.
The expected bandwidth is min(rx_buffer, tx_buffer) / rtt. The results you're getting are roughly in that order of magnitude.
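To make the formula concrete, here is a small sketch of that throughput ceiling. The buffer sizes and RTT below are made-up illustrative numbers, not taken from the benchmark:

```rust
// Throughput ceiling imposed by socket buffers:
// expected bandwidth ≈ min(rx_buffer, tx_buffer) / rtt.
fn expected_bandwidth(rx_buffer: usize, tx_buffer: usize, rtt_secs: f64) -> f64 {
    rx_buffer.min(tx_buffer) as f64 / rtt_secs
}

fn main() {
    // 64 KiB buffers over a 50 ms RTT link cap throughput at roughly
    // 1.3 MB/s, no matter how fast the underlying link is.
    let bw = expected_bandwidth(65536, 65536, 0.050);
    println!("expected bandwidth: {:.1} MB/s", bw / 1e6);
}
```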
In theory you could hack the reader to advertise a much bigger window size, hoping that by the time the data arrives the application will have consumed earlier data and there will be buffer space. If there isn't, the reader would be forced to drop the data on the floor, which the sender would see as packet loss, and that would trash performance, so it would be tricky to get right. I'm not aware of any network stack that does this.
Thanks for the response. It makes sense. I will close this issue, as it doesn't seem correct to do this.
Would you find an example containing link delay helpful? If so, I can clean up my changes and open a PR.
Another question: would it be possible to make the buffer growable for applications that can afford heap allocation, so that the buffer grows large only for sockets that actually need it?
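A minimal sketch of what such a growable receive buffer could look like. This is not smoltcp API; the type and its growth policy (doubling up to a hard cap) are hypothetical, just to illustrate the idea of advertising a small window by default and growing per-socket on demand:

```rust
use std::collections::VecDeque;

/// Hypothetical growable receive buffer: starts with a small capacity
/// bound and doubles it on demand, up to a hard limit.
struct GrowableBuffer {
    data: VecDeque<u8>,
    limit: usize, // current capacity bound (what a socket would advertise)
    max: usize,   // hard upper bound on growth
}

impl GrowableBuffer {
    fn new(initial: usize, max: usize) -> Self {
        Self { data: VecDeque::with_capacity(initial), limit: initial, max }
    }

    /// Free space, i.e. what a TCP socket would advertise as its window.
    fn window(&self) -> usize {
        self.limit - self.data.len()
    }

    /// Enqueue received bytes, growing the limit when the buffer is
    /// too full and we are still below the hard cap. Returns how many
    /// bytes were accepted.
    fn enqueue(&mut self, bytes: &[u8]) -> usize {
        if self.window() < bytes.len() && self.limit < self.max {
            self.limit = (self.limit * 2).min(self.max);
        }
        let n = bytes.len().min(self.window());
        self.data.extend(&bytes[..n]);
        n
    }

    /// Application-side read; draining frees window space again.
    fn dequeue(&mut self, n: usize) -> Vec<u8> {
        self.data.drain(..n.min(self.data.len())).collect()
    }
}
```

The trade-off is that the advertised window can only grow once data has already arrived, so the first bandwidth-delay product worth of data is still throttled by the initial size.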
I added a DelayInjector device to the benchmark example to simulate links with delay. Here is my code, and here are the results:

I don't think that a huge buffer is needed in the reader mode to use the bandwidth of the link, since packets arrive in order and we can deliver received packets to the application immediately. In the writer mode, the huge buffer is needed because we must keep the sent data buffered until the ack is received, so we need a buffer of size RTT × bandwidth. Am I right? Is it possible to solve this problem without changing the buffer size?
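The core idea behind such a delay-injecting device can be sketched as a timestamped queue: a packet "sent" at time t only becomes receivable at t + delay. This is generic illustrative code, not the actual smoltcp `Device` implementation from the changes mentioned above:

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

/// Sketch of a delay injector: packets are queued together with a
/// release timestamp and only delivered once the configured one-way
/// delay has elapsed.
struct DelayQueue {
    delay: Duration,
    queue: VecDeque<(Instant, Vec<u8>)>,
}

impl DelayQueue {
    fn new(delay: Duration) -> Self {
        Self { delay, queue: VecDeque::new() }
    }

    /// A packet transmitted at `now` becomes receivable after `delay`.
    fn transmit(&mut self, now: Instant, packet: Vec<u8>) {
        self.queue.push_back((now + self.delay, packet));
    }

    /// Deliver the oldest packet whose release time has passed, if any.
    fn receive(&mut self, now: Instant) -> Option<Vec<u8>> {
        match self.queue.front() {
            Some((ready, _)) if *ready <= now => {
                self.queue.pop_front().map(|(_, p)| p)
            }
            _ => None,
        }
    }
}
```

Wrapping both directions of a link in such a queue gives an RTT of roughly twice the configured delay, which is enough to reproduce the buffer-limited throughput discussed above.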