shiretu opened this issue 5 years ago
Just as a note: using such a setting (assuming the receiver will do what it is supposed to do) would result in very bad performance, since the congestion control on the sender side assumes that it receives a SACK for every other packet.
So the intent for the socket option was to allow to
Of course your suggested configuration should work, although it won't behave as you expect, I guess.
@tuexen - You were right about the performance. I basically implemented that delayed selective ACK in my app, just to see how it affects performance. It took a deep plunge.
I guess the bug report still stands, but as you said, it will not have the expected outcome when/if fixed. Are there any ways to lower the SACK count without affecting performance? I suspect that the kernel/bitrate/network interfaces/CPU is not very happy when the app sends 20k packets/sec :)
I'm trying to debug NAT traversal speed issues (see this for more details).
One of the things I have discovered is the insane amount of SACKs being sent. Literally every single packet is ACK-ed, which generates an enormous amount of traffic. At some point, the SACK sending takes a very, very long break. I would very much like to lower the frequency of SACKs to something like 1 SACK per 200 packets, or twice a second. The R/W buffers are hefty; they can hold a lot of data. The number of buffers is also 128k (`usrsctp_sysctl_set_sctp_max_chunks_on_queue`).

For this, I have used:
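Roughly the following (a minimal sketch: the standard RFC 6458 `SCTP_DELAYED_SACK` socket option plus the usrsctp sysctl defaults; `sock` stands for the already-created usrsctp socket):

```c
#include <netinet/in.h>
#include <string.h>
#include <usrsctp.h>

/* Sketch: ask for a SACK only every 200 packets, or after 500 ms. */
static void configure_sack(struct socket *sock) {
    struct sctp_sack_info si;

    /* Library-wide defaults, applied to new associations. */
    usrsctp_sysctl_set_sctp_sack_freq_default(200);          /* packets per SACK */
    usrsctp_sysctl_set_sctp_delayed_sack_time_default(500);  /* delayed SACK timer, ms */

    /* Per-socket/association setting (RFC 6458 SCTP_DELAYED_SACK). */
    memset(&si, 0, sizeof(si));
    si.sack_assoc_id = 0;   /* one-to-one style socket */
    si.sack_delay = 500;    /* ms */
    si.sack_freq = 200;     /* packets */
    (void)usrsctp_setsockopt(sock, IPPROTO_SCTP, SCTP_DELAYED_SACK,
                             &si, (socklen_t)sizeof(si));
}
```

(If I read the sources right, the stack caps the SACK delay at 500 ms per RFC 4960, so the frequency knob is the one that matters for "1 SACK per 200 packets".)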
Just to be sure, I wanted to print them:
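Something along these lines (again just a sketch, reading the values back via `usrsctp_getsockopt` and the sysctl getters):

```c
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <usrsctp.h>

/* Sketch: read the SACK settings back and print them. */
static void print_sack_settings(struct socket *sock) {
    struct sctp_sack_info si;
    socklen_t len = (socklen_t)sizeof(si);

    memset(&si, 0, sizeof(si));
    if (usrsctp_getsockopt(sock, IPPROTO_SCTP, SCTP_DELAYED_SACK, &si, &len) == 0) {
        printf("SCTP_DELAYED_SACK: delay=%u ms, freq=%u packets\n",
               si.sack_delay, si.sack_freq);
    }
    printf("sysctl defaults: delayed_sack_time=%u ms, sack_freq=%u packets\n",
           usrsctp_sysctl_get_sctp_delayed_sack_time_default(),
           usrsctp_sysctl_get_sctp_sack_freq_default());
}
```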
And the output:
Finally, I have written a very small function that looks for SACKs in the output queue on the receiver of the real data, and here is what I see:
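The gist of that check, as a sketch (instead of walking usrsctp's internal output queue, this version parses the SACK chunks out of the outgoing packets, assuming the receiver runs over AF_CONN and registers this as the `usrsctp_init()` output callback):

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t last_ctsn_ack = 0;

/* Sketch: scan every outgoing SCTP packet for SACK chunks (type 3) and
 * print the cumulative TSN ack plus the delta from the previously seen
 * one -- roughly the CTSNACK/diff values discussed below. */
int conn_output(void *addr, void *buffer, size_t length, uint8_t tos, uint8_t set_df) {
    const uint8_t *pkt = (const uint8_t *)buffer;
    size_t off = 12; /* skip the SCTP common header */

    while (off + 4 <= length) {
        uint8_t type = pkt[off];
        uint16_t chunk_len;
        memcpy(&chunk_len, pkt + off + 2, 2);
        chunk_len = ntohs(chunk_len);
        if (chunk_len < 4) {
            break; /* malformed chunk header */
        }
        if (type == 3 && off + 8 <= length) { /* SACK: cum TSN ack follows the chunk header */
            uint32_t ctsn_ack;
            memcpy(&ctsn_ack, pkt + off + 4, 4);
            ctsn_ack = ntohl(ctsn_ack);
            printf("SACK: CTSNACK=%u diff=%u\n", ctsn_ack, ctsn_ack - last_ctsn_ack);
            last_ctsn_ack = ctsn_ack;
        }
        off += (chunk_len + 3u) & ~3u; /* chunks are padded to 4 bytes */
    }

    /* ... hand the packet to the real transport here (UDP socket, DTLS, etc.) ... */
    return 0;
}
```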
Notice how `CTSNACK` is going up by 1. For clarity, `diff` is the difference between the current `CTSNACK` and the last sent `CTSNACK`. I've seen that 4 at the beginning, but never anything other than 1 after that.