Haivision / srt

Secure, Reliable, Transport
https://www.srtalliance.org
Mozilla Public License 2.0

Bad restriction: RCV Buffer <= Flow control window #700

Open maxsharabayko opened 5 years ago

maxsharabayko commented 5 years ago

SRT has a restriction that the receiver buffer must not be greater than the FC window size:

```cpp
if (m_iRcvBufSize > m_iFlightFlagSize)
{
    m_iRcvBufSize = m_iFlightFlagSize;
}
```

At the same time, the TSBPD algorithm buffers received packets in the receiver's buffer until it is time to deliver them further, so the condition does not make sense. It should rather be (sizes in bytes, bitrate in bits per second):

rcv_buf >= fc_window × packet_size + latency × bitrate / 8
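For illustration, the proposed sizing rule could be written as the following minimal sketch. The function name and parameters are hypothetical, not actual SRT option names:

```cpp
#include <cstdint>

// Minimum receiver buffer (bytes) that can hold a full flow control
// window plus all packets held by TSBPD during the latency period.
int64_t min_rcv_buf_bytes(int64_t fc_window_pkts,
                          int64_t payload_bytes,   // e.g. 1500 - 28 = 1472
                          double  latency_sec,
                          int64_t bitrate_bps)
{
    const int64_t fc_bytes      = fc_window_pkts * payload_bytes;
    const int64_t latency_bytes = static_cast<int64_t>(latency_sec * bitrate_bps / 8);
    return fc_bytes + latency_bytes;
}
```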

The number of packets in flight is also limited by the available space in the receiver buffer, which is reported back by the receiver. On top of that, File CC controls the congestion window. So in file mode this probably does not hurt much, but it still does not look reasonable.

Buffer Sizes Configuration

The default receiver buffer size is 8192 × (1500 − 28) = 12,058,624 bytes, or approximately 96 Mbits. The default flow control window size is 25,600 packets, or approximately 300 Mbits.

The target number of packets in flight (the FC window) should be (assuming the maximum payload size):

FC = bps × RTT_sec / 8 / (1500 − 28)

For example, for a 1.5 Gbps link with 150 ms RTT: FC = 1500 × 10^6 × 0.15 / 8 / 1472 = 19106 packets (or approximately 225 Mbits). For a 2.0 Gbps link with 150 ms RTT: FC = 2 × 10^9 × 0.15 / 8 / 1472 = 25475 packets (or approximately 300 Mbits).
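The arithmetic above can be checked with a short sketch; the helper below is illustrative and not part of the SRT code base:

```cpp
#include <cstdint>
#include <cstdio>

// FC = bps * RTT_sec / 8 / payload_bytes, with payload = 1500 - 28 = 1472.
int64_t fc_window_pkts(double bitrate_bps, double rtt_sec, int payload_bytes = 1472)
{
    return static_cast<int64_t>(bitrate_bps * rtt_sec / 8 / payload_bytes);
}

int main()
{
    // 1.5 Gbps link, 150 ms RTT -> 19106 packets
    std::printf("%lld\n", (long long)fc_window_pkts(1500e6, 0.150));
    // 2.0 Gbps link, 150 ms RTT -> 25475 packets
    std::printf("%lld\n", (long long)fc_window_pkts(2e9, 0.150));
}
```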

maxsharabayko commented 4 years ago

> The number of packets in flight is also limited by the available space in the receiver buffer, which is reported back by the receiver. On top of that, File CC controls the congestion window.

This is probably the source of the restriction. If the RCV buffer were greater than the flow control window, the receiver would report more available space in its buffer (via ACK) than the window value shared in the handshake.
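A simplified sketch of the interplay, assuming the sender caps its window by the minimum of the three limits mentioned in this thread; the function and parameter names are hypothetical, not the actual SRT internals:

```cpp
#include <algorithm>
#include <cstdint>

int32_t effective_snd_window(int32_t fc_window_handshake,  // peer's FC window from the handshake
                             int32_t ack_reported_space,   // free rcv-buffer space from the last ACK
                             int32_t congestion_window)    // from the congestion controller (File CC)
{
    // If the receiver buffer were larger than the FC window, the
    // ACK-reported space could exceed fc_window_handshake, and the
    // handshake value would always win the min() below anyway.
    return std::min({fc_window_handshake, ack_reported_space, congestion_window});
}
```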

ethouris commented 3 years ago

I personally think that the receiver buffer should never have been used for latency compensation. The original UDT, which supported only file transmission, treated the receiver buffer as an intermediate place between the network and the application: the application could make the buffer choke (by not reading the data), and the overflowing receiver buffer could stop the transmission (while ACKs should still be sent periodically). Once the application finally read the data from the buffer, the freed space allowed the transmission to continue.

This method cannot work properly in live mode at all. In live mode the data are sent at the appropriate speed, and this process cannot be paused; a swelling buffer must be handled completely differently. All data that come in must be received and placed in the receiver buffer.

Latency compensation should instead use a separate buffer that grows independently and drops data by itself when the application does not read them. The latency buffer, in contrast to the socket's receiver buffer, should be allowed to grow, and its growth should be controlled differently, mainly through the top bitrate allowed for reception together with the latency. These should determine the initial size, and some spare value should be granted up to which the buffer is allowed to grow. As the receiver buffer is to be reimplemented anyway, this is an opportunity to create a separate latency buffer, albeit one using "units" from the same pool as the receiver buffer.

This also gives an opportunity to solve the application-pause problem differently: when the buffer grows to the maximum allowed size, packets at the head of the buffer should be dropped in order to make space for the new ones. In the current implementation there is no good solution for this; it causes a discrepancy that cannot be repaired, so the only possible solution is to break the connection.
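A minimal sketch of the proposed latency buffer, assuming a capacity derived from the top allowed bitrate and the latency (plus spare headroom) and drop-at-head behavior when full. All names are hypothetical; this is not the actual SRT receiver buffer implementation:

```cpp
#include <cstdint>
#include <deque>
#include <vector>

class LatencyBuffer
{
public:
    LatencyBuffer(int64_t top_bitrate_bps, double latency_sec,
                  double spare_factor = 1.25, int payload_bytes = 1472)
    {
        // Initial size: bitrate * latency, with spare room to grow into.
        const double bytes = top_bitrate_bps / 8.0 * latency_sec * spare_factor;
        m_capacity_pkts = static_cast<size_t>(bytes / payload_bytes);
    }

    // Every arriving packet is accepted. If the application has stopped
    // reading and the buffer is full, drop the oldest packet at the head
    // to make room instead of choking the connection.
    void push(std::vector<uint8_t> packet)
    {
        if (m_pkts.size() >= m_capacity_pkts)
            m_pkts.pop_front();   // drop at the head
        m_pkts.push_back(std::move(packet));
    }

private:
    size_t m_capacity_pkts;
    std::deque<std::vector<uint8_t>> m_pkts;
};
```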