denadmin opened this issue 6 years ago
I took a first look at BBR, and it really looks like a method to eliminate problems caused by design decisions in TCP, which derive from the fact that TCP keeps probing the link to find its capacity.
One of the most important things introduced in UDT (our codebase) is that the receiver reacts to transmission errors immediately by sending a LOSSREPORT command back, and then only the lost packets are retransmitted ("blind retransmission" happens only when the loss report itself was lost). Additionally, the RTT is constantly measured on the receiver side by timing the interval between a sent ACK and the received ACKACK command; this calculated RTT is then sent back to the sender with the next ACK, which makes the values more accurate. This gives it a significant advantage over TCP.
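As an illustrative sketch of the ACK/ACKACK timing idea (this is not the real SRT code; class and method names here are my own invention):

```python
import time

class ReceiverRttEstimator:
    """Sketch of receiver-side RTT measurement: time the gap between
    sending an ACK and receiving the matching ACKACK, smooth it, and
    report the result back to the sender with the next ACK."""

    def __init__(self):
        self.pending = {}  # ack_seq -> timestamp when that ACK was sent
        self.rtt = None    # smoothed RTT estimate, in seconds

    def on_ack_sent(self, ack_seq, now=None):
        self.pending[ack_seq] = time.monotonic() if now is None else now

    def on_ackack_received(self, ack_seq, now=None):
        sent_at = self.pending.pop(ack_seq, None)
        if sent_at is None:
            return  # ACKACK for an unknown or already-matched ACK: ignore
        now = time.monotonic() if now is None else now
        sample = now - sent_at
        # EWMA smoothing (7/8 old + 1/8 new), in the spirit of TCP's SRTT
        self.rtt = sample if self.rtt is None else 0.875 * self.rtt + 0.125 * sample
```

Because the receiver times its own ACK round trip, the estimate does not depend on the sender's retransmission logic at all.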
The congestion control is different in Live and File mode.
In Live mode it's simple: congestion control is only about limiting the bandwidth that might be exceeded by overhead transmission (such as retransmissions). The "clean transmission" bandwidth must be controlled by the source (or, put more simply, you must know that your network connection satisfies the minimum bandwidth required by the bitrate you have configured), otherwise you'll quickly get a connection break. The network characteristics are constantly measured, but feeding them back to the application is all that can be done.
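A back-of-the-envelope sketch of that live-mode cap (the function name is mine, and the 25% default is an assumption echoing SRT's overhead-percentage idea):

```python
def live_mode_bandwidth_cap(input_bitrate_bps, overhead_pct=25):
    """Illustrative only: in live mode the sender does not probe for
    capacity; it caps output at the configured stream bitrate plus a
    fixed headroom for retransmissions and control packets."""
    return input_bitrate_bps * (1 + overhead_pct / 100.0)

# A 4 Mb/s stream with 25% headroom needs about 5 Mb/s of link capacity.
cap = live_mode_bandwidth_cap(4_000_000)
```

If the link cannot sustain the configured bitrate plus this headroom, retransmissions pile up and the connection breaks, which is exactly the failure mode described above.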
In File mode the situation is different: there is no minimum required speed, and we want to send the data as quickly as possible without making the network choke if it turns out not to be capable enough. In this case researching some "better algorithm" could be interesting, but BBR is not necessarily the best approach here, because SRT doesn't suffer from the typical problems of TCP, thanks to the characteristics described above.
If you are interested, you can take a look at the Smoother sources and see what this algorithm looks like. The original UDT algorithm exists there as the FileSmoother class.
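As a rough illustration of the pluggable-smoother idea (the real interface is C++; the Python class and method names below are hypothetical, and the constants only loosely echo UDT's file algorithm):

```python
class Smoother:
    """Hypothetical analogue of a pluggable congestion controller: the
    core calls hooks on ACK/loss events and reads back a packet-send
    period (rate control) and a congestion window (ACK clocking)."""

    def __init__(self):
        self.pkt_send_period_us = 1.0  # inter-packet gap, microseconds
        self.cwnd = 16.0               # congestion window, packets

    def on_ack(self, acked_seq): ...
    def on_loss(self, loss_list): ...

class FileSmootherSketch(Smoother):
    """AIMD-flavoured sketch: grow the window on ACK, slow the sending
    rate on loss (UDT-style 1/8 increase of the send period)."""

    def on_ack(self, acked_seq):
        self.cwnd += 1.0 / self.cwnd       # roughly additive increase

    def on_loss(self, loss_list):
        self.pkt_send_period_us *= 1.125   # back off the sending rate
```

The point is that the loss reaction acts on the inter-packet period rather than only on a window, which is where an alternative algorithm such as BBR would plug in.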
Note that QUIC, at least according to the official declarations, is a research project whose goal is to test various approaches to a better congestion control algorithm for TCP; it is simply, as this is the only possibility, a higher-level protocol using UDP as the underlying transport layer (just like we do with SRT).
@ethouris, thanks for quick reply!
I have done some benchmarks that may be interesting.
Test VM: Ubuntu, Linux 4.11.0-generic
Network speed: 100 Mb/s (hardware limited)
Sending a single file of about 1 GB (960 888 413 bytes)

| Loss    | TCP cubic (via HTTP) | TCP BBR (via HTTP) | SRT (srt-file-transmit) |
|---------|----------------------|--------------------|-------------------------|
| no loss | 79.112 s             | 79.884 s           | 80.525 s                |
| 1%      | 81.916 s             | 80.213 s           | 152.466 s               |
| 3%      | 84.914 s             | 81.696 s           | 285.365 s               |
| 5%      | 122.891 s            | 81.514 s           | 636.215 s               |
| 10%     | 850.723 s            | 82.855 s           | fails (?)               |
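To make the comparison easier, the slowdown of each stack relative to its own lossless run can be computed from the figures above (the helper function is mine):

```python
# Transfer times in seconds, from the benchmark above
# (SRT at 10% loss failed, so it has no entry).
times = {
    "cubic": {0: 79.112, 1: 81.916, 3: 84.914, 5: 122.891, 10: 850.723},
    "bbr":   {0: 79.884, 1: 80.213, 3: 81.696, 5: 81.514,  10: 82.855},
    "srt":   {0: 80.525, 1: 152.466, 3: 285.365, 5: 636.215},
}

def slowdown(stack, loss_pct):
    """Transfer time at the given loss rate, relative to the lossless run."""
    t = times[stack]
    return t[loss_pct] / t[0]
```

At 5% loss, for example, cubic is about 1.55x slower than its lossless run, BBR about 1.02x, and SRT about 7.9x, which is the gap the benchmark is pointing at.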
Maybe BBR is not very suitable for SRT, but it does look like a "better algorithm" :)
Thank you.
Hi, can you please describe your exact test setup? For example, which network emulator did you use, and with which settings?
@heidabyr , sure!
VM: Hyper-V.
First VM (server, sender):
Ubuntu x64 with kernel 4.11.0
Network limited to 100 Mb/s via ethtool
Bad network emulated by tc, with a command like `tc qdisc add dev eth0 root netem loss 10%`
HTTP server: nginx
SRT built from source yesterday
Second VM (client): Debian x64, Linux 3.16.0. No limits and no traffic shaping.
SRT also built from source yesterday
Is this enough? Let me know if any additional information is required.
Hi! That could improve behavior in congested environments. Looks very interesting.
Refs:
- For TCP: bbr
- For UDP (QUIC): chrome