quinn-rs / quinn

Async-friendly QUIC implementation in Rust
Apache License 2.0

Transfer rate / congestion control issues #1372

Open yu-re-ka opened 2 years ago

yu-re-ka commented 2 years ago

Similar to #688, but I can provide some more details.

Kernel TCP, set to BBR congestion control

[ ID] Interval           Transfer     Bitrate
[  5]   0.00-5.00   sec   492 MBytes   825 Mbits/sec                  
[  5]   5.00-10.00  sec   538 MBytes   902 Mbits/sec                  
[  5]  10.00-15.00  sec   487 MBytes   817 Mbits/sec                  
[  5]  15.00-20.00  sec   511 MBytes   857 Mbits/sec                  
[  5]  20.00-25.00  sec   486 MBytes   816 Mbits/sec                  
[  5]  25.00-30.00  sec   524 MBytes   878 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.05  sec  2.97 GBytes   849 Mbits/sec  23218             sender
[  5]   0.00-30.00  sec  2.97 GBytes   849 Mbits/sec                  receiver

Latency without load

rtt min/avg/max/mdev = 21.709/24.690/34.798/2.117 ms

Raw UDP, sending 1000 Mbps

[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-5.00   sec   544 MBytes   913 Mbits/sec  0.015 ms  34999/434386 (8.1%)  
[  5]   5.00-10.00  sec   535 MBytes   898 Mbits/sec  0.024 ms  44983/437843 (10%)  
[  5]  10.00-15.00  sec   541 MBytes   908 Mbits/sec  0.020 ms  40315/437594 (9.2%)  
[  5]  15.00-20.00  sec   553 MBytes   927 Mbits/sec  0.010 ms  31913/437792 (7.3%)  
[  5]  20.00-25.00  sec   560 MBytes   939 Mbits/sec  0.011 ms  26200/437327 (6%)  
[  5]  25.00-30.00  sec   549 MBytes   921 Mbits/sec  0.014 ms  33671/436865 (7.7%)  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-30.02  sec  3.50 GBytes  1000 Mbits/sec  0.000 ms  0/2628088 (0%)  sender
[SUM]  0.0-30.0 sec  408 datagrams received out-of-order
[  5]   0.00-30.00  sec  3.20 GBytes   918 Mbits/sec  0.014 ms  212081/2621807 (8.1%)  receiver

rtt min/avg/max/mdev = 22.225/34.632/103.211/10.036 ms

Raw UDP, sending 500 Mbps

[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-5.00   sec   298 MBytes   500 Mbits/sec  0.027 ms  0/218831 (0%)  
[  5]   5.00-10.00  sec   298 MBytes   500 Mbits/sec  0.018 ms  5/218862 (0.0023%)  
[  5]  10.00-15.00  sec   298 MBytes   500 Mbits/sec  0.027 ms  0/218796 (0%)  
[  5]  15.00-20.00  sec   298 MBytes   500 Mbits/sec  0.036 ms  4/218833 (0.0018%)  
[  5]  20.00-25.00  sec   298 MBytes   500 Mbits/sec  0.034 ms  0/218799 (0%)  
[  5]  25.00-30.00  sec   298 MBytes   500 Mbits/sec  0.033 ms  1/218888 (0.00046%)  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-30.06  sec  1.75 GBytes   500 Mbits/sec  0.000 ms  0/1315635 (0%)  sender
[SUM]  0.0-30.1 sec  138 datagrams received out-of-order
[  5]   0.00-30.00  sec  1.75 GBytes   500 Mbits/sec  0.033 ms  10/1313009 (0.00076%)  receiver

rtt min/avg/max/mdev = 19.668/33.002/72.828/8.129 ms

Quinn, file size 50M

response received in 5.148837547s - 9943.992 KiB/s
response received in 5.635715104s - 9084.916 KiB/s
response received in 6.603855449s - 7753.0464 KiB/s
response received in 7.291501485s - 7021.8735 KiB/s
response received in 5.705226162s - 8974.229 KiB/s
response received in 4.70061292s - 10892.196 KiB/s
response received in 7.319232982s - 6995.2686 KiB/s
response received in 6.268942101s - 8167.2476 KiB/s

Quinn + BBR, file size 200M

response received in 7.000843367s - 29253.617 KiB/s
response received in 5.47936758s - 37376.574 KiB/s
response received in 6.003695168s - 34112.324 KiB/s
response received in 7.537745984s - 27169.926 KiB/s
response received in 4.945155007s - 41414.273 KiB/s
response received in 5.142270122s - 39826.77 KiB/s
response received in 4.977820587s - 41142.504 KiB/s
response received in 5.500932966s - 37230.05 KiB/s

Quinn + BBR, file size 200M, loopback

response received in 1.90831443s - 268299.6 KiB/s
response received in 1.876121678s - 272903.4 KiB/s
response received in 1.814637982s - 282149.97 KiB/s
response received in 1.777227244s - 288089.22 KiB/s
response received in 1.868206421s - 274059.66 KiB/s
response received in 1.840970144s - 278114.25 KiB/s
response received in 1.839823122s - 278287.63 KiB/s

Now I wonder what the kernel BBR implementation does differently, since it achieves more than double the throughput of Quinn + BBR.
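For reference, the "Quinn + BBR" runs above go through Quinn's experimental congestion-controller hook rather than the default Cubic. A minimal sketch of enabling it (assuming the `quinn` crate's `congestion::BbrConfig` and `TransportConfig::congestion_controller_factory` as they existed around the 0.8.x series; exact signatures may differ between versions):

```rust
// Sketch: swap Quinn's default Cubic controller for the experimental BBR
// implementation. API names per the quinn 0.8.x series; hedged, not exact.
use std::sync::Arc;

use quinn::{congestion::BbrConfig, ServerConfig, TransportConfig};

fn with_bbr(mut server_config: ServerConfig) -> ServerConfig {
    let mut transport = TransportConfig::default();
    // Install a BBR controller factory; the default config uses Cubic.
    transport.congestion_controller_factory(Arc::new(BbrConfig::default()));
    server_config.transport = Arc::new(transport);
    server_config
}
```

The same `TransportConfig` change applies on the client side via `ClientConfig`.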

Ralith commented 2 years ago

Our BBR implementation (and to a lesser extent even our default Cubic implementation) is not up to date with the latest research. In particular, there are some known missed opportunities for faster ramp-up (e.g. HyStart++). Ramp-up time is a function of latency, which may explain why the gap is more significant on a real link with latency than on loopback.
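For the curious: HyStart++ (RFC 9406) leaves standard slow start early when the current round's minimum RTT rises past a threshold over the previous round's baseline, avoiding the heavy overshoot-and-loss of classic slow start. A rough, self-contained sketch of that delay-increase exit test (illustrative names and structure, not Quinn's actual code):

```rust
use std::time::Duration;

// Constants roughly following RFC 9406's recommended values.
const N_RTT_SAMPLE: u32 = 8;
const MIN_RTT_THRESH: Duration = Duration::from_millis(4);
const MAX_RTT_THRESH: Duration = Duration::from_millis(16);

/// Minimal sketch of the HyStart++ delay-increase exit test.
struct HystartPP {
    last_round_min_rtt: Option<Duration>,
    current_round_min_rtt: Option<Duration>,
    rtt_sample_count: u32,
    exit_slow_start: bool,
}

impl HystartPP {
    fn new() -> Self {
        Self {
            last_round_min_rtt: None,
            current_round_min_rtt: None,
            rtt_sample_count: 0,
            exit_slow_start: false,
        }
    }

    /// Feed one RTT sample from the current round; returns true once the
    /// delay-increase condition says to leave standard slow start.
    fn on_rtt_sample(&mut self, rtt: Duration) -> bool {
        self.current_round_min_rtt = Some(match self.current_round_min_rtt {
            Some(m) => m.min(rtt),
            None => rtt,
        });
        self.rtt_sample_count += 1;
        if self.rtt_sample_count >= N_RTT_SAMPLE {
            if let (Some(cur), Some(last)) =
                (self.current_round_min_rtt, self.last_round_min_rtt)
            {
                // RttThresh = clamp(lastRoundMinRTT / 8, MIN_..., MAX_...)
                let thresh = (last / 8).clamp(MIN_RTT_THRESH, MAX_RTT_THRESH);
                if cur >= last + thresh {
                    self.exit_slow_start = true;
                }
            }
        }
        self.exit_slow_start
    }

    /// Called at the end of each round (roughly one RTT of sending).
    fn end_round(&mut self) {
        self.last_round_min_rtt = self.current_round_min_rtt.take();
        self.rtt_sample_count = 0;
    }
}
```

With a ~25 ms baseline like the ping numbers above, a round whose RTTs jump to ~40 ms (queue building) would trip the exit after eight samples, handing off to conservative growth instead of continuing to double.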

Contributions would be very welcome here. I expect Linux's TCP congestion controller to be fairly readable, and it may be a good reference for the state of the art without being too bleeding-edge.