yu-re-ka opened this issue 2 years ago (Open)
Our BBR implementation (and, to a lesser extent, even our default CUBIC implementation) is not up to date with the latest research. In particular, there are some known missed opportunities for faster ramp-up (e.g. HyStart++). Ramp-up speed is a function of latency, which may explain why the gap is more significant on a real link with latency than on loopback.
Contributions would be very welcome here. I expect Linux's TCP congestion control code to be fairly readable, and it may be a good reference for the state of the art without being too bleeding-edge.
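For context on the HyStart++ suggestion: its core is a per-round RTT check that exits standard slow start before losses occur. A minimal sketch of that check, with constants taken from RFC 9406 (the function shape is illustrative, not Quinn's actual API):

```python
# HyStart++ slow-start exit heuristic (RFC 9406), simplified sketch.
# Constants are from the RFC; everything else is illustrative.
MIN_RTT_THRESH = 0.004   # 4 ms lower clamp on the RTT-increase threshold
MAX_RTT_THRESH = 0.016   # 16 ms upper clamp
N_RTT_SAMPLE = 8         # RTT samples required in a round before deciding

def should_enter_css(last_round_min_rtt, current_round_min_rtt, samples_in_round):
    """Return True if slow start should move to Conservative Slow Start.

    An RTT increase beyond a clamped fraction (1/8) of the previous
    round's minimum RTT signals queue build-up, so HyStart++ stops
    doubling cwnd early instead of waiting for packet loss.
    """
    if samples_in_round < N_RTT_SAMPLE:
        return False  # not enough evidence this round
    rtt_thresh = min(MAX_RTT_THRESH, max(MIN_RTT_THRESH, last_round_min_rtt / 8))
    return current_round_min_rtt >= last_round_min_rtt + rtt_thresh

# Example: 40 ms baseline, 46 ms this round -> threshold is 5 ms, so exit.
```

The point for Quinn: this lets slow start run at full speed on a high-latency link until the RTT signal fires, rather than exiting early or overshooting into loss.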
Similar to #688, but I can provide some more details:
- Kernel TCP, set to BBR congestion control
- latency without load
- Raw UDP sending 1000 Mbps
- Raw UDP sending 500 Mbps
- Quinn, file size 50 MB
- Quinn + BBR, file size 200 MB
- Quinn + BBR, file size 200 MB, loopback
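For reproducibility: the report doesn't say how the kernel-TCP baseline was configured, but the usual way to select BBR for kernel TCP is via sysctl (fq pacing is recommended alongside BBR on older kernels that lack internal TCP pacing):

```shell
# Select BBR for kernel TCP (requires the tcp_bbr module).
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Verify the active congestion control algorithm.
sysctl net.ipv4.tcp_congestion_control
```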
Now I wonder what the kernel BBR implementation does differently, since it achieves more than double the throughput of Quinn + BBR.