multipath-tcp / mptcp

⚠️⚠️⚠️ Deprecated 🚫 Out-of-tree Linux Kernel implementation of MultiPath TCP. 👉 Use https://github.com/multipath-tcp/mptcp_net-next repo instead ⚠️⚠️⚠️

Aggregated bandwidth of two paths is larger than the sum of two paths' single TCP bandwidth. #278

Closed: Kumius closed this issue 6 years ago

Kumius commented 6 years ago

The kernel is MPTCP v0.92. Two computers are connected through a router that runs tc to limit the bandwidth, RTT, loss, etc. The client has two network cards, the server has only one, and I run `iperf3 -s` on the server. The host buffer (sender & receiver) is 6 MB, and the path conditions are as follows:

- PATH_A = [bw=30Mbit/s, rtt=20ms, loss=1%, burst=40KB]; single-TCP throughput is 5.7 Mbit/s.
- PATH_B = [bw=30Mbit/s, rtt=200ms, loss=0%, burst=40KB]; single-TCP throughput is 10.4 Mbit/s.

When running MPTCP (default scheduler), the aggregated bandwidth of the two paths is 24.8 Mbit/s, larger than 5.7 + 10.4. The Wireshark IO graph shows that PATH_B's throughput under MPTCP is almost 20 Mbit/s.
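For reference, a common way to emulate one of these paths on the router with tc is to chain netem (delay/loss) with tbf (rate/burst). This is only a sketch of how PATH_A's numbers could be reproduced; the interface name and the tbf latency value are assumptions, not taken from the issue.

```sh
# Sketch: emulate PATH_A (30 Mbit/s, 20 ms delay, 1% loss, 40 KB burst)
# on the router's egress interface (eth1 is a placeholder name).
tc qdisc add dev eth1 root handle 1:0 netem delay 20ms loss 1%
tc qdisc add dev eth1 parent 1:1 handle 10: tbf rate 30mbit burst 40kb latency 400ms
```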

[Wireshark IO graph screenshot: red line is PATH_B, green line is PATH_A.]

Kumius commented 6 years ago

Also, why can the throughput of PATH_B reach almost 20 Mbit/s when running MPTCP (two paths, two flows)? And if so, what does that mean for fairness between MPTCP and TCP?

cpaasch commented 6 years ago

Are you running the same congestion controls for both? Can you check that you indeed only have a single subflow per interface?

Also, make sure to run your tests for long enough, as the loss can introduce quite some variance.
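In case it helps, a minimal way to check these things on the out-of-tree MPTCP kernel could look like the following sketch; the sysctl names are the ones exposed by the v0.9x out-of-tree stack, and the port and duration values are assumptions.

```sh
# Congestion control used for new TCP (and MPTCP subflow) sockets:
sysctl net.ipv4.tcp_congestion_control

# Scheduler and path manager of the out-of-tree MPTCP stack:
sysctl net.mptcp.mptcp_scheduler
sysctl net.mptcp.mptcp_path_manager

# Every established subflow appears as a regular TCP connection, so counting the
# connections toward iperf3's default port (5201) shows the subflows per interface:
ss -tin 'dport = :5201'

# Run the transfer long enough to average out loss-induced variance (300 s assumed):
iperf3 -c <server-ip> -t 300
```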

Kumius commented 6 years ago

There is just one computer as the client, which has two network cards, and the server has only one NIC:

CLIENT ===> ROUTER (tc) ---> SERVER

Actually, there are two flows on each interface, but only one flow carries the main packets, which is probably caused by iperf3 itself. The congestion control algorithm is CUBIC for both, and the MPTCP scheduler is the default one. I use iperf3 to test the throughput, and it runs for 100 seconds each time. iptables is used to mark the flows by their IP addresses, and I use tc to limit the links according to the mark (see the sketch below).
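For completeness, a sketch of that mark-based shaping setup; the addresses, interface name, and exact qdisc layout are placeholders and assumptions, not the actual script used in this issue.

```sh
# Mark traffic from each client NIC (placeholder addresses).
iptables -t mangle -A PREROUTING -s 10.0.1.2 -j MARK --set-mark 1   # client NIC A
iptables -t mangle -A PREROUTING -s 10.0.2.2 -j MARK --set-mark 2   # client NIC B

# One HTB class per path on the router's egress toward the server, with netem
# adding the per-path delay/loss on top of the rate limit.
tc qdisc add dev eth2 root handle 1: htb
tc class add dev eth2 parent 1: classid 1:1 htb rate 30mbit burst 40k   # PATH_A rate
tc class add dev eth2 parent 1: classid 1:2 htb rate 30mbit burst 40k   # PATH_B rate
tc qdisc add dev eth2 parent 1:1 handle 10: netem delay 20ms loss 1%    # PATH_A delay/loss
tc qdisc add dev eth2 parent 1:2 handle 20: netem delay 200ms           # PATH_B delay
tc filter add dev eth2 parent 1: protocol ip handle 1 fw flowid 1:1
tc filter add dev eth2 parent 1: protocol ip handle 2 fw flowid 1:2
```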