leidemon opened this issue 2 years ago
Hello,
There are many reasons you can be limited. A wild guess here, because the client is on a router: try to disable mptcp_checksum on both the client and the server, and use iperf3 with the -Z option.
For more ideas: https://multipath-tcp.org/pmwiki.php?n=Main.50Gbps
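For reference, those two steps would look roughly like this with the out-of-tree MPTCP kernel (sysctl name as used later in this thread; <server> is a placeholder):
# on both the client and the server: disable the MPTCP DSS checksum
sysctl -w net.mptcp.mptcp_checksum=0
# then re-run the test with iperf3's zero-copy mode
iperf3 -c <server> -Z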
Thank you for the reply! I disabled the checksum; the throughput roughly doubled, but it is still lower than normal.
root@stepclient:~# echo 0 > /proc/sys/net/mptcp/mptcp_checksum
root@stepclient:~# iperf3 -c 192.168.9.31 -Z
Connecting to host 192.168.9.31, port 5201
[ 5] local 192.168.9.123 port 49759 connected to 192.168.9.31 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.03 sec 12.4 MBytes 101 Mbits/sec 0 14.1 KBytes
[ 5] 1.03-2.05 sec 12.5 MBytes 103 Mbits/sec 0 14.1 KBytes
[ 5] 2.05-3.08 sec 12.7 MBytes 104 Mbits/sec 0 14.1 KBytes
[ 5] 3.08-4.07 sec 11.2 MBytes 96.1 Mbits/sec 0 14.1 KBytes
[ 5] 4.07-5.04 sec 10.5 MBytes 90.1 Mbits/sec 0 14.1 KBytes
[ 5] 5.04-6.08 sec 11.2 MBytes 90.7 Mbits/sec 0 14.1 KBytes
[ 5] 6.08-7.02 sec 10.0 MBytes 89.2 Mbits/sec 0 14.1 KBytes
[ 5] 7.02-8.06 sec 11.2 MBytes 90.6 Mbits/sec 0 14.1 KBytes
[ 5] 8.06-9.03 sec 11.2 MBytes 97.8 Mbits/sec 0 14.1 KBytes
[ 5] 9.03-10.01 sec 10.7 MBytes 92.1 Mbits/sec 0 14.1 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.01 sec 114 MBytes 95.4 Mbits/sec 0 sender
[ 5] 0.00-10.04 sec 114 MBytes 95.1 Mbits/sec receiver
iperf Done.
root@stepclient:~# iperf3 -c 192.168.9.31 -Z
Connecting to host 192.168.9.31, port 5201
[ 5] local 192.168.9.123 port 49763 connected to 192.168.9.31 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.01 sec 17.9 MBytes 149 Mbits/sec 0 14.1 KBytes
[ 5] 1.01-2.01 sec 23.5 MBytes 197 Mbits/sec 0 14.1 KBytes
[ 5] 2.01-3.00 sec 26.1 MBytes 220 Mbits/sec 0 14.1 KBytes
[ 5] 3.00-4.01 sec 19.2 MBytes 160 Mbits/sec 0 14.1 KBytes
[ 5] 4.01-5.00 sec 20.9 MBytes 176 Mbits/sec 0 14.1 KBytes
[ 5] 5.00-6.00 sec 10.7 MBytes 89.8 Mbits/sec 0 14.1 KBytes
[ 5] 6.00-7.01 sec 26.3 MBytes 218 Mbits/sec 0 14.1 KBytes
[ 5] 7.01-8.01 sec 20.2 MBytes 170 Mbits/sec 0 14.1 KBytes
[ 5] 8.01-9.00 sec 26.1 MBytes 220 Mbits/sec 0 14.1 KBytes
[ 5] 9.00-10.01 sec 27.8 MBytes 232 Mbits/sec 0 14.1 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.01 sec 219 MBytes 183 Mbits/sec 0 sender
[ 5] 0.00-10.05 sec 218 MBytes 182 Mbits/sec receiver
I will check the link.
And you disabled it on both the client and the server?
If yes, you will need to analyze why you have this limitation. CPU? HW acceleration? GRO/TSO? Too many subflows taking too many resources? Big enough TCP [rw]mem buffers? etc.
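A rough way to check each of those points while the test is running could be something like the following (interface name and address are examples taken from this thread, not a prescribed procedure):
mpstat 1                                    # CPU: is one core saturated? (top works too)
ethtool -k eth0.1 | grep -E 'segmentation-offload|receive-offload'   # TSO/GSO/GRO state
ss -tin dst 192.168.9.31                    # per-subflow cwnd, rtt and buffer info
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem  # current TCP [rw]mem limits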
Yes, I disabled it on both. Is there only one subflow in the default fullmesh setup? I will check the points you listed above, but I think the CPU (MTK7621) is fine, because the test performs well with MPTCP disabled.
It might be good to start with MPTCP and only one subflow, i.e. the two hosts configured with the "default" PM:
sysctl -w net.mptcp.mptcp_path_manager=default
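The active path manager can be checked, and the fullmesh PM restored afterwards, with something like:
sysctl net.mptcp.mptcp_path_manager               # show the current PM
sysctl -w net.mptcp.mptcp_path_manager=fullmesh   # switch back when done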
But I want to test the fullmesh PM with a VPS; I can now reach almost 400 Mbit/s (wan: 200 Mbit/s, wanb: 200 Mbit/s) with iperf3, but the download from the VPS still performs worse.
I encountered this problem too, and I found that the difference between the two cases was the size of the CWND. Are there any solutions for this problem?
@yulinjian is it because the window doesn't grow? Or is the max size too low? Did you try to play with the net.ipv4.tcp_wmem (sender) and net.ipv4.tcp_rmem (receiver) sysctls?
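If the window turns out to be the limit, larger buffers can be tried; the values below are only an illustrative starting point and should be sized to the (aggregate) bandwidth-delay product of the paths:
sysctl -w net.ipv4.tcp_wmem="4096 87380 16777216"   # on the sender
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"   # on the receiver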
@matttbe Thanks for your reply. I found that the difference between the two cases was the size of the CWND when I browsed the information reported by iperf3. But when I captured the packets with tcpdump, I found that the main cause was probably the packet size: the TCP payload could be more than 10000 bytes, but with MPTCP it was about 1500 bytes. Note: my test was based on two Docker containers connected with a veth pair, and tc was used for rate control. The link rate was 1 Gbps and the delay was 20 ms.
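For context, a link like the one described (1 Gbps, 20 ms delay, rate control with tc) can be emulated with netem along these lines; veth0 is a placeholder for the container's veth endpoint:
tc qdisc add dev veth0 root netem delay 20ms rate 1gbit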
@yulinjian Interesting.
GRO/TSO should work well. Or did you enable the MPTCP checksum (sysctl net.mptcp.mptcp_checksum)?
Which kernel are you using?
@matttbe I used kernel Linux 4.19.243 and I had enabled the MPTCP checksum for the above test. I just disabled it; the throughput grew a little, but it was still smaller than that of TCP.
@yulinjian there can be many reasons limiting the throughput.
Often, the best is to try with multiple parallel connections in download (e.g. iperf3 -c <server> -RZP 10) to reduce the impact of lossy links and limited buffers. But the best is to analyse traces to see where the bottleneck is (sender, receiver, network in between) and work around that.
Low throughput can be due to buffer sizes, CPU limitations, NIC configuration, network env (losses, bufferbloat, ...), bugs (e.g. not having GRO/TSO while you have it with TCP, wrong scheduler decisions) and more. Analysing that takes a bit of time but there are many tools available to do that.
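To analyse such traces, a capture can be taken on both ends during the test and then compared (interface and file names are placeholders; port 5201 is iperf3's default):
tcpdump -i eth0 -s 100 -w sender.pcap port 5201     # on the sender
tcpdump -i eth0 -s 100 -w receiver.pcap port 5201   # on the receiver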
@matttbe Thanks for your suggestion; I'll try the above methods to check.
Hi, MPTCP team: I ran an MPTCP test between an OpenWrt router (v0.94) and Ubuntu 20.04 (v0.95) with the iperf3 tool, with Ubuntu 20.04 as the server. The issue is that when I enable MPTCP, the WAN interface (eth0.1) can only reach 130 Mbits/sec, while if I disable mptcp_enabled, it can reach 674 Mbits/sec.
I use the setup as follows:
It lowers the Ethernet bitrate; can you give me some advice?
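For reference, the comparison described above corresponds roughly to the following (sysctl name from the out-of-tree MPTCP kernel; <server> is a placeholder):
sysctl -w net.mptcp.mptcp_enabled=0   # plain-TCP baseline (~674 Mbits/sec here)
iperf3 -c <server>
sysctl -w net.mptcp.mptcp_enabled=1   # with MPTCP (~130 Mbits/sec here)
iperf3 -c <server>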