multipath-tcp / mptcp

⚠️ Deprecated: out-of-tree Linux kernel implementation of Multipath TCP. Use https://github.com/multipath-tcp/mptcp_net-next instead. ⚠️

Iperf doesn't increase bandwidth #175

Open delinage opened 7 years ago

delinage commented 7 years ago

I have installed MPTCP on a client and on a server (both Ubuntu 16.04), I've configured the routing tables (I have 2 WiFi interfaces), and I have verified (with ifstat) that all interfaces are being used when I run an iperf connection.

My problem is that if I run `iperf -s` on the server and `iperf -c 10.0.0.3` on the client, I get better bandwidth when I'm using just 1 interface on the client and 1 on the server than when I use all of them. If the protocol works, I should be getting better bandwidth, but that is not the case... so I wonder if I have to use a specific iperf configuration, or whether something is wrong?
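For context, the per-interface routing described above is usually set up with one routing table per source address, roughly as below. This is only a sketch following the multipath-tcp.org configuration guide; the interface names, addresses, and gateway here are hypothetical.

```sh
# Hypothetical setup: wlan0 = 10.0.0.1, wlan1 = 10.0.0.2,
# both in 10.0.0.0/24 behind gateway 10.0.0.254.
# One table per interface, so each subflow leaves through
# the interface that owns its source address.
ip rule add from 10.0.0.1 table 1
ip rule add from 10.0.0.2 table 2

ip route add 10.0.0.0/24 dev wlan0 scope link table 1
ip route add default via 10.0.0.254 dev wlan0 table 1

ip route add 10.0.0.0/24 dev wlan1 scope link table 2
ip route add default via 10.0.0.254 dev wlan1 table 2

# Catch-all default route for ordinary traffic
ip route add default scope global nexthop via 10.0.0.254 dev wlan0
```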

GinesGarcia commented 7 years ago

Have you checked that you are actually sending traffic over both interfaces, for example with tcpdump? If so, what scheduler and congestion control are you using?

Regards, Ginés.


delinage commented 7 years ago

I have checked it, yes. And I'm using the default scheduler.
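For readers reproducing this: on the out-of-tree mptcp.org kernel, the scheduler, path manager, and congestion control can be inspected via sysctl. A sketch; the knob names below assume that kernel and may differ on other builds.

```sh
sysctl net.mptcp.mptcp_enabled          # 1 when MPTCP is active
sysctl net.mptcp.mptcp_scheduler        # "default" = lowest-RTT-first
sysctl net.mptcp.mptcp_path_manager     # e.g. "fullmesh"
sysctl net.ipv4.tcp_congestion_control  # e.g. "cubic", or a coupled one such as "olia"
```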

GinesGarcia commented 7 years ago

OK, I have been using iperf without any specific configuration (MPTCP should be transparent to the upper layers).

Have you got packet traces of those experiments?

Regards, Ginés.


delinage commented 7 years ago

I have made Wireshark captures of them, is that what you mean?

GinesGarcia commented 7 years ago

Yes, maybe I can help you if you share them :)


delinage commented 7 years ago

I've compressed them so I could upload them.

This one, iperf_MPTCP_test3.zip, is a capture on the server of 2 iperf sessions using 2 subflows (10.0.0.1 and 10.0.0.2 are the client, while 10.0.0.3 and 10.0.0.4 are the server).

And this one, iperf_1Wifi1Eth_Test2-3.zip, is a capture with just one subflow.

GinesGarcia commented 7 years ago

First test:

- 10.0.0.1:49035 -> 10.0.0.3:5001
- 10.0.0.1:33251 -> 10.0.0.4:5001
- 10.0.0.2:41297 -> 10.0.0.3:5001
- 10.0.0.2:59494 -> 10.0.0.4:5001

Second test:

- 10.0.0.1:51455 -> 10.0.0.3:5001

Have you compared one connection with two subflows (the first test without one of the iperfs) with the second one? If so, what is (more or less) the RTT of each wireless link?

delinage commented 7 years ago

Do you mean comparing this test, iperf_1Wifi2Eth_Tests.zip, with the last one?

I still get better results with 1 subflow than with 2 subflows...

I don't know the RTT right now, but I can calculate it from the Wireshark captures, right?

delinage commented 7 years ago

From 10.0.0.1 to 10.0.0.3 it is 0.000041777 s; from 10.0.0.1 to 10.0.0.4 it is 0.000030349 s.

More or less in that last capture.
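One way to extract such per-ACK RTT samples from a capture, assuming a reasonably recent tshark and using a hypothetical file name:

```sh
# tcp.analysis.ack_rtt is Wireshark's per-ACK RTT estimate;
# printing ip.dst alongside separates the samples per subflow.
tshark -r iperf_MPTCP_test3.pcap \
  -Y "tcp.analysis.ack_rtt" \
  -T fields -e ip.dst -e tcp.analysis.ack_rtt
```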

GinesGarcia commented 7 years ago

OK, the RTTs are more or less similar, so you are not suffering from head-of-line blocking problems.

Regarding the last capture (iperf_1Wifi2Eth_Tests.zip): are you using two subflows over the same physical interface (the one with 10.0.0.1) through 2 disjoint paths?

I'm a little bit lost about what you want to achieve. I suggest you run a simple experiment: configure MPTCP on your client with:

delinage commented 7 years ago

Regarding the first part: I've set up a server and a client on two different laptops, connected through routers and switches, in order to test whether MPTCP is useful throughput-wise. But the tests I'm running show that 1 subflow performs better than multiple subflows. That's not what I was expecting, and not what the documentation of the implementation says: the protocol is supposed to increase throughput under these conditions. That's what I'm trying to achieve, and that's why I am confused.

Then, in the first test (iperf_MPTCP_test3.zip) I enabled 2 IP addresses on the server and 2 on the client, expecting to get 2 subflows, but I got 4, because it seems that the protocol in fullmesh mode makes every possible connection.

That's why I brought an interface down in (iperf_1Wifi2Eth_Tests.zip), to get just 2 subflows: I don't know how to force the implementation to create just 2 subflows with 2 IPs on the server and 2 IPs on the client (a possible knob for this is sketched after this comment).

Later, I compared these 2 tests with the 1-subflow test (iperf_1Wifi1Eth_Test2-3.zip) and I saw that the results were not as I expected. So I came here to ask.

Regarding the second part:

```
:~$ ftp ftp.multipath-tcp.org
ftp: connect to address 130.104.230.45: Connection timed out
ftp: connect: Network is unreachable
```

It seems like the server is not working at the moment...
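On limiting the number of subflows: with the mptcp.org kernel, the fullmesh path manager can be steered by excluding interfaces from MPTCP entirely. This sketch assumes the patched iproute2 distributed with that kernel; the interface name is hypothetical.

```sh
# Exclude eth1 from MPTCP, so fullmesh creates no subflows over it.
ip link set dev eth1 multipath off

# Alternatively, keep the subflow but use it only when the others fail.
ip link set dev eth1 multipath backup
```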

delinage commented 7 years ago

I've realized that when I use multiple subflows, iperf doesn't fill them up, so that might be the cause of the problem. Is there any way to make iperf generate (more) traffic faster?
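A single iperf stream can itself be the bottleneck (CPU- or buffer-limited) rather than the links. One thing commonly tried, sketched here with standard iperf2 options, is several parallel streams with a larger socket buffer:

```sh
# -P 4: four parallel client streams
# -w 1M: request a larger socket buffer per stream
# -t 30 -i 1: 30-second test with 1-second interval reports
iperf -c 10.0.0.3 -P 4 -w 1M -t 30 -i 1
```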

ghostli123 commented 7 years ago

Hi, I have a similar issue when using MPTCP v0.90 on Ubuntu 14.04.

There are two subnets in total, with one subnet per MPTCP subflow, so two MPTCP subflows are established for data transmission. iperf is used for the data transmission at the application layer.

However, the overall throughput of MPTCP is not as good as that of single-path TCP. Moreover, the MPTCP throughput is not stable, ranging from 250 Mbps to 550 Mbps (average throughput over a 30-second iperf test).

Thus, I have two questions: 1) why is the MPTCP throughput not stable, and 2) in what cases is MPTCP throughput not as good as regular TCP throughput? Thank you!

yannisT commented 7 years ago

Hi all,

I am using the latest MPTCP version from git on a LAN testbed where Ethernet switches form disjoint paths among 1 multihomed server and 2 multihomed clients.

During some tests I witnessed contradictory throughput estimations from several (monitoring) tools such as iperf, cbm, /proc/net/dev, netstat, or even scp. Specifically, when running in isolation, MPTCP fully utilizes the network resources according to all monitoring tools. Nevertheless, when MPTCP competes for bandwidth with unicast connections, iperf and cbm show poor MPTCP performance compared to the unicast connections, while /proc/net/dev and netstat report the expected performance superiority. I validated the bandwidth superiority of MPTCP in both cases by transferring actual data via scp, so I assume that iperf and cbm use some system calls that may not be completely compatible with the MPTCP implementation. Could this be true?

Best Regards, Yannis
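A crude way to cross-check the tools' numbers against the kernel's own interface counters, which see all subflows regardless of how an application-level tool measures (a sketch; the interface name is hypothetical):

```sh
# Sample eth0 RX bytes twice, 10 s apart, and print Mbit/s.
rx1=$(awk '/eth0:/ {print $2}' /proc/net/dev)
sleep 10
rx2=$(awk '/eth0:/ {print $2}' /proc/net/dev)
echo "scale=1; ($rx2 - $rx1) * 8 / 10 / 1000000" | bc
```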

yosephmaloche commented 6 years ago

Hello, I am facing a similar kind of issue. Please, is there anyone who can help? As mentioned previously, I created two IP addresses for a single host (eth0 and eth1). I used 4 hosts and 4 switches in a ring topology: h1, h2, h3, h4 from left to right, circular. I used an SDN controller to control the flows on the links. When I check the GUI of the controller, it shows 8 hosts.

When I send traffic from h1 to h2 and from h3 to h4, the MPTCP throughput is as expected, which is nearly double. However, when I send traffic from h1 to h3 and from h2 to h4, it is totally bad. I was confused. After I added the switches and hosts, here is how I created the links:

```python
info('*** Add links\n')
linkProp = {'bw': 2, 'delay': '10ms'}
net.addLink(s1, s2, cls=TCLink, **linkProp)
net.addLink(s2, s3, cls=TCLink, **linkProp)
net.addLink(s3, s4, cls=TCLink, **linkProp)
net.addLink(s4, s1, cls=TCLink, **linkProp)
net.addLink(h1, s1, cls=TCLink, **linkProp)
net.addLink(h1, s4, cls=TCLink, **linkProp)
net.addLink(h2, s1, cls=TCLink, **linkProp)
net.addLink(h2, s2, cls=TCLink, **linkProp)
net.addLink(h3, s2, cls=TCLink, **linkProp)
net.addLink(h3, s3, cls=TCLink, **linkProp)
net.addLink(h4, s3, cls=TCLink, **linkProp)
net.addLink(h4, s4, cls=TCLink, **linkProp)

info('*** Starting network\n')
```

Please don't hesitate to comment. I don't know what I am missing. I attached what it looks like in the ONOS controller (screenshot from 2018-01-15 03-40-17).