multipath-tcp / mptcp

⚠️⚠️⚠️ Deprecated 🚫 Out-of-tree Linux Kernel implementation of MultiPath TCP. 👉 Use https://github.com/multipath-tcp/mptcp_net-next repo instead ⚠️⚠️⚠️

Weird Default Scheduler behavior in Mininet #446

Closed mdhuynh closed 2 years ago

mdhuynh commented 2 years ago

I set up a simple experiment in Mininet with two hosts (a client with two interfaces and a server) and 4 switches, and configured routing manually for MPTCP. However, the default scheduler sends packets over two paths when I use iperf. Is that normal? Also, the default value of fullmesh creates 4 subflows; are they divided 2 per interface? And when fullmesh == 2, the number of subflows is 8 and the server has 4 subflows. Is this normal? Thank you very much!

matttbe commented 2 years ago

> However, the default scheduler is sending packets into two paths when I use iperf. Is it normal?

Yes, the default scheduler sends packets over multiple paths if any are available and if required: in short, it sends data on the subflow with the lowest estimated RTT first. If there is "no more room" to send data on that path, the next subflow is used, and so on.
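For reference, on the out-of-tree mptcp.org kernel the packet scheduler can be inspected and changed via sysctl. A sketch, assuming the sysctl names used by that kernel (exact values depend on which scheduler modules were built):

```shell
# Show the currently active MPTCP packet scheduler
sysctl net.mptcp.mptcp_scheduler

# Select the default (lowest-RTT-first) scheduler discussed here
sysctl -w net.mptcp.mptcp_scheduler=default

# Other schedulers shipped with the out-of-tree kernel include
# 'roundrobin' and 'redundant' (the latter sends every segment
# on all subflows, which is NOT what the default scheduler does).
```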

> Also, default value of fullmesh creates 4 subflows, is it divided 2 for each interface.

It creates 4 paths because both the client and the server have 2 "public" IPs. So with a "full-mesh" method, it will create 2*2 subflows.
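As a sketch, this is how the fullmesh path manager is typically enabled on the out-of-tree mptcp.org kernel (sysctl names as documented for that kernel; they do not apply to the upstream mptcp_net-next implementation):

```shell
# Enable MPTCP and select the fullmesh path manager
sysctl -w net.mptcp.mptcp_enabled=1
sysctl -w net.mptcp.mptcp_path_manager=fullmesh

# With 2 usable IPs on the client and 2 advertised IPs on the
# server, the fullmesh PM attempts 2*2 = 4 subflows.
```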

> And when fullmesh == 2, the number of subflows is 8 and server has 4 subflows. Is this normal?

Which "2" are you talking about? With fullmesh, it will create all possible paths. If you have v4 and v6 on each interface in your setup, you will have 22 (v4) + 22 (v6) subflows.

Does it answer your questions? Can we close this ticket?

mdhuynh commented 2 years ago

Thank you for your kind reply! I still have some questions regarding LowRTT and FullMesh:

Thank you! (Scripts and pcap attached: mptcp.zip)

matttbe commented 2 years ago

> So I understand that LowRTT only sends on one path at any given time, the one with the lowest RTT, and switches when the cwnd of that subflow is full. I have a Mininet setup with two paths, the first with 40 ms RTT and the second with 80 ms RTT (code attached in the zip file). However, once the second interface joins the MPTCP session (Wireshark pcap attached in the zip file), why are both paths sending continuously (like the redundant scheduler in theory)?

If your app sends a bunch of data, like iPerf does, the cwnd of the path with the lowest RTT will quickly be full. Then the second subflow's cwnd will be filled up, etc. In the end, both cwnds grow to a certain point and, depending on the configured buffers, both subflows are used "at the same time": the MPTCP packet scheduler queues packets on both subflows.

> I understand that fullmesh creates 2 subflows for each interface by default.

Not necessarily: the client creates a mesh from all usable IPs it has to all IPs advertised by the server. In other words, with the client having 2 IPs, A and B, if you enable the fullmesh PM on the server, it will advertise its second IP. On the client side, the PM will then try to create 4 subflows: AA, AB, BA, BB. In your case, I guess there is no route for all combinations, so only AA and BB are created.
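For the missing AB/BA combinations, the usual fix on a multihomed client is per-source-address policy routing, as documented for the out-of-tree kernel. A sketch, where the addresses, device names and gateways are hypothetical placeholders for the Mininet topology:

```shell
# Route traffic sourced from each client IP out of its own interface,
# so every source/destination combination has a usable route.

# Interface 1 (IP A)
ip rule add from 10.0.1.1 table 1
ip route add 10.0.1.0/24 dev h1-eth0 scope link table 1
ip route add default via 10.0.1.254 dev h1-eth0 table 1

# Interface 2 (IP B)
ip rule add from 10.0.2.1 table 2
ip route add 10.0.2.0/24 dev h1-eth1 scope link table 2
ip route add default via 10.0.2.254 dev h1-eth1 table 2
```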

> And in Wireshark, I can see that it creates 4 subflows immediately, even when the interface 2 IP is not advertised yet.

I only see 2 TCP streams: 2 SYN and 2 SYN+ACK. I can also see some ICMP messages, I guess because some routes are not possible (AB/BA).

mdhuynh commented 2 years ago

This helps explain a lot. Thank you very much! I appreciate your time.