multipath-tcp / mptcp

⚠️⚠️⚠️ Deprecated 🚫 Out-of-tree Linux Kernel implementation of MultiPath TCP. 👉 Use https://github.com/multipath-tcp/mptcp_net-next repo instead ⚠️⚠️⚠️

MPTCP doesn't establish new subflows in mininet #456

Closed tob-00 closed 2 years ago

tob-00 commented 2 years ago

Hello everyone :) I am facing some problems using MPTCP and Mininet together. I am running Ubuntu 20.04 in a VM and have installed both Mininet and MPTCP. Running `sudo dmesg | sudo grep MPTCP` returns `MPTCP: stable release v0.95.1` and `sudo mn --version` returns `2.3.0`, so both should be installed fine. I also added the auto-routing files as described here: https://multipath-tcp.org/pmwiki.php/Users/ConfigureRouting. If I run my topo `2_hosts.py`, I cannot capture any traffic with Wireshark. With my topo `2_hosts_n_switches.py` I see MPTCP traffic, but it does not establish new subflows. The weird thing, though, is that if I run an iperf client and server on the two hosts via `xterm h1 h2`, then MPTCP does establish new subflows. I appreciate any help: I don't know why the first script shows no MPTCP traffic, nor why MPTCP in general doesn't establish new subflows. My source code is enclosed. Greetings, Tobias

src.zip

matttbe commented 2 years ago

Hi Tobias,

I also added the auto routing files

Just to be sure: do you have Network Manager running in each node (network namespace)? It probably isn't, so the routing likely needs to be configured in each node manually. Isn't that what you are doing, in fact?
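For reference, the per-interface policy routing that the auto-configuration scripts set up looks roughly like this (a sketch; the interface names and addresses are assumptions taken from the topology script, and the commands must run inside the node, e.g. via `h1.cmd(...)`):

```shell
# One routing table per interface so MPTCP can send from each source address.
# Assumed: h1 has h1-eth0 with 10.0.1.1 and h1-eth1 with 10.0.1.2.
ip rule add from 10.0.1.1 table 1
ip route add 10.0.0.0/8 dev h1-eth0 scope link table 1

ip rule add from 10.0.1.2 table 2
ip route add 10.0.0.0/8 dev h1-eth1 scope link table 2
```

Without such rules, replies from the secondary addresses leave via the wrong interface and the extra subflows never complete their handshake.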

If I run my Topo 2_hosts.py, I cannot capture any traffic using Wireshark

How/where do you capture the traffic with Wireshark? Typically tcpdump is used from each node, no? Anyway, it is probably best to check whether the routes are OK. Is traffic actually generated between your two hosts, or do you get a connection error?
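Capturing from inside each node can be done like this (a sketch; the interface name and pcap path are assumptions):

```shell
# From the Mininet CLI, run tcpdump inside h1's network namespace:
mininet> h1 tcpdump -i h1-eth0 -w /tmp/h1-eth0.pcap &
# ...generate traffic, then open the pcap in Wireshark on the host afterwards.
```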

For my topo 2_hosts_n_switches.py I see MPTCP traffic but it does not establish new subflows.

What's your Path Manager? Do you not use "fullmesh"?
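On the out-of-tree kernel, the path manager and scheduler are sysctls; checking and setting them looks like this (a sketch):

```shell
sysctl net.mptcp.mptcp_path_manager            # should print: fullmesh
sudo sysctl -w net.mptcp.mptcp_path_manager=fullmesh
sysctl net.mptcp.mptcp_scheduler               # default, roundrobin or redundant
```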

tob-00 commented 2 years ago

Hi matttbe, thank you for your fast response! I used the auto-configuration files from MPTCP and iperf to test the bandwidth. I managed to establish multiple subflows (I had forgotten to wait a second between the server and client iperf calls), but I don't get the whole bandwidth. I am using fullmesh and LIA. I tried default and redundant as the scheduler and got 1.27 Mbit/s with default and 230 Kbit/s with redundant. But shouldn't I get around 3 Mbit/s with the default scheduler when I have three subflows of 1 Mbit/s each? I capture the "any" interface with Wireshark, filtered to MPTCP packets only, and now I can see everything; only the bandwidth should be higher, shouldn't it? This is my code:

```python
#!/usr/bin/python

import os
import time
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.util import dumpNodeConnections
from mininet.log import setLogLevel
from mininet.cli import CLI
from mininet.link import TCLink


class CustomTopo(Topo):

    def build(self, n=2):
        h1 = self.addHost('h1', ip='10.0.1.1')
        h2 = self.addHost('h2', ip='10.0.2.1')

        for i in range(n):
            switch = self.addSwitch('s%s' % (i + 1))
            self.addLink(h1, switch, cls=TCLink, bw=1, delay='10ms')
            self.addLink(switch, h2, cls=TCLink, bw=1)


def create(n):
    print('*** building mininet topology')
    topo = CustomTopo(n)
    net = Mininet(topo, link=TCLink)
    net.start()
    print('*** mininet created successfully')

    print('*** configure network')
    h1 = net.get('h1')
    h2 = net.get('h2')
    config_network(net, h1, h2, n)

    print('*** routing table for h1')
    print(h1.cmd('ip route'))
    print('*** routing table for h2')
    print(h2.cmd('ip route'))

    print('*** running client-server application')
    #h1.cmd('python3 ./apps/server.py &> ./logs/server.log &')
    #time.sleep(2)
    #h2.cmd('python3 ./apps/client.py &> ./logs/client.log')
    h1.cmd('iperf -s &> ./logs/server.log &')
    time.sleep(2)
    h2.cmd('iperf -i 0.5 -n 3M -c 10.0.1.1 &> ./logs/client.log')

    # link failures
    #time.sleep(5)
    #h1.intf('h1-eth0').ifconfig('down')
    #time.sleep(5)
    #h1.intf('h1-eth0').ifconfig('up')

    print('*** communication completed')

    CLI(net)

    net.stop()


def config_network(net, h1, h2, n):
    for i in range(n):
        h1.setIP('10.0.1.' + str(i + 1), intf='h1-eth' + str(i))
        h2.setIP('10.0.2.' + str(i + 1), intf='h2-eth' + str(i))


if __name__ == '__main__':
    n = 3
    setLogLevel('info')
    create(n)
```

or as a file here: 2_hosts_n_switches.zip
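A back-of-envelope check of the expected numbers (a sketch; it assumes ideal aggregation across subflows with no TCP/MPTCP header overhead):

```python
# Three subflows over three 1 Mbit/s links, as in the topology above.
links_mbit = [1.0, 1.0, 1.0]
aggregate_mbit = sum(links_mbit)                 # ideal aggregate: 3.0 Mbit/s

# iperf -n 3M transfers 3 MBytes = 24 Mbit of payload.
transfer_mbit = 3 * 8
ideal_seconds = transfer_mbit / aggregate_mbit   # ~8 s at full aggregation

print(aggregate_mbit, ideal_seconds)
```

So 1.27 Mbit/s is well below even a single-subflow baseline once overhead is accounted for, which points at a configuration or scheduling problem rather than pure protocol overhead.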

matttbe commented 2 years ago

There can be a lot of reasons (CPU, buffers, network config, IO, etc.) not to use the whole BW.

Best is to start with Fullmesh and Cubic for MPTCP and transfer data for a few seconds (min: iperf3 -t 3 -Z -c (...)).

If you still have issues, you might have to analyse packet traces to see if you are limited by the sender or receiver.

Also check the behaviour with plain TCP, and maybe modify your system config to reach higher BW (disable MPTCP's checksum, etc.) → http://multipath-tcp.org/pmwiki.php?n=Main.50Gbps (but it is likely not an issue for you to reach 3 Mbit/s in your VM, unless the VM has no acceleration enabled).
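The checksum and baseline-comparison knobs mentioned above are sysctls on the out-of-tree kernel (a sketch; apply on both endpoints):

```shell
# 0 disables MPTCP's DSS checksum; saves CPU, both peers should agree.
sudo sysctl -w net.mptcp.mptcp_checksum=0

# For a plain-TCP baseline, temporarily disable MPTCP entirely:
sudo sysctl -w net.mptcp.mptcp_enabled=0
```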

matttbe commented 2 years ago

Feel free to re-open this issue if you have new info to share related to that.