Ysurac / openmptcprouter

OpenMPTCProuter is an open source solution to aggregate multiple internet connections using Multipath TCP (MPTCP) on OpenWrt
https://www.openmptcprouter.com/
GNU General Public License v3.0

IPsec low bandwidth #802

Closed blainvillem closed 4 years ago

blainvillem commented 4 years ago

Good morning, sir,

I have two IPsec tunnels between two pfSense routers. One leaves my local network through OverTheBox and the other through OpenMPTCProuter; both go to an OVH network.

Expected Behavior

I'd like both tunnels to work equally well, ideally with higher throughput for the tunnel going through OpenMPTCProuter.

Actual Behavior

The tunnel going through OverTheBox has a throughput of about 8 MB/s, while the one going through OpenMPTCProuter is at 300 KB/s.

Specifications

blainvillem commented 4 years ago

EDIT:

When I disable either WAN interface on OpenMPTCProuter, the throughput in my IPsec tunnel becomes good again. I think the problem comes from the MPTCP protocol.

Ysurac commented 4 years ago

You can try another TCP congestion control and another Multipath TCP scheduler in Network->MPTCP.
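For reference, the same settings can also be tried from the router's command line; a minimal sketch, assuming the mptcp.org v0.9x kernel that OpenMPTCProuter ships (the exact scheduler/congestion-control names available depend on which kernel modules are loaded):

```shell
# Sketch for OMR's mptcp.org (v0.9x) kernel; run as root on the router.
# See which congestion control algorithms this kernel actually offers:
sysctl net.ipv4.tcp_available_congestion_control

# Try an MPTCP-coupled congestion control such as olia:
sysctl -w net.ipv4.tcp_congestion_control=olia

# Switch the MPTCP scheduler (this kernel knows default/roundrobin/redundant):
sysctl -w net.mptcp.mptcp_scheduler=redundant
```

Changes made this way are lost on reboot; LuCI's Network->MPTCP page persists them.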

blainvillem commented 4 years ago

You can try another TCP congestion control and another Multipath TCP scheduler in Network->MPTCP.

Hello, I've made all the possible changes to the Multipath TCP scheduler and congestion control, but nothing improves the situation; we remain stuck at about 512 kbps.

Do you have another idea? The problem can't come from the tunnel itself, since an identical tunnel works through OverTheBox.

blainvillem commented 4 years ago

(screenshots: MPTCP settings, interface parameters)

slitsevych commented 4 years ago

Hi @blainvillem ! Have you tried increasing the fullmesh subflows to 2 or more? Although my infrastructure is not exactly the same as yours, we also route through pfSense. In our case OMR/MPTCP works fine with the fullmesh path manager, 2 fullmesh subflows, and "olia" as the congestion control mode. We also switched from Glorytun-TCP to Glorytun-UDP and use "Master interface selection - no change" at "cgi-bin/luci/admin/system/openmptcprouter/settings". Besides that, we've disabled TCP Fast Open and set up all relevant interfaces (LAN, WAN1, WAN2) as bridges over the system eth interfaces:

Screenshot from 2020-01-22 11-20-04

I hope some of these suggestions will be helpful
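The path manager and subflow count suggested above can also be set from the router's shell; a sketch, assuming OMR's mptcp.org v0.9x kernel with the mptcp_fullmesh module loaded:

```shell
# Sketch for the mptcp.org v0.9x kernel used by OMR; run as root on the router.
# Select the fullmesh path manager:
sysctl -w net.mptcp.mptcp_path_manager=fullmesh

# Open 2 subflows per address pair (module parameter of mptcp_fullmesh):
echo 2 > /sys/module/mptcp_fullmesh/parameters/num_subflows
```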

blainvillem commented 4 years ago

Hi @blainvillem ! Have you tried to increase fullmesh subflows to 2 or more? […]

Hello, first of all I wanted to thank you for your help.

I tried "fullmesh" with 2 fullmesh subflows and "olia", but it didn't change the throughput in the tunnel; it is still stuck at about 512 kbps.

Concerning Glorytun, I'm already using UDP; I haven't changed the default configuration.

Configuring the LAN / WAN1 / WAN2 interfaces as bridges didn't work: I no longer have internet on my local network. Here are the errors shown in the preview:

P.S.: I couldn't find where to disable "TCP Fast Open".

(screenshots: bridge configuration errors)

blainvillem commented 4 years ago

I think you have at least 3 network cards in your OpenMPTCProuter, while I have only one. My OpenMPTCProuter is a virtual machine on a VMware ESXi host, which is why it's impossible for me to bridge several different cards.

But I can add virtual network cards if needed, if this is the solution to our problem.

What is weird is that network throughput is very good from a computer using OpenMPTCProuter as its default gateway, and that the tunnel works fairly well with only one of the two WANs enabled on OpenMPTCProuter. The tunnel gets stuck at about 512 kbps only when both interfaces are active, with one as master and the other set to "MPTCP".
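One way to narrow this down is to compare raw (non-IPsec) aggregated throughput with one and then both WANs enabled, for example with iperf3; a diagnostic sketch, where <vps-ip> is a placeholder for the OMR VPS address:

```shell
# Diagnostic sketch; <vps-ip> is a placeholder, not a real address.
# 1) On the VPS, start a server:  iperf3 -s
# 2) From a LAN client behind OMR, with both WANs enabled:
iperf3 -c <vps-ip> -t 30
# 3) Disable one WAN in Network->Interfaces, then measure again:
iperf3 -c <vps-ip> -t 30
# If plain TCP aggregates fine but the IPsec tunnel collapses only with both
# WANs active, the problem is how the encapsulated flow interacts with MPTCP,
# not the links themselves.
```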

slitsevych commented 4 years ago

Hi again! I'm really sorry to hear that none of the suggestions helped. I would like to add that in our case we also run OpenMPTCProuter as a KVM virtual machine created within a local Proxmox cluster. Interfaces are added as virtual NICs:

Screenshot from 2020-01-22 13-39-21

All of them are also bridged from corresponding VLAN interfaces added manually in /etc/network/interfaces and configured in pfSense, so it is something like this: vlan20 (pfsense) --> vmbr20 bridge (proxmox node network) --> net2 (as defined in the VM hardware page) --> eth2 (as listed in the kernel) --> wan2 (as initially in mptcprouter) --> br-wan2 (after creating the bridge in OMR Interfaces). We decided to use bridged mode after several tests showed that, in our setup, gateways and routes are more stable with it.
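For illustration, the vlan20 --> vmbr20 step of that chain might look as follows on the Proxmox node; a hypothetical sketch of /etc/network/interfaces, with eno1 as an assumed physical NIC name:

```
# Hypothetical /etc/network/interfaces fragment on the Proxmox node.
# eno1 is an assumed physical NIC; adjust names to your hardware.
auto vlan20
iface vlan20 inet manual
    vlan-raw-device eno1

auto vmbr20
iface vmbr20 inet manual
    bridge-ports vlan20
    bridge-stp off
    bridge-fd 0
```

The VM's net2 device is then attached to vmbr20 in the Proxmox hardware page.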

Not sure if this helps, but I just wanted to add these small nuances.

P.S.: You can disable TCP Fast Open, change the master interface selection, etc. in Advanced Settings under System --> OpenMPTCProuter:

Screenshot from 2020-01-22 13-48-59

blainvillem commented 4 years ago

Thank you, I just tried it: no change with the "TCP Fast Open" setting. Concerning the bridged network cards, I don't know if that will change much, since throughput is very good when I reach the tunnel without aggregating connections.

I'll wait to hear back from Mr. Yannick Chabanois; if you have other proposals, I'd be happy to try them.

I'm especially interested in the IPsec configuration of your pfSense ("AES GCM 3DES SHA256 encryption...").

Even through OverTheBox, or directly from my two ISPs, I don't get very high throughput in the tunnel (3 to 8 Mbps maximum) on a connection of several hundred Mbps.

blainvillem commented 4 years ago

After several tests during the day, I realized that the problem probably comes more from my pfSense firewall routers, or from the fact that they are hosted on a VMware ESXi.

I get 1.5 Mbps throughput through OpenMPTCProuter and 3.5 Mbps through OverTheBox.
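One thing worth checking on a virtualized pfSense is whether the hypervisor actually exposes AES-NI to the guest; without it, AES-based IPsec falls back to slow software crypto. A sketch for a Linux guest (on pfSense/FreeBSD the rough equivalent is `dmesg | grep -i aesni`):

```shell
# Check whether the CPU's AES-NI flag is visible inside this (virtual) machine.
# ESXi can mask CPU features depending on the VM's EVC/CPU compatibility mode.
if grep -q -w aes /proc/cpuinfo; then
    echo "AES-NI present"
else
    echo "AES-NI absent"
fi
```

If the flag is absent inside the VM but present on the host, adjusting the VM's CPU feature settings in ESXi may recover most of the IPsec throughput.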

blainvillem commented 4 years ago

https://forum.netgate.com/topic/149905/ipsec-low-throughput

See that thread for more information.

blainvillem commented 4 years ago

UP

github-actions[bot] commented 4 years ago

This issue is stale because it has been open 120 days with no activity. Remove stale label or comment or this will be closed in 5 days