multipath-tcp / mptcp

⚠️⚠️⚠️ Deprecated 🚫 Out-of-tree Linux Kernel implementation of MultiPath TCP. 👉 Use https://github.com/multipath-tcp/mptcp_net-next repo instead ⚠️⚠️⚠️

Feature Request: increase size of MPTCP_MAX_ADDR #442

Closed Blackyfff closed 3 years ago

Blackyfff commented 3 years ago

Hello dev-community,

My environment: a virtualized OpenWRT home router (HR) connected to a VPS, also running OpenWRT, through OpenVPN with MPTCP in full-mesh mode. Kernel 5.4 with the mptcp-trunk patch (from around Feb 2021).

The initial connection uses an IPv4 address; the connection is initiated from the HR.

The HR has 14 devices. Five are deactivated for MPTCP with the NOMULTIPATH flag; the others are all capable of reaching the VPS. All of them have an IPv6 ULA, and some also have a globally routable IPv6 address.

The pity is that after a reboot of the HR, the last interface coming up is my IPv6 connection via PPPoE through a modem. That is the route with the shortest RTT and (together with an IPv4) the highest bandwidth. But this IPv6 address is ignored for the MPTCP connection, with:

addr6_event_handler created event for 2003:***, code 1 prio 0 idx 16 mptcp_address_worker no more space

For now I need to set the NOMULTIPATH flag on additional devices so as not to lose my best route to the VPS.
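For reference, with the out-of-tree kernel the per-interface flag is typically toggled through a patched iproute2; a sketch, where the device name is a placeholder:

```shell
# Requires the iproute2 patched for the multipath-tcp.org kernel.
# "eth0.5" is a placeholder device name.
ip link set dev eth0.5 multipath off   # set NOMULTIPATH: exclude from the mesh
ip link set dev eth0.5 multipath on    # re-enable the interface for MPTCP
```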

I would really appreciate it if that cap could be increased.

Regards Blackyfff

matttbe commented 3 years ago

Hi @Blackyfff

This is a known "limitation" of the current fullmesh path manager implementation. Allowing more would increase memory usage for each connection, and this need looks quite rare: most of the time, devices have at most 2 WANs and don't need more than 8 subflows per connection.
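The behaviour above can be illustrated with a toy model (not kernel code): the path manager keeps a fixed-size address table, and addresses discovered after the slots are full are simply ignored, as in the "no more space" log earlier in this thread. The address list and the cap of 8 are assumptions for illustration:

```shell
#!/bin/sh
# Toy model of a fixed-size address table with MPTCP_MAX_ADDR slots.
# Addresses beyond the cap are dropped first-come-first-served.
MPTCP_MAX_ADDR=8
count=0
for addr in 10.0.0.1 10.0.0.2 fd00::1 fd00::2 fd00::3 fd00::4 \
            fd00::5 fd00::6 2001:db8::1 2001:db8::2; do
    if [ "$count" -lt "$MPTCP_MAX_ADDR" ]; then
        count=$((count + 1))
        echo "stored $addr (slot $count)"
    else
        # Mirrors "mptcp_address_worker no more space" from the log above.
        echo "no more space: $addr ignored"
    fi
done
```

With 10 candidate addresses, only the first 8 get a slot; the last two (here the globally routable ones) are ignored regardless of their route quality.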

If you are still interested in getting more subflows, please read this discussion, which also includes code: https://github.com/multipath-tcp/mptcp/issues/406#issuecomment-768304476

It is probably best to select the interfaces you want to use in priority with a small script. If one of those interfaces goes down, the NOMULTIPATH flag can be removed from a backup one.
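Such a script could be a sketch along these lines, using OpenWrt's hotplug mechanism; the interface and device names are hypothetical, and the `ip link ... multipath` command assumes the iproute2 patched for the out-of-tree MPTCP kernel:

```shell
#!/bin/sh
# /etc/hotplug.d/iface/99-mptcp-backup  (OpenWrt interface hotplug hook)
# Sketch only: "wan" is a hypothetical logical interface, "eth0.2" a
# hypothetical backup device.
BACKUP=eth0.2

if [ "$ACTION" = ifdown ] && [ "$INTERFACE" = wan ]; then
    # Primary went down: let the backup device join the full mesh.
    ip link set dev "$BACKUP" multipath on
elif [ "$ACTION" = ifup ] && [ "$INTERFACE" = wan ]; then
    # Primary is back: keep the backup out to save an address slot.
    ip link set dev "$BACKUP" multipath off
fi
```

OpenWrt runs these hooks with `$ACTION` and `$INTERFACE` set on every interface state change, so the flag follows the primary link automatically.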

I suggest closing this ticket, but feel free to re-open it if needed.

Blackyfff commented 3 years ago

Thank you for the answer @matttbe

Please count my environment as one further realistic use case for more than 8 subflows :)

A further problem with this limitation arises if you have multiple separate networks over which you want to serve MPTCP connections. It may be that some of the first 8 IPv6 addresses chosen cannot reach the other endpoint. So connection attempts to those addresses fail, while the other IPs that might be able to connect are ignored. In that case it is not about having that many established subflows, it is about getting a connection at all.

And many thanks to @arter97 for sharing the patch.

matttbe commented 3 years ago

It sounds like you need more control than the "generic" fullmesh path manager can offer. A userspace path manager (mptcpd?) interfacing with the in-kernel Netlink path manager should be more appropriate.

An alternative could be to add a kernel option to allow more IPs managed by the in-kernel PM, but I don't think that's practical.