Ysurac / openmptcprouter

OpenMPTCProuter is an open source solution to aggregate multiple internet connections using Multipath TCP (MPTCP) on OpenWrt
https://www.openmptcprouter.com/
GNU General Public License v3.0
1.72k stars · 252 forks

OMR not aggregating #3068

Open SonicFM opened 6 months ago

SonicFM commented 6 months ago

Expected Behavior

Together with the VPS, both of my Internet lines should be used via MPTCP. MPTCP Support-Check should show a working state for both links.

Current Behavior

Since the update to 0.60beta1, this is no longer the case. (Of course, I updated the VPS to Debian 12 and then executed your script; the keys do match.) MPTCP Support-Check states unsupported on both links, so traffic only goes through the defined master interface. I'm using Shadowsocks as proxy and Glorytun TCP as VPN.

What am I missing?

Thanks for your work!

Specifications

Ysurac commented 6 months ago

The MPTCP Support Check is now disabled for the 6.1-based release. What do you have on the Status page?

xzjq commented 6 months ago

I seem to be experiencing this as well. Understanding that the "Support Check" page is broken, I still do not achieve any aggregated traffic. The bandwidth page shows essentially 0 traffic via the other two WANs, with the only traffic being on the master interface. Status page is fine (all green checkmarks)

The "established connections" page shows only connections using the master interface IP (740 connections currently listed, which seems... high; essentially all are listed as ESTAB).

Is the Fullmesh output broken too?

192.168.12.236 id 36 subflow fullmesh dev eth0.300 
192.168.1.31 id 37 subflow fullmesh dev eth0.400 
98.xxx.xxx.xxx id 38 subflow fullmesh dev eth0.500 

I experienced this lack of aggregation as well on the previous beta of 0.60 using the 6.1 kernel on a different OMR guest & VPS. As noted in #3067, my local OMR hardware/network/WANs aggregate using 5.4 kernel-based MPTCP on a different VPS, but I have yet to get the 6.1 MPTCP based system to aggregate.

Current environment:
OMR: Version 0.60beta1-6.1 (loaded via v0.60beta1-6.1-r0+24041-74e7f8ebbd image)
VPS: Version 0.1029-test 6.1.0-14-amd64

Ysurac commented 6 months ago

The "MPTCP fullmesh" tab is not broken; it just doesn't have exactly the same output as the 5.4-based release. Which proxy is used? Do you both use VLANs on interfaces? Did you do a fresh config or update an old config?

SonicFM commented 6 months ago

For me it's exactly the same as for @xzjq.

Dashboard shows all green, traffic only through master interface. Fullmesh says ok for both links, but nothing is aggregated.

I only have VLANs on the LAN side (Guest, IoT and LAN). The WANs do not have any VLANs.

Proxy is currently V2Ray, but the problem persists regardless of the proxy used.

I updated from 0.59.1-5.4, so I updated the old config. (Screenshots: OMR1, OMR2, OMR3)

xzjq commented 6 months ago

The OMR and VPS referenced in this ticket are completely fresh, created specifically to test the 6.1 kernel-based implementation. This is "fresh out of the box"; OMR is using Shadowsocks and is set to OpenVPN for the VPN traffic.

I use VLANs locally, though as mentioned in the other ticket, this 6.1-based OMR guest VM is on the same hardware node as the 5.4-based OMR guest VM that aggregates just fine (i.e. using these VLANs). The prior beta installation (using an OMR VM 0.60beta1-6.1 and a different VPS, 0.1029-test 6.1.0-13-amd64) did not aggregate either.

For this current snapshot installation, there are now 1840 connections listed in the "Established Connections" page, all using only the master WAN. This seems like a connection leak.

xzjq commented 6 months ago

There are 3570 connections now on this 6.1-based system (all solely using the master WAN IPv4 address), and this router is not being used for traffic. Status page is still green checkmarks.

logread doesn't seem to show much of note, except 52 entries of daemon.err /usr/bin/ss-redir[20632]: remote recv: Connection reset by peer (this OMR instance has been up for approximately 36 hrs).

Really seems like a connection leak. Is there an intended limit? I see my 5.4-based system has 57 connections and it has been up for 24 hrs.

Ysurac commented 6 months ago

What is the result of uname -a and sysctl -a | grep mptcp on the VPS, and is the public IP of the VPS in ip mptcp endpoint? On the router, for aggregation, did you test with omr-test-speed and check the Network->MPTCP "bandwidth" tab? For the many connections, it's the second part of the screen I would need: the destination port.
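The checks requested above can be gathered in one pass. A minimal sketch, assuming a POSIX shell; each command may be absent or print nothing on hosts without MPTCP tooling, so failures are tolerated rather than aborting:

```shell
# Collect the requested diagnostics in one report. sysctl and ip may be
# missing or empty on non-MPTCP hosts; their errors are suppressed.
report="$(
  uname -a
  sysctl -a 2>/dev/null | grep mptcp
  ip mptcp endpoint show 2>/dev/null
)"
printf '%s\n' "$report"
```

Running the same snippet on both the VPS and the router makes the two outputs easy to paste side by side.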

xzjq commented 6 months ago

What is the result of uname -a and sysctl -a | grep mptcp on the VPS, and is the public IP of the VPS in ip mptcp endpoint?

I will provide the data from two separate 6.1-based VPS installs (neither of which aggregate)


First, the one mentioned so far in this ticket, created in the last day or so from the test version of the script. The VPS script was run on a fresh install of debian 12:

Linux hostname 6.1.0-14-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.64-1 (2023-11-30) x86_64 GNU/Linux

net.ipv4.tcp_available_ulp = mptcp tls
net.mptcp.add_addr_timeout = 120
net.mptcp.allow_join_initial_addr_port = 1
net.mptcp.checksum_enabled = 0
net.mptcp.enabled = 1
net.mptcp.pm_type = 0
net.mptcp.stale_loss_cnt = 4

ip mptcp endpoint
10.128.128.88 id 1 signal dev ens18 (<= this is NAT'd behind IP masquerade, unlike the other VPS below)
fe80::be24:11ff:fe6b:27b id 2 signal dev ens18
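Note that this VPS advertises a private (NAT'd) address as its MPTCP signal endpoint; whether that is the culprit here is unconfirmed, but a signal endpoint the router cannot reach from the internet would prevent subflow joins. A quick sanity-check sketch, using the endpoint output above as sample input (on a real VPS, feed it `ip mptcp endpoint show` instead):

```shell
# Flag MPTCP "signal" endpoints advertising private or link-local
# addresses, which a remote MPTCP peer across the internet cannot join.
# Sample data mirrors the output pasted above.
endpoints='10.128.128.88 id 1 signal dev ens18
fe80::be24:11ff:fe6b:27b id 2 signal dev ens18'

flagged="$(printf '%s\n' "$endpoints" | awk '
  /signal/ && ($1 ~ /^10\./ || $1 ~ /^192\.168\./ || $1 ~ /^172\.(1[6-9]|2[0-9]|3[01])\./ || $1 ~ /^fe80:/) {
    print "non-public signal endpoint: " $1 " (id " $3 ")"
  }')"
printf '%s\n' "$flagged"
```

Both endpoints above get flagged, whereas the second VPS below (signalling 38.xx.xx.xx) would pass clean.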


Second, the first 6.1-based VPS that I tried that didn't aggregate either. This is based on the script from the beta announcement. The VPS started with debian 11, which the VPS script upgraded:

Linux hostname 6.1.0-13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.55-1 (2023-09-29) x86_64 GNU/Linux

net.ipv4.tcp_available_ulp = mptcp tls
net.mptcp.add_addr_timeout = 120
net.mptcp.allow_join_initial_addr_port = 1
net.mptcp.checksum_enabled = 1
net.mptcp.enabled = 1
net.mptcp.pm_type = 0
net.mptcp.stale_loss_cnt = 4

ip mptcp endpoint
38.xx.xx.xx id 1 signal dev eth0


On the router, for aggregation, did you test with omr-test-speed and check with Network->MPTCP, "bandwidth" tab?

Yes. Essentially 0 traffic on the other two WANs (e.g. ~100 Kbps on each while the master WAN has 50 Mbps).

For the many connection, it's the second part of the screen I would need: the destination port.

It has currently pared the list down to "only" 1100 connections right now, all using the master WAN device IPv4. The destination port is 65101.
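The per-port breakdown asked for above can be produced with a short pipeline. A sketch over illustrative placeholder address pairs (on the router, the real local:peer columns come from `ss -tn state established`):

```shell
# Tally established connections by destination port. Sample lines are
# placeholders in local:port peer:port form; replace with the real
# columns from `ss -tn state established` on the router.
sample='192.0.2.10:41200 198.51.100.5:65101
192.0.2.10:41201 198.51.100.5:65101
192.0.2.10:41300 198.51.100.5:65301'

counts="$(printf '%s\n' "$sample" | awk '{
  n = split($2, a, ":")       # peer column; last field is the port
  ports[a[n]]++
} END { for (p in ports) print p, ports[p] }' | sort)"
printf '%s\n' "$counts"
```

With thousands of ESTAB entries all on one port (65101, the Shadowsocks port, as noted below), this makes a suspected connection leak easy to quantify.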

xzjq commented 6 months ago

Good news: I pointed the new OMR guest VM (based on snapshot) to the older of the two 6.1-based VPS installations (referenced above) and achieved aggregation after I rebooted the OMR (not the VPS).

The Established Connections page showed 4418 connections after a speed test, all using the Master WAN IPv4 (only) and almost all were in ESTAB state (4393 / 4418). All but one was to port 65101 (the other was to 65301).

Maybe 4,000+ active connections is normal via just one of the WANs? It seems like a lot.

Ysurac commented 6 months ago

Port 65101 is the Shadowsocks port; you shouldn't have so many connections to it. Are you using P2P, or really nothing?

xzjq commented 6 months ago

Established Connections page shows 6371 connections at the moment, all using the master WAN interface IPv4. When I checked last night I did confirm that they were all using unique local ports.

I changed master WAN designation last night and can confirm this is based on the master WAN, not something about the other underlying device/interface (i.e. the connections all changed to the new master WAN IPv4 address, whereas they were all the other master WAN IPv4 before).

No P2P. Just regular home internet use with some client side VPNs, some web browsing/email/etc, and intermittent speedtest.net tests.

xzjq commented 6 months ago

I'm now also encountering another disaggregation issue I experienced before on the 6.1-based beta, where the individual WAN interfaces constantly toggle on/off. The status page shows red X's on one or more interfaces, which are subsequently replaced by green checks, and the cycle repeats on the next status page refresh.

This was a major reason I initially used the 5.4-based OMR/VPS (i.e. the 6.1-based OMR/VPS status page would not stay "green", whereas 5.4-based would).

The below is using the brand-new OMR snapshot VM, but it is connected to the older 6.1-based VPS (Linux hostname 6.1.0-13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.55-1 (2023-09-29) x86_64 GNU/Linux), i.e. the VPS created weeks ago from the script in the beta announcement, not the snapshot.

Is there a way to upgrade the beta VPS to snapshot, or do I have to destroy the VPS and recreate it?

Right now, the eth0.300 and eth0.400 interfaces are both toggling, so there are hundreds of these messages in logread, approximately one set every 7 seconds. Like the beta OMR/VPS, it didn't always do this; sometimes it was stable. It was stable last night, but degenerated to this state hours later.


Dec 12 21:26:40 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Reload MPTCP config for eth0.300
Dec 12 21:26:40 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Multipath eth0.300 switched to on (from off)
Dec 12 21:26:43 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Reload MPTCP config for eth0.400
Dec 12 21:26:43 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Multipath eth0.400 switched to on (from off)
Dec 12 21:26:45 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Reload MPTCP config for eth0.300
Dec 12 21:26:45 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Multipath eth0.300 switched to on (from off)
Dec 12 21:26:47 OpenMPTCProuter daemon.err /usr/bin/ss-redir[19269]: remote recv: Connection reset by peer
Dec 12 21:26:52 OpenMPTCProuter daemon.err /usr/bin/ss-redir[19270]: remote recv: Connection reset by peer
Dec 12 21:26:52 OpenMPTCProuter daemon.err /usr/bin/ss-redir[19271]: remote recv: Connection reset by peer
Dec 12 21:26:52 OpenMPTCProuter daemon.err /usr/bin/ss-redir[19270]: remote recv: Connection reset by peer
Dec 12 21:26:52 OpenMPTCProuter daemon.err /usr/bin/ss-redir[19270]: remote recv: Connection reset by peer
Dec 12 21:26:52 OpenMPTCProuter daemon.err /usr/bin/ss-redir[19270]: remote recv: Connection reset by peer
Dec 12 21:26:52 OpenMPTCProuter daemon.err /usr/bin/ss-redir[19270]: remote recv: Connection reset by peer
Dec 12 21:26:52 OpenMPTCProuter daemon.err /usr/bin/ss-redir[19270]: remote recv: Connection reset by peer
Dec 12 21:26:52 OpenMPTCProuter daemon.err /usr/bin/ss-redir[19271]: remote recv: Connection reset by peer
Dec 12 21:26:52 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Reload MPTCP config for eth0.400
Dec 12 21:26:53 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Multipath eth0.400 switched to on (from off)
Dec 12 21:26:54 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Reload MPTCP config for eth0.300
Dec 12 21:26:55 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Multipath eth0.300 switched to on (from off)
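The flapping can be quantified directly from logread. A sketch that counts "switched to on" events per interface; the sample lines are copied from the log above, and on a live router the input would come from `logread` itself:

```shell
# Count how often post-tracking flips each interface back on -- a rough
# flap counter. Sample lines copied from the logread output above.
sample='Dec 12 21:26:40 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Multipath eth0.300 switched to on (from off)
Dec 12 21:26:43 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Multipath eth0.400 switched to on (from off)
Dec 12 21:26:45 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Multipath eth0.300 switched to on (from off)'

flaps="$(printf '%s\n' "$sample" | awk '/switched to on/ {
  for (i = 1; i < NF; i++) if ($i == "Multipath") c[$(i+1)]++
} END { for (ifc in c) print ifc, c[ifc] }' | sort)"
printf '%s\n' "$flaps"
```

A count growing by one set every ~7 seconds, as described above, distinguishes steady flapping from an isolated link drop.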

Ysurac commented 6 months ago

Can you, on the router, run ls -l /sys/class/net/ and multipath?

xzjq commented 6 months ago

/sys/class/net/:
bonding_masters
erspan0 -> ../../devices/virtual/net/erspan0
eth0 -> ../../devices/pci0000:00/0000:00:12.0/virtio1/net/eth0
eth0.1024 -> ../../devices/virtual/net/eth0.1024
eth0.300 -> ../../devices/virtual/net/eth0.300
eth0.400 -> ../../devices/virtual/net/eth0.400
eth0.500 -> ../../devices/virtual/net/eth0.500
gre0 -> ../../devices/virtual/net/gre0
gretap0 -> ../../devices/virtual/net/gretap0
ifb4eth0.300 -> ../../devices/virtual/net/ifb4eth0.300
ifb4eth0.400 -> ../../devices/virtual/net/ifb4eth0.400
ifb4eth0.500 -> ../../devices/virtual/net/ifb4eth0.500
ifb4tun0 -> ../../devices/virtual/net/ifb4tun0
ip6gre0 -> ../../devices/virtual/net/ip6gre0
ip6tnl0 -> ../../devices/virtual/net/ip6tnl0
lo -> ../../devices/virtual/net/lo
sit0 -> ../../devices/virtual/net/sit0
teql0 -> ../../devices/virtual/net/teql0
tun0 -> ../../devices/virtual/net/tun0


multipath:
erspan0 is deactivated
eth0 is in default mode
eth0.1024 is deactivated
eth0.300 is deactivated
eth0.400 is in default mode
eth0.500 is in default mode
gre0 is deactivated
gretap0 is deactivated
ifb4eth0.300 is deactivated
ifb4eth0.400 is deactivated
ifb4eth0.500 is deactivated
ifb4tun0 is deactivated
ip6gre0 is deactivated
ip6tnl0 is deactivated
lo is deactivated
sit0 is deactivated
teql0 is deactivated
tun0 is deactivated


Now, if I do something like watch -n1 multipath, I can see eth0.300 and eth0.400 cycling between deactivated and default mode every ~2 seconds. I captured the above during a few-second interval while eth0.300 was deactivated. The status page reflects this cyclic condition.

This corresponds to hundreds of logread messages of:

Dec 14 22:01:03 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Reload MPTCP config for eth0.300
Dec 14 22:01:04 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Multipath eth0.300 switched to on (from off)
Dec 14 22:01:11 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Reload MPTCP config for eth0.400
Dec 14 22:01:11 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Multipath eth0.400 switched to on (from off)
Dec 14 22:01:13 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Reload MPTCP config for eth0.300
Dec 14 22:01:13 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Multipath eth0.300 switched to on (from off)
Dec 14 22:01:15 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Reload MPTCP config for eth0.400
Dec 14 22:01:16 OpenMPTCProuter user.notice post-tracking-001-post-tracking: Multipath eth0.400 switched to on (from off)

xzjq commented 6 months ago

journalctl on the VPS is displaying a rapid succession of errors, including python stack traces:


Dec 14 21:06:44 vps omr-admin.py[1299]: ERROR: Exception in ASGI application
Dec 14 21:06:44 vps omr-admin.py[1299]: Traceback (most recent call last):
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/lib/python3/dist-packages/uvicorn/protocols/http/h11_impl.py", line 384, in run_asgi
Dec 14 21:06:44 vps omr-admin.py[1299]: result = await app(self.scope, self.receive, self.send)
Dec 14 21:06:44 vps omr-admin.py[1299]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/lib/python3/dist-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
Dec 14 21:06:44 vps omr-admin.py[1299]: return await self.app(scope, receive, send)
Dec 14 21:06:44 vps omr-admin.py[1299]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/fastapi/applications.py", line 1106, in __call__
Dec 14 21:06:44 vps omr-admin.py[1299]: await super().__call__(scope, receive, send)
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/starlette/applications.py", line 122, in __call__
Dec 14 21:06:44 vps omr-admin.py[1299]: await self.middleware_stack(scope, receive, send)
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/starlette/middleware/errors.py", line 184, in __call__
Dec 14 21:06:44 vps omr-admin.py[1299]: raise exc
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/starlette/middleware/errors.py", line 162, in __call__
Dec 14 21:06:44 vps omr-admin.py[1299]: await self.app(scope, receive, _send)
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/starlette/middleware/exceptions.py", line 79, in __call__
Dec 14 21:06:44 vps omr-admin.py[1299]: raise exc
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/starlette/middleware/exceptions.py", line 68, in __call__
Dec 14 21:06:44 vps omr-admin.py[1299]: await self.app(scope, receive, sender)
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
Dec 14 21:06:44 vps omr-admin.py[1299]: raise e
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
Dec 14 21:06:44 vps omr-admin.py[1299]: await self.app(scope, receive, send)
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/starlette/routing.py", line 718, in __call__
Dec 14 21:06:44 vps omr-admin.py[1299]: await route.handle(scope, receive, send)
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/starlette/routing.py", line 276, in handle
Dec 14 21:06:44 vps omr-admin.py[1299]: await self.app(scope, receive, send)
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/starlette/routing.py", line 66, in app
Dec 14 21:06:44 vps omr-admin.py[1299]: response = await func(request)
Dec 14 21:06:44 vps omr-admin.py[1299]: ^^^^^^^^^^^^^^^^^^^
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/fastapi/routing.py", line 274, in app
Dec 14 21:06:44 vps omr-admin.py[1299]: raw_response = await run_endpoint_function(
Dec 14 21:06:44 vps omr-admin.py[1299]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/fastapi/routing.py", line 193, in run_endpoint_function
Dec 14 21:06:44 vps omr-admin.py[1299]: return await run_in_threadpool(dependant.call, *values)
Dec 14 21:06:44 vps omr-admin.py[1299]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/starlette/concurrency.py", line 41, in run_in_threadpool
Dec 14 21:06:44 vps omr-admin.py[1299]: return await anyio.to_thread.run_sync(func, *args)
Dec 14 21:06:44 vps omr-admin.py[1299]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/anyio/to_thread.py", line 33, in run_sync
Dec 14 21:06:44 vps omr-admin.py[1299]: return await get_asynclib().run_sync_in_worker_thread(
Dec 14 21:06:44 vps omr-admin.py[1299]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
Dec 14 21:06:44 vps omr-admin.py[1299]: return await future
Dec 14 21:06:44 vps omr-admin.py[1299]: ^^^^^^^^^^^^
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/lib/python3.11/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
Dec 14 21:06:44 vps omr-admin.py[1299]: result = context.run(func, *args)
Dec 14 21:06:44 vps omr-admin.py[1299]: ^^^^^^^^^^^^^^^^^^^^^^^^
Dec 14 21:06:44 vps omr-admin.py[1299]: File "/usr/local/bin/omr-admin.py", line 2277, in openvpn
Dec 14 21:06:44 vps omr-admin.py[1299]: initial_md5 = hashlib.md5(file_as_bytes(open('/etc/openvpn/tun0', 'rb'))).hexdigest()
Dec 14 21:06:44 vps omr-admin.py[1299]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dec 14 21:06:44 vps omr-admin.py[1299]: FileNotFoundError: [Errno 2] No such file or directory: '/etc/openvpn/tun0'

(there is indeed no such file, though an /etc/openvpn/tun0.conf file does exist)
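The crash is an unguarded open() on a path that may not exist while a .conf sibling does. The maintainer notes below that this was already fixed; purely for illustration, a defensive pattern sketched in shell (the paths in the demo are throwaway temp files, not the real OpenVPN config):

```shell
# Hash the first path that exists instead of crashing on a missing one,
# mirroring the unguarded open('/etc/openvpn/tun0') in the traceback.
hash_first_existing() {
  for f in "$@"; do
    if [ -f "$f" ]; then
      md5sum "$f" | cut -d' ' -f1
      return 0
    fi
  done
  echo "none"
}

# Demo with a nonexistent first path and a real temp file as fallback.
tmp="$(mktemp)"
printf 'dev tun0\n' > "$tmp"
result="$(hash_first_existing /nonexistent/tun0 "$tmp")"
printf '%s\n' "$result"
rm -f "$tmp"
```

The same guard-then-fallback shape would apply in the Python code (check os.path.exists, or catch FileNotFoundError).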


And also these curious errors:

Dec 14 21:06:44 vps omr-service[2206466]: Error: Nexthop has invalid gateway.

Dec 14 21:20:05 vps ss-server[777]: getpeername: Transport endpoint is not connected


As well as hundreds of these; as you can see, several per second:

Dec 14 21:06:44 vps ss-server[1389]: server recv: Connection reset by peer
Dec 14 21:06:44 vps ss-server[1389]: server recv: Connection reset by peer
Dec 14 21:06:45 vps ss-server[1389]: server recv: Connection reset by peer
Dec 14 21:06:45 vps ss-server[1389]: server recv: Connection reset by peer
Dec 14 21:06:45 vps ss-server[1389]: server recv: Connection reset by peer
Dec 14 21:06:46 vps ss-server[1389]: server recv: Connection reset by peer
Dec 14 21:06:46 vps ss-server[1389]: server recv: Connection reset by peer
Dec 14 21:06:47 vps ss-server[1389]: server recv: Connection reset by peer
Dec 14 21:06:47 vps ss-server[1389]: server recv: Connection reset by peer
Dec 14 21:06:47 vps ss-server[1389]: server recv: Connection reset by peer

Ysurac commented 6 months ago

The omr-admin error was fixed a few weeks ago; run the VPS snapshot install script again. The ss-server errors are due to a connection cancelled from the router side. For multipath, I've added more logging in the latest snapshot that may help find why you have this on/off cycle.

SonicFM commented 6 months ago

Hi,

I updated the router and VPS to the latest snapshot releases, sadly without improvement. Traffic is still going over only one of my two WANs.

What am I missing? :/

Hope you had a wonderful Christmas! All the best for the upcoming year!

Ysurac commented 6 months ago

Did you try to change the master interface? Did you try to set MPTCP over VPN on one interface?

SonicFM commented 6 months ago

I have tried both; nothing changed, except that after changing the master interface the traffic now runs over that WAN (as expected). But still no aggregation of the two WANs.

With 5.4 kernel on router and VPS everything was fine.

github-actions[bot] commented 3 months ago

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days

SonicFM commented 3 months ago

Hello,

after updating OMR and VPS to 6.1rc2, omr-bypass finally works.

But the main problem still persists: I cannot achieve aggregation. Traffic is ONLY going through one WAN at a time.

I cannot find the cause of this. Can you please help me find it?

WAN1 -> Metric=3: the WAN the traffic is going through.
WAN2 -> Metric=4: the WAN nothing is going through.

However, Network -> MPTCP -> MPTCP Fullmesh is showing:
192.168.2.2 id 33 subflow fullmesh dev eth4
192.168.3.2 id 34 subflow fullmesh dev eth3

Where "eth3" is WAN2 and "eth4" is WAN1.

Shouldn't "id" be equivalent to the metric of the WANs?

Ysurac commented 3 months ago

id is not related to the metric. Did you try a fresh router config?

SonicFM commented 3 months ago

Ok, thank you.

Yes, I did. I already reinstalled OMR + VPS without importing an old config.

Ysurac commented 3 months ago

What is the result of omr-test-speed, omr-test-speed eth3 and omr-test-speed eth4 via SSH on the router (you can stop after 2 minutes with Ctrl+C)? Is everything green on the Status page? Can you post a screenshot of it?

SonicFM commented 3 months ago

Screenshots as requested.

However, the behavior of omr-test-speed is absolutely strange...

Both tests for eth3 and eth4 don't even start downloading. And plain omr-test-speed starts but immediately stalls; the average download rate drops toward zero over time...

But if I download https://nbg1-speed.hetzner.com/1GB.bin via browser, traffic goes over WAN1 at full speed. Still nothing on WAN2.

The Status page is all green and the internet in general works, yet only over one WAN. And in Network -> MPTCP -> Established Connections, everything shows only 192.168.2.2 (WAN1, eth4).

(Screenshots: OMR1, OMR2, OMR3, OMR4)

github-actions[bot] commented 5 days ago

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days