The error also occurs if the DSL interface is set as master and STARLINK is set as backup/enabled.
If a connection has the backup flag in multipath, it will only be used if no other connection is active. If you want both to be used simultaneously, you should leave it as enabled.
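For reference, this maps to the per-interface "multipath" option in the router's network configuration. A minimal sketch in UCI syntax as used by OpenWrt/OpenMPTCProuter; the interface names wan1/wan2 are placeholders for the DSL and Starlink WANs:

# /etc/config/network (excerpt)
config interface 'wan1'
        # primary MPTCP path
        option multipath 'master'
config interface 'wan2'
        # 'on' (enabled) aggregates with the master;
        # 'backup' would only carry traffic when no other WAN is active
        option multipath 'on'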
Thank you for your prompt reply! The same issue appears when selecting WAN1 as master and WAN2 as enabled:
In the latest test, both connections seem to be used at first, then only one. Can you try the test again?
It appears that only my Starlink is being used:
This is a speedtest of the DSL:
Is there any way we can debug why this behavior appears?
What do you have in Network->MPTCP, "Fullmesh" tab? And in the "MPTCP monitoring" tab when you try a speedtest? Each connection should also be in its own subnet.
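If the build ships the upstream kernel MPTCP stack (an assumption; older OpenMPTCProuter releases use the multipath-tcp.org kernel, where these commands do not exist), the subflow state can also be checked from the router's shell:

ip mptcp endpoint show   # each WAN IP should appear as an MPTCP endpoint
ip mptcp limits show     # subflow limits must be non-zero for extra subflows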
The speedtest from the screenshot above isn't using the OpenMPTCProuter as gateway (only the DSL/WAN1 modem).
These are my settings and a speedtest using the OpenMPTCProuter as gateway. In my eyes, the speed is too slow; WAN1 alone should be much faster than what the Bandwidth tab shows. 100 Mbps (DSL/WAN1) + 50 Mbps (STARLINK/WAN2) = 150 Mbps expected, yet only 64 Mbps is measured.
Speedtest is not always a good way to test OpenMPTCProuter's real speed (it sends small packets). You can try nperf.com or "omr-test-speed" via SSH on the router. Check that Shadowsocks-Rust is used as the proxy in System->OpenMPTCProuter, "Wizard" tab, "Advanced settings" checkbox. Also check the encryption used; this can give really bad performance if AES encryption is set but AES-NI is not available.
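A quick way to check for AES-NI on the VPS is to look for the "aes" flag in /proc/cpuinfo; if it is missing, a stream cipher such as chacha20-ietf-poly1305 is usually the better choice (a general rule of thumb, not an OMR-specific setting):

grep -qw aes /proc/cpuinfo && echo "AES-NI available" || echo "no AES-NI: prefer chacha20-ietf-poly1305"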
After switching to Shadowsocks-Rust, there is only an error: "Can't get public IP address from ShadowSocks Rust"
Is there any way to check if the script has installed Shadowsocks-Rust correctly?
On the VPS, Shadowsocks-go is used as the server.
You can check if it's running on the VPS using ps aux | grep shadowsocks-go
If you have a firewall on the provider side, check that port 65280 is open.
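Assuming the OMR install script set shadowsocks-go up as a systemd service (the unit name below is an assumption and may differ), these commands on the VPS narrow the problem down:

ps aux | grep '[s]hadowsocks-go'               # is the process running at all?
systemctl status shadowsocks-go                # service state and last error
journalctl -u shadowsocks-go -n 20 --no-pager  # recent log lines
ss -tlnp | grep 65280                          # is anything listening on the port?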
The service keeps stopping due to "This program can only be run on AMD64 processors with v2 microarchitecture support." Do you know if there is any workaround?
root@vps:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 40 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: AuthenticAMD
Model name: QEMU Virtual CPU version 2.5+
CPU family: 15
Model: 107
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 1
BogoMIPS: 8434.15
Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl cpuid extd_apicid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm cmp_legacy 3dnowprefetch vmmcall
Virtualization features:
Hypervisor vendor: KVM
Virtualization type: full
Caches (sum of all):
L1d: 256 KiB (4 instances)
L1i: 256 KiB (4 instances)
L2: 2 MiB (4 instances)
L3: 64 MiB (4 instances)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Retbleed: Not affected
Spec rstack overflow: Not affected
Spec store bypass: Not affected
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Srbds: Not affected
Tsx async abort: Not affected
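The flags line above lacks popcnt, sse4_1, sse4_2 and ssse3, which the x86-64-v2 level requires. A quick check against the v2 baseline (the flag list below is the generic v2 feature set, nothing OMR-specific):

for f in cx16 lahf_lm popcnt sse4_1 sse4_2 ssse3; do
  grep -qw "$f" /proc/cpuinfo && echo "$f: present" || echo "$f: MISSING"
done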
x86-64-v2 processors date from about 2009; the CPU emulated here is older than that. Do you have access to the host? If it's Proxmox, set it to use the host CPU type instead of kvm64 or anything else.
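In Proxmox that is Hardware -> Processors -> Type -> "host" in the GUI, or from the host shell (the VM ID 100 is a placeholder; the guest needs a full stop/start afterwards, not just a reboot, for the new CPU type to apply):

qm set 100 --cpu host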
Switched the Proxmox CPU type to host. Now Shadowsocks-Rust is working!
root@vps:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5900X 12-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 2
BogoMIPS: 8434.15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt nrip_save umip pku ospke vaes vpclmulqdq rdpid fsrm arch_capabilities
Virtualization features:
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
Caches (sum of all):
L1d: 256 KiB (4 instances)
L1i: 256 KiB (4 instances)
L2: 2 MiB (4 instances)
L3: 64 MiB (4 instances)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Retbleed: Not affected
Spec rstack overflow: Mitigation; safe RET, no microcode
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Srbds: Not affected
Tsx async abort: Not affected
Unfortunately, the router is still not using the full potential of both uplinks.
root@OpenMPTCProuter:~# omr-test-speed
Select best test server...
host: scaleway.testdebit.info - ping: 35
host: bordeaux.testdebit.info - ping:
host: aix-marseille.testdebit.info - ping:
host: lyon.testdebit.info - ping: 39
host: lille.testdebit.info - ping:
host: paris.testdebit.info - ping: 46
host: appliwave.testdebit.info - ping: 49
host: speedtest.frankfurt.linode.com - ping: 21
host: speedtest.tokyo2.linode.com - ping: 298
host: speedtest.singapore.linode.com - ping: 350
host: speedtest.newark.linode.com - ping: 118
host: speedtest.atlanta.linode.com - ping: 150
host: speedtest.dallas.linode.com - ping: 142
host: speedtest.fremont.linode.com - ping: 214
host: speed.hetzner.de - ping: 39
host: ipv4.bouygues.testdebit.info - ping: 45
host: par.download.datapacket.com - ping: 118
host: nyc.download.datapacket.com - ping: 119
host: ams.download.datapacket.com - ping: 39
host: fra.download.datapacket.com - ping: 31
host: lon.download.datapacket.com - ping: 119
host: mad.download.datapacket.com - ping:
host: prg.download.datapacket.com - ping: 46
host: sto.download.datapacket.com - ping: 126
host: vie.download.datapacket.com - ping: 55
host: war.download.datapacket.com - ping: 55
host: atl.download.datapacket.com - ping: 135
host: chi.download.datapacket.com - ping: 143
host: lax.download.datapacket.com - ping: 200
host: mia.download.datapacket.com - ping: 144
host: nyc.download.datapacket.com - ping: 109
host: speedtest.milkywan.fr - ping: 47
Best server is http://speedtest.frankfurt.linode.com/garbage.php?ckSize=10000, running test:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  222M    0  222M    0     0  5006k      0 --:--:--  0:00:45 --:--:-- 6147k
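For scale: curl reports speeds in bytes per second, so the 5006k average is roughly 5006 kB/s × 8 ≈ 40 Mbit/s (peak 6147k ≈ 49 Mbit/s), still well below the ~150 Mbit/s the two links should deliver together.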
Expected Behavior
When downloading a test file (https://nbg1-speed.hetzner.com/10GB.bin), both WAN connections, DSL (100 Mbps) and STARLINK (50-200 Mbps), should be used.
Current Behavior
OpenMPTCProuter uses only one WAN connection. Packets are not split between connections; see the counter check below.
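One way to confirm whether traffic is actually split is to watch the per-WAN byte counters on the router while the download runs; a sketch, assuming eth1/eth2 as placeholder names for the two WAN ports:

watch -n1 'ip -s link show dev eth1; ip -s link show dev eth2'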
Specifications