angt / glorytun

Multipath UDP tunnel
BSD 2-Clause "Simplified" License

Zero rx rate on server #59

Closed · legolas108 closed this issue 4 years ago

legolas108 commented 4 years ago

Using the latest version 0.3.1 to bond 3 4G LTE lines; the local and cloud machines are both minimal Ubuntu 18.04 servers.

The tunnel seems to start OK, but when I attempt to download a file the transfer quickly slows to a halt. glorytun path on the cloud server (IP 11.22.33.44) then displays:

path UP
  status:  OK
  bind:    11.22.33.44 port 5000
  public:  11.22.33.44 port 5000
  peer:    12.34.56.78 port 11358
  mtu:     1400 bytes
  rtt:     86.420 ms
  rttvar:  6.151 ms
  rate:    fixed
  beat:    100 ms
  tx:
    rate:  1000000 bytes/sec
    loss:  0 percent
    total: 468 packets
  rx:
    rate:  0 bytes/sec
    loss:  0 percent
    total: 409 packets
path UP
  status:  OK
  bind:    11.22.33.44 port 5000
  public:  11.22.33.44 port 5000
  peer:    23.45.67.89 port 16041
  mtu:     1400 bytes
  rtt:     88.281 ms
  rttvar:  4.608 ms
  rate:    fixed
  beat:    100 ms
  tx:
    rate:  1000000 bytes/sec
    loss:  0 percent
    total: 526 packets
  rx:
    rate:  0 bytes/sec
    loss:  0 percent
    total: 415 packets
path UP
  status:  OK
  bind:    11.22.33.44 port 5000
  public:  11.22.33.44 port 5000
  peer:    34.56.78.90 port 4329
  mtu:     1400 bytes
  rtt:     90.592 ms
  rttvar:  7.641 ms
  rate:    fixed
  beat:    100 ms
  tx:
    rate:  1000000 bytes/sec
    loss:  0 percent
    total: 613 packets
  rx:
    rate:  0 bytes/sec
    loss:  0 percent
    total: 1121 packets

And on the local server glorytun path displays:

path UP
  status:  OK
  bind:    192.168.42.10 port 5000
  public:  12.34.56.78 port 11358
  peer:    11.22.33.44 port 5000
  mtu:     1400 bytes
  rtt:     79.871 ms
  rttvar:  6.916 ms
  rate:    fixed
  beat:    100 ms
  tx:
    rate:  2136 bytes/sec
    loss:  0 percent
    total: 379 packets
  rx:
    rate:  1000000 bytes/sec
    loss:  0 percent
    total: 437 packets
path UP
  status:  OK
  bind:    192.168.42.11 port 5000
  public:  23.45.67.89 port 16041
  peer:    11.22.33.44 port 5000
  mtu:     1400 bytes
  rtt:     83.677 ms
  rttvar:  4.887 ms
  rate:    fixed
  beat:    100 ms
  tx:
    rate:  2171 bytes/sec
    loss:  0 percent
    total: 376 packets
  rx:
    rate:  1000000 bytes/sec
    loss:  0 percent
    total: 493 packets
path UP
  status:  OK
  bind:    192.168.42.12 port 5000
  public:  34.56.78.90 port 4329
  peer:    11.22.33.44 port 5000
  mtu:     1400 bytes
  rtt:     92.301 ms
  rttvar:  9.818 ms
  rate:    fixed
  beat:    100 ms
  tx:
    rate:  2223 bytes/sec
    loss:  0 percent
    total: 1081 packets
  rx:
    rate:  1000000 bytes/sec
    loss:  0 percent
    total: 575 packets

Cloud server commands to start the tunnel:

#!/bin/bash
gt=/usr/local/bin/glorytun

ip tuntap add tun0 mode tun
ip addr add 10.80.1.1/30 peer 10.80.1.2/30 dev tun0
ip link set tun0 up
$gt bind dev tun0 keyfile /etc/gt.key &
sleep 2.0

$gt show

ip route add 192.168.1.0/24 via 10.80.1.2

Local server commands to start the tunnel:

#!/bin/bash
nCon=3

gt=/usr/local/bin/glorytun

ip tuntap add tun0 mode tun
ip addr add 10.80.1.2/30 peer 10.80.1.1/30 dev tun0
ip link set tun0 up
$gt bind to 11.22.33.44 dev tun0 keyfile /etc/gt.key &
sleep 2

$gt show

for n in `seq 0 1 $((nCon - 1))`; do
  n0=$(printf "%02d" ${n})
  n1=$((n + 10))

  ip addr add 192.168.42.${n1}/24 dev ens7f${n}
  ip link set ens7f${n} up

  ip route add 192.168.42.${n1} dev ens7f${n} scope link table wl${n0}
  ip route add default via 192.168.42.129 dev ens7f${n} table wl${n0}

  ip rule add pref ${n1} from 192.168.42.${n1} table wl${n0}
  ip rule add pref ${n1} to 192.168.42.${n1} table wl${n0}

  $gt path up 192.168.42.${n1} rate tx 2mbit rx 8mbit
  sleep 2
done

ip route repl default via 10.80.1.1 dev tun0
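For anyone reproducing this setup, the per-line policy routing built by the loop can be inspected after the script has run (a diagnostic sketch I added, not part of the original report; table and interface names follow the script above, and root is required):

```shell
# Inspect the routing state created by the loop above.
ip rule show               # expect prefs 10..12 with from/to rules pointing at tables wl00..wl02
ip route show table wl00   # expect the link route plus "default via 192.168.42.129 dev ens7f0"
glorytun path              # each path should be UP and report its configured fixed rates
```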

Cloud server firewall:

root@cloud-server:~# iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
root@cloud-server:~# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-A POSTROUTING ! -d 11.22.33.44/32 -o ens3 -j SNAT --to-source 11.22.33.44
root@cloud-server:~# iptables -t mangle -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT

Local server firewall:

root@local-server:~# iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
root@local-server:~# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-A POSTROUTING -o ens7f+ -j MASQUERADE
root@local-server:~# iptables -t mangle -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT

I would be very grateful for a hint about where I'm going wrong here!

angt commented 4 years ago

Hi! If the tunnel says it's OK, chances are that the problem is somewhere else. What happens if you simply run an iperf3 between 10.80.1.1 and 10.80.1.2?

legolas108 commented 4 years ago

Thanks much for helping!

After re-installing the local server on a bare-metal box (it was a VM before) and using the built-in USB ports (an external hub before), I started the tunnel and began downloading a big file, which again quickly slowed to a halt. At this point we get the following on the cloud server:

root@cloud-server:~# glorytun path
path UP
  status:  OK
  bind:    11.22.33.44 port 5000
  public:  11.22.33.44 port 5000
  peer:    12.34.56.78 port 11353
  mtu:     1400 bytes
  rtt:     84.598 ms
  rttvar:  4.889 ms
  rate:    fixed
  beat:    100 ms
  tx:
    rate:  1000000 bytes/sec
    loss:  0 percent
    total: 1646 packets
  rx:
    rate:  0 bytes/sec
    loss:  0 percent
    total: 493 packets
path UP
  status:  OK
  bind:    11.22.33.44 port 5000
  public:  11.22.33.44 port 5000
  peer:    23.45.67.89 port 16056
  mtu:     1400 bytes
  rtt:     90.587 ms
  rttvar:  1.539 ms
  rate:    fixed
  beat:    100 ms
  tx:
    rate:  1000000 bytes/sec
    loss:  0 percent
    total: 1598 packets
  rx:
    rate:  0 bytes/sec
    loss:  0 percent
    total: 2189 packets
path UP
  status:  OK
  bind:    11.22.33.44 port 5000
  public:  11.22.33.44 port 5000
  peer:    34.56.78.90 port 4333
  mtu:     1400 bytes
  rtt:     93.156 ms
  rttvar:  5.032 ms
  rate:    fixed
  beat:    100 ms
  tx:
    rate:  1000000 bytes/sec
    loss:  0 percent
    total: 1535 packets
  rx:
    rate:  0 bytes/sec
    loss:  0 percent
    total: 1912 packets

And for the local server:

root@local-server:~# glorytun path
path UP
  status:  OK
  bind:    192.168.42.10 port 5000
  public:  12.34.56.78 port 11353
  peer:    11.22.33.44 port 5000
  mtu:     1400 bytes
  rtt:     83.239 ms
  rttvar:  9.336 ms
  rate:    fixed
  beat:    100 ms
  tx:
    rate:  2202 bytes/sec
    loss:  0 percent
    total: 438 packets
  rx:
    rate:  1000000 bytes/sec
    loss:  0 percent
    total: 1601 packets
path UP
  status:  OK
  bind:    192.168.42.11 port 5000
  public:  23.45.67.89 port 16056
  peer:    11.22.33.44 port 5000
  mtu:     1400 bytes
  rtt:     87.700 ms
  rttvar:  7.947 ms
  rate:    fixed
  beat:    100 ms
  tx:
    rate:  2119 bytes/sec
    loss:  0 percent
    total: 2155 packets
  rx:
    rate:  1000000 bytes/sec
    loss:  0 percent
    total: 1560 packets
path UP
  status:  OK
  bind:    192.168.42.12 port 5000
  public:  34.56.78.90 port 4333
  peer:    11.22.33.44 port 5000
  mtu:     1400 bytes
  rtt:     88.153 ms
  rttvar:  9.438 ms
  rate:    fixed
  beat:    100 ms
  tx:
    rate:  2139 bytes/sec
    loss:  0 percent
    total: 1867 packets
  rx:
    rate:  1000000 bytes/sec
    loss:  0 percent
    total: 1498 packets

Cloud server firewall:

root@cloud-server:~# iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5000 -j ACCEPT
-A INPUT -p udp -m udp --dport 5000 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5201 -j ACCEPT
-A INPUT -i tun0 -j ACCEPT
-A FORWARD -i tun0 -o ens3 -j ACCEPT
-A FORWARD -i ens3 -o tun0 -m state --state RELATED,ESTABLISHED -j ACCEPT
root@cloud-server:~# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-A POSTROUTING -s 10.80.2.0/24 -o ens3 -j MASQUERADE

(Tunnel addresses have been adjusted from 10.80.1.* to 10.80.2.*.)

Local server firewall:

root@local-server:~# iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i tun0 -j ACCEPT
-A FORWARD -i tun0 -o enp4s0 -j ACCEPT
-A FORWARD -i enp4s0 -o tun0 -j ACCEPT
root@local-server:~# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-A POSTROUTING -o usb+ -j MASQUERADE

And here iperf3 output for the cloud server:

root@cloud-server:~# iperf3 -s -f K
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.80.2.2, port 35250
[  5] local 11.22.33.44 port 5201 connected to 10.80.2.2 port 35252
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  0.00 Bytes  0.00 KBytes/sec                  
[  5]   1.00-2.00   sec  0.00 Bytes  0.00 KBytes/sec                  
[  5]   2.00-3.00   sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]   3.00-4.00   sec  0.00 Bytes  0.00 KBytes/sec                  
[  5]   4.00-5.00   sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]   5.00-6.00   sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]   6.00-7.00   sec  0.00 Bytes  0.00 KBytes/sec                  
[  5]   7.00-8.00   sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]   8.00-9.00   sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]   9.00-10.00  sec  0.00 Bytes  0.00 KBytes/sec                  
[  5]  10.00-11.00  sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]  11.00-12.00  sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]  12.00-13.00  sec  1.29 KBytes  1.29 KBytes/sec                  
[  5]  13.00-14.00  sec  2.59 KBytes  2.59 KBytes/sec                  
[  5]  14.00-15.00  sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]  15.00-16.00  sec  2.59 KBytes  2.59 KBytes/sec                  
[  5]  16.00-17.00  sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]  17.00-18.00  sec  1.29 KBytes  1.29 KBytes/sec                  
[  5]  18.00-19.00  sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]  19.00-20.00  sec  3.88 KBytes  3.88 KBytes/sec                  
[  5]  20.00-21.00  sec  0.00 Bytes  0.00 KBytes/sec                  
[  5]  21.00-22.00  sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]  22.00-23.00  sec  2.59 KBytes  2.59 KBytes/sec                  
[  5]  23.00-24.00  sec  1.29 KBytes  1.29 KBytes/sec                  
[  5]  24.00-25.00  sec  0.00 Bytes  0.00 KBytes/sec                  
[  5]  25.00-26.00  sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]  26.00-27.00  sec  1.29 KBytes  1.29 KBytes/sec                  
[  5]  27.00-28.00  sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]  28.00-29.00  sec  0.00 Bytes  0.00 KBytes/sec                  
[  5]  29.00-30.00  sec  1.29 KBytes  1.30 KBytes/sec                  
[  5]  30.00-31.00  sec  1.29 KBytes  1.29 KBytes/sec                  
[  5]  31.00-32.00  sec  1.29 KBytes  1.29 KBytes/sec                  
[  5]  32.00-33.00  sec  0.00 Bytes  0.00 KBytes/sec                  
[  5]  33.00-34.00  sec  2.59 KBytes  2.59 KBytes/sec                  
[  5]  34.00-35.00  sec  1.29 KBytes  1.29 KBytes/sec                  
[  5]  35.00-35.87  sec  0.00 Bytes  0.00 KBytes/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-35.87  sec  0.00 Bytes  0.00 KBytes/sec                  sender
[  5]   0.00-35.87  sec  41.4 KBytes  1.16 KBytes/sec                  receiver

And for the local server:

root@local-server:~# iperf3 -c 11.22.33.44 -f K -d -t 15
send_parameters:
{
        "tcp":  true,
        "omit": 0,
        "time": 15,
        "parallel":     1,
        "len":  131072,
        "client_version":       "3.1.3"
}
Connecting to host 11.22.33.44, port 5201
SO_SNDBUF is 16384
[  4] local 10.80.2.2 port 35252 connected to 11.22.33.44 port 5201
tcpi_snd_cwnd 10 tcpi_snd_mss 1326 tcpi_rtt 2434901
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.48   sec  44.0 KBytes  29.7 KBytes/sec    1   12.9 KBytes       
tcpi_snd_cwnd 10 tcpi_snd_mss 1326 tcpi_rtt 2434901
[  4]   1.48-2.00   sec  0.00 Bytes  0.00 KBytes/sec    0   12.9 KBytes       
tcpi_snd_cwnd 11 tcpi_snd_mss 1326 tcpi_rtt 2307645
[  4]   2.00-3.00   sec  2.59 KBytes  2.59 KBytes/sec    0   14.2 KBytes       
tcpi_snd_cwnd 11 tcpi_snd_mss 1326 tcpi_rtt 2307645
[  4]   3.00-4.00   sec  0.00 Bytes  0.00 KBytes/sec    0   14.2 KBytes       
tcpi_snd_cwnd 12 tcpi_snd_mss 1326 tcpi_rtt 2370298
[  4]   4.00-5.00   sec  2.59 KBytes  2.59 KBytes/sec    0   15.5 KBytes       
tcpi_snd_cwnd 13 tcpi_snd_mss 1326 tcpi_rtt 2625118
[  4]   5.00-6.00   sec  2.59 KBytes  2.59 KBytes/sec    0   16.8 KBytes       
tcpi_snd_cwnd 13 tcpi_snd_mss 1326 tcpi_rtt 2625118
[  4]   6.00-7.00   sec  0.00 Bytes  0.00 KBytes/sec    0   16.8 KBytes       
tcpi_snd_cwnd 14 tcpi_snd_mss 1326 tcpi_rtt 3013097
[  4]   7.00-8.00   sec  2.59 KBytes  2.59 KBytes/sec    0   18.1 KBytes       
tcpi_snd_cwnd 15 tcpi_snd_mss 1326 tcpi_rtt 3558173
[  4]   8.00-9.00   sec  2.59 KBytes  2.59 KBytes/sec    0   19.4 KBytes       
tcpi_snd_cwnd 15 tcpi_snd_mss 1326 tcpi_rtt 3558173
[  4]   9.00-10.00  sec  0.00 Bytes  0.00 KBytes/sec    0   19.4 KBytes       
tcpi_snd_cwnd 16 tcpi_snd_mss 1326 tcpi_rtt 4221864
[  4]  10.00-11.00  sec  2.59 KBytes  2.59 KBytes/sec    0   20.7 KBytes       
tcpi_snd_cwnd 17 tcpi_snd_mss 1326 tcpi_rtt 4954449
[  4]  11.00-12.00  sec  2.59 KBytes  2.59 KBytes/sec    0   22.0 KBytes       
tcpi_snd_cwnd 18 tcpi_snd_mss 1326 tcpi_rtt 5713352
[  4]  12.00-13.00  sec  2.59 KBytes  2.59 KBytes/sec    0   23.3 KBytes       
tcpi_snd_cwnd 20 tcpi_snd_mss 1326 tcpi_rtt 7207204
[  4]  13.00-14.00  sec  5.18 KBytes  5.18 KBytes/sec    0   25.9 KBytes       
tcpi_snd_cwnd 21 tcpi_snd_mss 1326 tcpi_rtt 7812660
send_results
{
        "cpu_util_total":       20.223630,
        "cpu_util_user":        6.617533,
        "cpu_util_system":      13.606092,
        "sender_has_retransmits":       1,
        "streams":      [{
                        "id":   1,
                        "bytes":        74256,
                        "retransmits":  1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }]
}
get_results
{
        "cpu_util_total":       0.005333,
        "cpu_util_user":        0,
        "cpu_util_system":      0.005338,
        "sender_has_retransmits":       -1,
        "streams":      [{
                        "id":   1,
                        "bytes":        42432,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }]
}
[  4]  14.00-15.00  sec  2.59 KBytes  2.59 KBytes/sec    0   27.2 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-15.00  sec  72.5 KBytes  4.83 KBytes/sec    1             sender
[  4]   0.00-15.00  sec  41.4 KBytes  2.76 KBytes/sec                  receiver

iperf Done.

For comparison, a single 4G line usually downloads at about 1.3 MBytes/s, and upload is not much less.


angt commented 4 years ago

Hello. Your paths do not seem to be configured as in your script: since the rate is fixed, this value should not change.
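To make the mismatch concrete (a sanity check I added, not part of the original exchange): the script configures each path with rate tx 2mbit rx 8mbit, yet the local server displays a tx rate of only ~2,100 bytes/sec and the cloud server shows the 1,000,000 bytes/sec figure. A quick unit conversion shows what the fixed values should be:

```python
# Convert the script's configured megabit-per-second rates into the
# bytes-per-second units that `glorytun path` displays.
def mbit_to_bytes_per_sec(mbit):
    return mbit * 1_000_000 // 8

print(mbit_to_bytes_per_sec(2))  # configured tx: 250000 bytes/sec
print(mbit_to_bytes_per_sec(8))  # configured rx: 1000000 bytes/sec
```

So the displayed tx rate of ~2,100 bytes/sec is nowhere near the fixed 250,000 bytes/sec that "rate tx 2mbit" should pin it to.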

angt commented 4 years ago

Ok, my fault :/

angt commented 4 years ago

This should be fixed by https://github.com/angt/mud/commit/2f966bb3652b44c13a40b88d20c7bbcafad065f7 in the latest release -> https://github.com/angt/glorytun/releases/tag/v0.3.2

legolas108 commented 4 years ago

Yes, it definitely works now, no more slowing down to a halt. Thanks so much for the prompt action!

But unfortunately there's a "but": the aggregated speed never seems to get close to the sum of what the individual lines contribute. As individual speeds vary quite a bit on our 4G lines it's difficult to be more precise, but roughly speaking, the aggregated speed can be 20% more than any individual speed, and sometimes it seems to be even less than that.

Another iperf3 run on the cloud server with 3 lines aggregated:

root@cloud-server:~# iperf3 -s -f K
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.80.2.2, port 36630
[  5] local 11.22.33.44 port 5201 connected to 10.80.2.2 port 36632
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   467 KBytes   467 KBytes/sec                  
[  5]   1.00-2.00   sec   712 KBytes   712 KBytes/sec                  
[  5]   2.00-3.00   sec   778 KBytes   778 KBytes/sec                  
[  5]   3.00-4.00   sec   882 KBytes   882 KBytes/sec                  
[  5]   4.00-5.00   sec   963 KBytes   963 KBytes/sec                  
[  5]   5.00-6.00   sec   961 KBytes   961 KBytes/sec                  
[  5]   6.00-7.00   sec   997 KBytes   997 KBytes/sec                  
[  5]   7.00-8.00   sec   716 KBytes   716 KBytes/sec                  
[  5]   8.00-9.00   sec   679 KBytes   679 KBytes/sec                  
[  5]   9.00-10.00  sec   783 KBytes   783 KBytes/sec                  
[  5]  10.00-11.00  sec   903 KBytes   903 KBytes/sec                  
[  5]  11.00-12.00  sec   791 KBytes   791 KBytes/sec                  
[  5]  12.00-13.00  sec   895 KBytes   895 KBytes/sec                  
[  5]  13.00-14.00  sec   817 KBytes   817 KBytes/sec                  
[  5]  14.00-15.00  sec   813 KBytes   813 KBytes/sec                  
[  5]  15.00-15.21  sec   180 KBytes   859 KBytes/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-15.21  sec  0.00 Bytes  0.00 KBytes/sec                  sender
[  5]   0.00-15.21  sec  12.0 MBytes   811 KBytes/sec                  receiver

And the corresponding iperf3 on the local server:

root@local-server:~# iperf3 -c 11.22.33.44 -f K -d -t 15
send_parameters:
{
        "tcp":  true,
        "omit": 0,
        "time": 15,
        "parallel":     1,
        "len":  131072,
        "client_version":       "3.1.3"
}
Connecting to host 11.22.33.44, port 5201
SO_SNDBUF is 16384
[  4] local 10.80.2.2 port 36632 connected to 11.22.33.44 port 5201
tcpi_snd_cwnd 50 tcpi_snd_mss 1326 tcpi_rtt 93887
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   684 KBytes   684 KBytes/sec    5   64.7 KBytes       
tcpi_snd_cwnd 55 tcpi_snd_mss 1326 tcpi_rtt 103009
[  4]   1.00-2.00   sec   755 KBytes   755 KBytes/sec    1   71.2 KBytes       
tcpi_snd_cwnd 60 tcpi_snd_mss 1326 tcpi_rtt 95804
[  4]   2.00-3.00   sec   754 KBytes   754 KBytes/sec    5   77.7 KBytes       
tcpi_snd_cwnd 66 tcpi_snd_mss 1326 tcpi_rtt 90040
[  4]   3.00-4.00   sec   936 KBytes   936 KBytes/sec    0   85.5 KBytes       
tcpi_snd_cwnd 71 tcpi_snd_mss 1326 tcpi_rtt 87286
[  4]   4.00-5.00   sec   943 KBytes   943 KBytes/sec    2   91.9 KBytes       
tcpi_snd_cwnd 76 tcpi_snd_mss 1326 tcpi_rtt 97065
[  4]   5.00-6.00   sec  1.00 MBytes  1027 KBytes/sec    0   98.4 KBytes       
tcpi_snd_cwnd 82 tcpi_snd_mss 1326 tcpi_rtt 115034
[  4]   6.00-7.00   sec   996 KBytes   996 KBytes/sec    0    106 KBytes       
tcpi_snd_cwnd 79 tcpi_snd_mss 1326 tcpi_rtt 125330
[  4]   7.00-8.00   sec   767 KBytes   767 KBytes/sec  100    102 KBytes       
tcpi_snd_cwnd 83 tcpi_snd_mss 1326 tcpi_rtt 129313
[  4]   8.00-9.00   sec   698 KBytes   698 KBytes/sec  113    107 KBytes       
tcpi_snd_cwnd 83 tcpi_snd_mss 1326 tcpi_rtt 129273
[  4]   9.00-10.00  sec   799 KBytes   799 KBytes/sec   77    107 KBytes       
tcpi_snd_cwnd 83 tcpi_snd_mss 1326 tcpi_rtt 115223
[  4]  10.00-11.00  sec   912 KBytes   912 KBytes/sec   37    107 KBytes       
tcpi_snd_cwnd 84 tcpi_snd_mss 1326 tcpi_rtt 112808
[  4]  11.00-12.00  sec   774 KBytes   774 KBytes/sec   62    109 KBytes       
tcpi_snd_cwnd 85 tcpi_snd_mss 1326 tcpi_rtt 141592
[  4]  12.00-13.00  sec   908 KBytes   908 KBytes/sec   21    110 KBytes       
tcpi_snd_cwnd 85 tcpi_snd_mss 1326 tcpi_rtt 138750
[  4]  13.00-14.00  sec   825 KBytes   825 KBytes/sec  100    110 KBytes       
tcpi_snd_cwnd 79 tcpi_snd_mss 1326 tcpi_rtt 95439
send_results
{
        "cpu_util_total":       1.215329,
        "cpu_util_user":        0.500171,
        "cpu_util_system":      0.715151,
        "sender_has_retransmits":       1,
        "streams":      [{
                        "id":   1,
                        "bytes":        12923196,
                        "retransmits":  589,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }]
}
get_results
{
        "cpu_util_total":       0.375594,
        "cpu_util_user":        0.063670,
        "cpu_util_system":      0.311923,
        "sender_has_retransmits":       -1,
        "streams":      [{
                        "id":   1,
                        "bytes":        12634128,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }]
}
[  4]  14.00-15.00  sec   844 KBytes   844 KBytes/sec   66    102 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-15.00  sec  12.3 MBytes   841 KBytes/sec  589             sender
[  4]   0.00-15.00  sec  12.0 MBytes   823 KBytes/sec                  receiver

iperf Done.

The following is the iperf3 output when aggregating only a single line. First for the cloud server:

root@cloud-server:~# iperf3 -s -f K
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.80.2.2, port 36638
[  5] local 11.22.33.44 port 5201 connected to 10.80.2.2 port 36640
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   576 KBytes   576 KBytes/sec                  
[  5]   1.00-2.00   sec  1005 KBytes  1005 KBytes/sec                  
[  5]   2.00-3.00   sec   982 KBytes   982 KBytes/sec                  
[  5]   3.00-4.00   sec  1.01 MBytes  1033 KBytes/sec                  
[  5]   4.00-5.00   sec  1.27 MBytes  1303 KBytes/sec                  
[  5]   5.00-6.00   sec  1.17 MBytes  1202 KBytes/sec                  
[  5]   6.00-7.00   sec  1022 KBytes  1021 KBytes/sec                  
[  5]   7.00-8.00   sec  1.36 MBytes  1394 KBytes/sec                  
[  5]   8.00-9.00   sec  1.17 MBytes  1195 KBytes/sec                  
[  5]   9.00-10.00  sec  1.15 MBytes  1178 KBytes/sec                  
[  5]  10.00-11.00  sec   936 KBytes   936 KBytes/sec                  
[  5]  11.00-12.00  sec  1.12 MBytes  1145 KBytes/sec                  
[  5]  12.00-13.00  sec  1.06 MBytes  1090 KBytes/sec                  
[  5]  13.00-14.00  sec  1.08 MBytes  1103 KBytes/sec                  
[  5]  14.00-15.00  sec  1.11 MBytes  1141 KBytes/sec                  
[  5]  15.00-15.30  sec   341 KBytes  1137 KBytes/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-15.30  sec  0.00 Bytes  0.00 KBytes/sec                  sender
[  5]   0.00-15.30  sec  16.3 MBytes  1088 KBytes/sec                  receiver

And for the local server:

root@local-server:~# iperf3 -c 11.22.33.44 -f K -d -t 15
send_parameters:
{
        "tcp":  true,
        "omit": 0,
        "time": 15,
        "parallel":     1,
        "len":  131072,
        "client_version":       "3.1.3"
}
Connecting to host 11.22.33.44, port 5201
SO_SNDBUF is 16384
[  4] local 10.80.2.2 port 36640 connected to 11.22.33.44 port 5201
tcpi_snd_cwnd 69 tcpi_snd_mss 1326 tcpi_rtt 111942
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   862 KBytes   862 KBytes/sec    0   89.3 KBytes       
tcpi_snd_cwnd 108 tcpi_snd_mss 1326 tcpi_rtt 123751
[  4]   1.00-2.00   sec  1.12 MBytes  1142 KBytes/sec    0    140 KBytes       
tcpi_snd_cwnd 142 tcpi_snd_mss 1326 tcpi_rtt 162038
[  4]   2.00-3.00   sec  1.05 MBytes  1079 KBytes/sec    0    184 KBytes       
tcpi_snd_cwnd 185 tcpi_snd_mss 1326 tcpi_rtt 181860
[  4]   3.00-4.00   sec  1.24 MBytes  1269 KBytes/sec    0    240 KBytes       
tcpi_snd_cwnd 236 tcpi_snd_mss 1326 tcpi_rtt 246222
[  4]   4.00-5.00   sec  1.49 MBytes  1523 KBytes/sec    0    306 KBytes       
tcpi_snd_cwnd 282 tcpi_snd_mss 1326 tcpi_rtt 297978
[  4]   5.00-6.00   sec  1.36 MBytes  1396 KBytes/sec    0    365 KBytes       
tcpi_snd_cwnd 263 tcpi_snd_mss 1326 tcpi_rtt 322709
[  4]   6.00-7.00   sec  1.05 MBytes  1078 KBytes/sec   11    341 KBytes       
tcpi_snd_cwnd 245 tcpi_snd_mss 1326 tcpi_rtt 305917
[  4]   7.00-8.00   sec  1.43 MBytes  1459 KBytes/sec    7    317 KBytes       
tcpi_snd_cwnd 278 tcpi_snd_mss 1326 tcpi_rtt 289591
[  4]   8.00-9.00   sec  1.18 MBytes  1206 KBytes/sec    0    360 KBytes       
tcpi_snd_cwnd 300 tcpi_snd_mss 1326 tcpi_rtt 308366
[  4]   9.00-10.00  sec  1.12 MBytes  1142 KBytes/sec    0    388 KBytes       
tcpi_snd_cwnd 270 tcpi_snd_mss 1326 tcpi_rtt 379042
[  4]  10.00-11.00  sec   952 KBytes   951 KBytes/sec    8    350 KBytes       
tcpi_snd_cwnd 228 tcpi_snd_mss 1326 tcpi_rtt 282126
[  4]  11.00-12.00  sec  1.18 MBytes  1206 KBytes/sec   10    295 KBytes       
tcpi_snd_cwnd 247 tcpi_snd_mss 1326 tcpi_rtt 282679
[  4]  12.00-13.00  sec  1.05 MBytes  1079 KBytes/sec    0    320 KBytes       
tcpi_snd_cwnd 258 tcpi_snd_mss 1326 tcpi_rtt 295655
[  4]  13.00-14.00  sec  1.05 MBytes  1079 KBytes/sec    0    334 KBytes       
tcpi_snd_cwnd 263 tcpi_snd_mss 1326 tcpi_rtt 270570
send_results
{
        "cpu_util_total":       1.083733,
        "cpu_util_user":        0.277910,
        "cpu_util_system":      0.805830,
        "sender_has_retransmits":       1,
        "streams":      [{
                        "id":   1,
                        "bytes":        18036252,
                        "retransmits":  36,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }]
}
get_results
{
        "cpu_util_total":       0.693273,
        "cpu_util_user":        0.226227,
        "cpu_util_system":      0.467072,
        "sender_has_retransmits":       -1,
        "streams":      [{
                        "id":   1,
                        "bytes":        17044404,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }]
}
[  4]  14.00-15.00  sec  1.12 MBytes  1142 KBytes/sec    0    341 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-15.00  sec  17.2 MBytes  1174 KBytes/sec   36             sender
[  4]   0.00-15.00  sec  16.3 MBytes  1110 KBytes/sec                  receiver

iperf Done.

Hope this helps.

legolas108 commented 4 years ago

And here are the latest tunnel creation scripts, first for the cloud server:

#!/bin/bash
gt=/usr/local/bin/glorytun

ip tuntap add tun0 mode tun
ip addr add 10.80.2.1/30 peer 10.80.2.2/30 dev tun0
ip link set tun0 up
$gt bind dev tun0 keyfile /etc/gt.key chacha &
sleep 2.0

$gt show

ip route add 192.168.1.0/24 via 10.80.2.2

And for the local server:

#!/bin/bash
nCon=3

gt=/usr/local/bin/glorytun

ip tuntap add tun0 mode tun
ip addr add 10.80.2.2/30 peer 10.80.2.1/30 dev tun0
ip link set tun0 up
$gt bind to 11.22.33.44 dev tun0 keyfile /etc/gt.key chacha &
sleep 2

$gt show

for n in `seq 0 1 $((nCon - 1))`; do
  n0=$(printf "%02d" ${n})
  n1=$((n + 10))

  ip addr add 192.168.42.${n1}/24 dev usb${n0}
  ip link set usb${n0} up

  ip route add 192.168.42.${n1} dev usb${n0} scope link table wl${n0}
  ip route add default via 192.168.42.129 dev usb${n0} table wl${n0}

  ip rule add pref ${n1} from 192.168.42.${n1} table wl${n0}

  $gt path up 192.168.42.${n1} rate tx 16mbit rx 24mbit
  sleep 2
done

ip route repl default via 10.80.2.1 dev tun0

The firewall is now managed by FireHOL, but that shouldn't matter for this issue any longer.

angt commented 4 years ago

Nice. I have both good and bad feedback about 4G aggregation... In general, using the cake qdisc with a small txqueuelen on tun0 helps.
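Concretely, that suggestion could look like the sketch below (my example, not angt's exact commands; the bandwidth value is a placeholder to tune for your lines, and it requires root plus a kernel with the sch_cake module):

```shell
# Shrink the tun device's transmit queue and shape it with cake.
ip link set dev tun0 txqueuelen 10
tc qdisc replace dev tun0 root cake bandwidth 40mbit
tc -s qdisc show dev tun0   # verify cake is installed and passing traffic
```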