Closed echolimazulu closed 4 years ago
Hi, I see a very high RTT on your path; glorytun doesn't support paths with an RTT above 500 ms by default. This may be the cause of the wrong MTU estimation. I'll run some tests on my side to check, but in the meantime you could try raising the beat of the path to 500 ms.
Thank you for the reply.
If I set the cipher to aegis256 and use a fixed rate of 35 Mbit/s for RX/TX:
glorytun show
client tun0: pid: 18553 bind: 0.0.0.0 port 5000 peer: 192.168.254.5 port 5000 mtu: 1357 cipher: aegis256

glorytun path
path UP status: OK
  bind: 192.168.254.6 port 5000 public: 192.168.254.6 port 5000 peer: 192.168.254.5 port 5000
  mtu: 1379 bytes rtt: 30.958 ms rttvar: 11.406 ms rate: fixed losslim: 100 beat: 100 ms
  tx: rate: 4375000 bytes/sec loss: 0 percent total: 45 packets
  rx: rate: 4375000 bytes/sec loss: 0 percent total: 15 packets
path UP status: OK
  bind: 192.168.254.10 port 5000 public: 192.168.254.10 port 5000 peer: 192.168.254.5 port 5000
  mtu: 1379 bytes rtt: 28.932 ms rttvar: 9.240 ms rate: fixed losslim: 100 beat: 100 ms
  tx: rate: 4375000 bytes/sec loss: 0 percent total: 45 packets
  rx: rate: 4375000 bytes/sec loss: 0 percent total: 15 packets
ifconfig
tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1357
        inet 10.0.1.2  netmask 255.255.255.255  destination 10.0.1.1
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens224: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.254.6  netmask 255.255.255.252  broadcast 192.168.254.7
        ether 00:0c:29:84:9d:52  txqueuelen 1000  (Ethernet)
        RX packets 245751633  bytes 231318617728 (215.4 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 165587164  bytes 35605805040 (33.1 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens256: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.254.10  netmask 255.255.255.252  broadcast 192.168.254.11
        ether 00:0c:29:84:9d:5c  txqueuelen 1000  (Ethernet)
        RX packets 95366442  bytes 90785044639 (84.5 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 64375486  bytes 14254032682 (13.2 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
This looks more legit :) You can check/confirm the detected MTU with the command:
ping -M do -s <SIZE> -I <SRC> <DEST>
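If probing sizes by hand gets tedious, the search can be automated with a binary search over the payload size; a minimal sketch, assuming the same source/destination addresses as in this setup (`probe` and `mtu_bisect` are hypothetical helper names, not glorytun tooling):

```shell
# Succeeds when an ICMP payload of $1 bytes fits without fragmentation
# (-M do sets the DF bit; addresses are the ones used in this thread).
probe() {
    ping -c 1 -W 1 -M do -s "$1" -I 192.168.254.6 192.168.254.5 >/dev/null 2>&1
}

# Binary-search the largest payload in [lo, hi] that probe() accepts.
mtu_bisect() {
    lo=$1; hi=$2
    while [ $((hi - lo)) -gt 1 ]; do
        mid=$(( (lo + hi) / 2 ))
        if probe "$mid"; then lo=$mid; else hi=$mid; fi
    done
    echo "$lo"
}

# mtu_bisect 1200 1472   # prints the largest working ICMP payload;
#                        # the path MTU is that value + 28 (IPv4 + ICMP headers)
```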
For some reason, sizes 1450 and 1472 do not pass through my network interfaces (MTU 1500; sources 254.6 and 254.10). Is this normal behavior? These interfaces are behind NAT (masquerade) for the LTE modem.
ping -M do -s 1357 -I 192.168.254.6 192.168.254.5
PING 192.168.254.5 (192.168.254.5) from 192.168.254.6 : 1357(1385) bytes of data.
1365 bytes from 192.168.254.5: icmp_seq=1 ttl=64 time=49.4 ms
1365 bytes from 192.168.254.5: icmp_seq=2 ttl=64 time=27.8 ms
1365 bytes from 192.168.254.5: icmp_seq=3 ttl=64 time=33.6 ms
^C
--- 192.168.254.5 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 5ms
rtt min/avg/max/mdev = 27.793/36.932/49.415/9.140 ms

ping -M do -s 1379 -I 192.168.254.6 192.168.254.5
PING 192.168.254.5 (192.168.254.5) from 192.168.254.6 : 1379(1407) bytes of data.
1387 bytes from 192.168.254.5: icmp_seq=1 ttl=64 time=30.4 ms
1387 bytes from 192.168.254.5: icmp_seq=2 ttl=64 time=38.4 ms
1387 bytes from 192.168.254.5: icmp_seq=3 ttl=64 time=45.1 ms
^C
--- 192.168.254.5 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 5ms
rtt min/avg/max/mdev = 30.428/37.972/45.058/5.981 ms

ping -M do -s 1450 -I 192.168.254.6 192.168.254.5
PING 192.168.254.5 (192.168.254.5) from 192.168.254.6 : 1450(1478) bytes of data.
^C
--- 192.168.254.5 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 40ms

ping -M do -s 1472 -I 192.168.254.6 192.168.254.5
PING 192.168.254.5 (192.168.254.5) from 192.168.254.6 : 1472(1500) bytes of data.
^C
--- 192.168.254.5 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 86ms
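The sizes in these outputs are self-consistent: the tun MTU (1357) plus glorytun's per-packet overhead gives the path MTU (1379), and adding the 28 bytes of IPv4 + ICMP headers gives the on-wire probe size. A quick sanity check of the arithmetic (the 22-byte overhead figure is derived from these outputs, not taken from glorytun's documentation):

```shell
tun_mtu=1357    # from `glorytun show` / ifconfig
path_mtu=1379   # from `glorytun path`
ip_icmp=28      # 20-byte IPv4 header + 8-byte ICMP header

echo "glorytun overhead: $((path_mtu - tun_mtu)) bytes"    # prints 22
echo "on-wire probe size: $((path_mtu + ip_icmp)) bytes"   # prints 1407, matching "1379(1407)"
```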
No idea about your network setup; MTU is hard to guess, and that's exactly why glorytun does it for you :)
Thank you, and thanks to glorytun for helping with that.
With these MTU settings, throughput through glorytun over the two uplinks is about 28 Mbit/s, while the nominal throughput of each interface is 28-70 Mbit/s and the latency ranges from 30-60 ms. What could be the reason for such low performance through glorytun? Would it make sense to tune additional settings, for example the queue size, and does MTU fragmentation significantly affect the final results? Over UDP, glorytun already gives me a high-performance tunnel.
glorytun bench
cipher: aegis256
 size    min         mean        max
   20    869 Mbps    887 Mbps    897 Mbps
  150   4681 Mbps   4703 Mbps   4712 Mbps
  280   6935 Mbps   6950 Mbps   6960 Mbps
  410   8070 Mbps   8138 Mbps   8151 Mbps
  540   9113 Mbps   9160 Mbps   9187 Mbps
  670   9594 Mbps   9829 Mbps   9867 Mbps
  800  10477 Mbps  10569 Mbps  10590 Mbps
  930  10693 Mbps  10738 Mbps  10756 Mbps
 1060  10871 Mbps  11075 Mbps  11134 Mbps
 1190  11376 Mbps  11443 Mbps  11476 Mbps
 1320  11547 Mbps  11672 Mbps  11715 Mbps
 1450  11901 Mbps  11944 Mbps  11971 Mbps
Can you run iperf3 -u -b 100M -B <client tun0 ip> -c <server tun0 ip>?
iperf3 -u -b 100M -B 10.0.1.2 -c 10.0.1.1
Connecting to host 10.0.1.1, port 5201
[ 5] local 10.0.1.2 port 34215 connected to 10.0.1.1 port 5201
[ ID] Interval         Transfer     Bitrate         Total Datagrams
[ 5]   0.00-1.00  sec  11.9 MBytes  99.9 Mbits/sec  9571
[ 5]   1.00-2.00  sec  11.9 MBytes   100 Mbits/sec  9579
[ 5]   2.00-3.00  sec  11.9 MBytes   100 Mbits/sec  9578
[ 5]   3.00-4.00  sec  11.9 MBytes   100 Mbits/sec  9579
[ 5]   4.00-5.00  sec  11.9 MBytes   100 Mbits/sec  9578
[ 5]   5.00-6.00  sec  11.9 MBytes   100 Mbits/sec  9579
[ 5]   6.00-7.00  sec  11.9 MBytes   100 Mbits/sec  9578
[ 5]   7.00-8.00  sec  11.9 MBytes   100 Mbits/sec  9579
[ 5]   8.00-9.00  sec  11.9 MBytes   100 Mbits/sec  9578
[ 5]   9.00-10.00 sec  11.9 MBytes   100 Mbits/sec  9579
[ ID] Interval         Transfer     Bitrate         Jitter    Lost/Total Datagrams
[ 5]   0.00-10.00 sec   119 MBytes   100 Mbits/sec  0.000 ms  0/95778 (0%)  sender
[ 5]   0.00-10.35 sec  72.6 MBytes  58.8 Mbits/sec  0.800 ms  37474/95774 (39%)  receiver
iperf Done.

iperf3 -u -b 100M -B 10.0.1.2 -c 10.0.1.1
Connecting to host 10.0.1.1, port 5201
[ 5] local 10.0.1.2 port 48657 connected to 10.0.1.1 port 5201
[ ID] Interval         Transfer     Bitrate         Total Datagrams
[ 5]   0.00-1.00  sec  11.9 MBytes  99.9 Mbits/sec  9571
[ 5]   1.00-2.00  sec  11.9 MBytes   100 Mbits/sec  9579
[ 5]   2.00-3.00  sec  11.9 MBytes   100 Mbits/sec  9578
[ 5]   3.00-4.00  sec  11.9 MBytes   100 Mbits/sec  9582
[ 5]   4.00-5.00  sec  11.9 MBytes   100 Mbits/sec  9575
[ 5]   5.00-6.00  sec  11.9 MBytes   100 Mbits/sec  9579
[ 5]   6.00-7.00  sec  11.9 MBytes   100 Mbits/sec  9578
[ 5]   7.00-8.00  sec  11.9 MBytes   100 Mbits/sec  9579
[ 5]   8.00-9.00  sec  11.9 MBytes   100 Mbits/sec  9578
[ 5]   9.00-10.00 sec  11.9 MBytes   100 Mbits/sec  9579
[ ID] Interval         Transfer     Bitrate         Jitter    Lost/Total Datagrams
[ 5]   0.00-10.00 sec   119 MBytes   100 Mbits/sec  0.000 ms  0/95778 (0%)  sender
[ 5]   0.00-10.35 sec  70.9 MBytes  57.5 Mbits/sec  0.761 ms  38794/95764 (41%)  receiver
iperf Done.

iperf3 -u -b 100M -B 10.0.1.2 -c 10.0.1.1
Connecting to host 10.0.1.1, port 5201
[ 5] local 10.0.1.2 port 37972 connected to 10.0.1.1 port 5201
[ ID] Interval         Transfer     Bitrate         Total Datagrams
[ 5]   0.00-1.00  sec  11.9 MBytes  99.9 Mbits/sec  9572
[ 5]   1.00-2.00  sec  11.9 MBytes   100 Mbits/sec  9578
[ 5]   2.00-3.00  sec  11.9 MBytes   100 Mbits/sec  9578
[ 5]   3.00-4.00  sec  11.9 MBytes   100 Mbits/sec  9579
[ 5]   4.00-5.00  sec  11.9 MBytes   100 Mbits/sec  9578
[ 5]   5.00-6.00  sec  11.9 MBytes   100 Mbits/sec  9579
[ 5]   6.00-7.00  sec  11.9 MBytes   100 Mbits/sec  9579
[ 5]   7.00-8.00  sec  11.9 MBytes   100 Mbits/sec  9578
[ 5]   8.00-9.00  sec  11.9 MBytes   100 Mbits/sec  9578
[ 5]   9.00-10.00 sec  11.9 MBytes   100 Mbits/sec  9579
[ ID] Interval         Transfer     Bitrate         Jitter    Lost/Total Datagrams
[ 5]   0.00-10.00 sec   119 MBytes   100 Mbits/sec  0.000 ms  0/95778 (0%)  sender
[ 5]   0.00-10.32 sec  70.1 MBytes  57.0 Mbits/sec  0.469 ms  39435/95778 (41%)  receiver
So I guess you have ~30 Mbit/s on each path, and glorytun gives you ~60 over UDP. It looks good so far.
You can now test TCP (just iperf3 -B <IP> <IP>) to see if you get the same result or less.
I set a fixed 30 Mbit/s for RX and TX on each path; the RTT on the paths is 40-45 ms. Is it normal for TCP throughput to swing continuously between 5 Mbit/s and 60 Mbit/s? When tested with MPTCP, iperf3 (TCP) reached 50-90 Mbit/s over the two paths.
iperf3 -u -b 60M -B 10.0.1.2 -c 10.0.1.1
Connecting to host 10.0.1.1, port 5201
[ 5] local 10.0.1.2 port 59769 connected to 10.0.1.1 port 5201
[ ID] Interval         Transfer     Bitrate         Total Datagrams
[ 5]   0.00-1.00  sec  7.15 MBytes  60.0 Mbits/sec  5743
[ 5]   1.00-2.00  sec  7.15 MBytes  60.0 Mbits/sec  5747
[ 5]   2.00-3.00  sec  7.15 MBytes  60.0 Mbits/sec  5747
[ 5]   3.00-4.00  sec  7.15 MBytes  60.0 Mbits/sec  5747
[ 5]   4.00-5.00  sec  7.15 MBytes  60.0 Mbits/sec  5748
[ 5]   5.00-6.00  sec  7.15 MBytes  60.0 Mbits/sec  5746
[ 5]   6.00-7.00  sec  7.15 MBytes  60.0 Mbits/sec  5748
[ 5]   7.00-8.00  sec  7.15 MBytes  60.0 Mbits/sec  5747
[ 5]   8.00-9.00  sec  7.15 MBytes  60.0 Mbits/sec  5747
[ 5]   9.00-10.00 sec  7.15 MBytes  60.0 Mbits/sec  5747
[ ID] Interval         Transfer     Bitrate         Jitter    Lost/Total Datagrams
[ 5]   0.00-10.00 sec  71.5 MBytes  60.0 Mbits/sec  0.000 ms  0/57467 (0%)  sender
[ 5]   0.00-10.32 sec  67.7 MBytes  55.0 Mbits/sec  0.521 ms  2982/57385 (5.2%)  receiver
iperf Done.

iperf3 -B 10.0.1.2 -c 10.0.1.1 -t 50
Connecting to host 10.0.1.1, port 5201
[ 5] local 10.0.1.2 port 42575 connected to 10.0.1.1 port 5201
[ ID] Interval         Transfer     Bitrate         Retr  Cwnd
[ 5]   0.00-1.00  sec   472 KBytes  3.86 Mbits/sec    6   22.9 KBytes
[ 5]   1.00-2.00  sec   777 KBytes  6.37 Mbits/sec    1   38.2 KBytes
[ 5]   2.00-3.00  sec  1.26 MBytes  10.6 Mbits/sec    0   56.1 KBytes
[ 5]   3.00-4.00  sec  1.58 MBytes  13.3 Mbits/sec    0   73.9 KBytes
[ 5]   4.00-5.00  sec  2.19 MBytes  18.4 Mbits/sec    0   90.5 KBytes
[ 5]   5.00-6.00  sec  1.78 MBytes  14.9 Mbits/sec   60   54.8 KBytes
[ 5]   6.00-7.00  sec  1.65 MBytes  13.8 Mbits/sec    0   79.0 KBytes
[ 5]   7.00-8.00  sec  2.01 MBytes  16.9 Mbits/sec    0   93.0 KBytes
[ 5]   8.00-9.00  sec  2.20 MBytes  18.4 Mbits/sec    5    105 KBytes
[ 5]   9.00-10.00 sec  2.13 MBytes  17.9 Mbits/sec   11   87.9 KBytes
[ 5]  10.00-11.00 sec  2.26 MBytes  18.9 Mbits/sec    0    102 KBytes
[ 5]  11.00-12.00 sec  2.50 MBytes  21.0 Mbits/sec    0    117 KBytes
[ 5]  12.00-13.00 sec  2.74 MBytes  23.0 Mbits/sec    0    133 KBytes
[ 5]  13.00-14.00 sec  3.35 MBytes  28.1 Mbits/sec    0    149 KBytes
[ 5]  14.00-15.00 sec  3.78 MBytes  31.7 Mbits/sec    0    164 KBytes
[ 5]  15.00-16.00 sec  3.72 MBytes  31.2 Mbits/sec    0    180 KBytes
[ 5]  16.00-17.00 sec  3.96 MBytes  33.3 Mbits/sec    0    195 KBytes
[ 5]  17.00-18.00 sec  4.27 MBytes  35.8 Mbits/sec    0    209 KBytes
[ 5]  18.00-19.00 sec  4.88 MBytes  40.7 Mbits/sec    0    224 KBytes
[ 5]  19.00-20.00 sec  5.00 MBytes  42.2 Mbits/sec   14    237 KBytes
[ 5]  20.00-21.00 sec  5.24 MBytes  44.0 Mbits/sec   11    251 KBytes
[ 5]  21.00-22.00 sec  4.94 MBytes  41.4 Mbits/sec    0    265 KBytes
[ 5]  22.00-23.00 sec  5.49 MBytes  46.0 Mbits/sec    0    279 KBytes
[ 5]  23.00-24.00 sec  5.98 MBytes  50.1 Mbits/sec    0    293 KBytes
[ 5]  24.00-25.00 sec  5.67 MBytes  47.6 Mbits/sec    0    306 KBytes
[ 5]  25.00-26.00 sec  5.79 MBytes  48.6 Mbits/sec    0    335 KBytes
[ 5]  26.00-27.00 sec  6.40 MBytes  53.7 Mbits/sec    0    394 KBytes
[ 5]  27.00-28.00 sec  6.34 MBytes  53.2 Mbits/sec   88    423 KBytes
[ 5]  28.00-29.00 sec  5.49 MBytes  46.0 Mbits/sec  152    428 KBytes
[ 5]  29.00-30.00 sec  6.10 MBytes  51.2 Mbits/sec  117    431 KBytes
[ 5]  30.00-31.00 sec  6.10 MBytes  51.2 Mbits/sec  205    436 KBytes
[ 5]  31.00-32.00 sec  6.04 MBytes  50.6 Mbits/sec   68    442 KBytes
[ 5]  32.00-33.00 sec  6.59 MBytes  55.3 Mbits/sec   58    450 KBytes
[ 5]  33.00-34.00 sec  4.94 MBytes  41.4 Mbits/sec  542    398 KBytes
[ 5]  34.00-35.00 sec  6.10 MBytes  51.2 Mbits/sec  311    451 KBytes
[ 5]  35.00-36.00 sec  6.16 MBytes  51.7 Mbits/sec  150    455 KBytes
[ 5]  36.00-37.00 sec  6.10 MBytes  51.2 Mbits/sec  105    459 KBytes
[ 5]  37.00-38.00 sec  6.28 MBytes  52.7 Mbits/sec    0    469 KBytes
[ 5]  38.00-39.00 sec  5.79 MBytes  48.6 Mbits/sec  384    363 KBytes
[ 5]  39.00-40.00 sec  6.53 MBytes  54.7 Mbits/sec    0    409 KBytes
[ 5]  40.00-41.00 sec  5.79 MBytes  48.6 Mbits/sec   84    417 KBytes
[ 5]  41.00-42.00 sec  6.28 MBytes  52.7 Mbits/sec   43    424 KBytes
[ 5]  42.00-43.00 sec  6.59 MBytes  55.3 Mbits/sec   32    433 KBytes
[ 5]  43.00-44.00 sec  6.53 MBytes  54.7 Mbits/sec    8    442 KBytes
[ 5]  44.00-45.00 sec  6.53 MBytes  54.7 Mbits/sec    0    452 KBytes
[ 5]  45.00-46.00 sec  6.46 MBytes  54.2 Mbits/sec   46    459 KBytes
[ 5]  46.00-47.00 sec  6.46 MBytes  54.2 Mbits/sec    6    466 KBytes
[ 5]  47.00-48.00 sec  6.59 MBytes  55.2 Mbits/sec    0    477 KBytes
[ 5]  48.00-49.00 sec  6.10 MBytes  51.2 Mbits/sec   25    422 KBytes
[ 5]  49.00-50.00 sec  6.71 MBytes  56.3 Mbits/sec    6    493 KBytes
[ ID] Interval         Transfer     Bitrate         Retr
[ 5]   0.00-50.00 sec   235 MBytes  39.4 Mbits/sec  2538  sender
[ 5]   0.00-50.08 sec   234 MBytes  39.2 Mbits/sec        receiver
No, it's not normal. While running iperf, keep an eye on glorytun path to see if the RTT goes up; if it does, lower tx/rx again. Also, to fix TCP, I generally recommend using cake on both sides:
tc qdisc replace dev tun0 root cake
ip link set tun0 txqlen 100
It's not magic either, but it usually helps a lot.
The start looks bad because TCP needs some time to learn how to use the glorytun link.
By the end, the aggregation works reasonably well.
You can also try iperf3 [...] --congestion bbr to see if it helps. If it's better, you can configure your system to use it by default.
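Making BBR the system default comes down to a couple of sysctls; a sketch for a modern Linux (BBR has been in mainline since 4.9; the `90-bbr.conf` filename is just an illustrative choice):

```shell
# Load the BBR module and check the kernel offers it.
sudo modprobe tcp_bbr
sysctl net.ipv4.tcp_available_congestion_control

# Switch the default for new connections, and persist it across reboots.
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr
echo 'net.ipv4.tcp_congestion_control = bbr' | sudo tee /etc/sysctl.d/90-bbr.conf
```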
Currently I am using "cubic" on both the client and the server.
sudo tc qdisc replace dev tun0 root cake
Error: Specified qdisc not found.
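"Specified qdisc not found" usually means either the kernel is missing the sch_cake module or the installed `tc` binary predates cake support (cake was mainlined in Linux 4.19; older kernels need the out-of-tree module). A sketch of how to check, assuming a Debian-like system:

```shell
# Does the running kernel ship the cake module?
modinfo sch_cake >/dev/null 2>&1 && echo "sch_cake module present"

# Load it and retry; if tc itself is too old, upgrade iproute2 instead.
sudo modprobe sch_cake
sudo tc qdisc replace dev tun0 root cake
tc qdisc show dev tun0
```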
Hmm, this is where OverTheBox shines; it will do all this stuff for you :)
--congestion bbr + ip link set tun0 txqlen 100 = fixed TCP
Result:

iperf3 -B 10.0.1.2 -c 10.0.1.1
Connecting to host 10.0.1.1, port 5201
[ 5] local 10.0.1.2 port 39089 connected to 10.0.1.1 port 5201
[ ID] Interval         Transfer     Bitrate         Retr  Cwnd
[ 5]   0.00-1.00  sec  7.49 MBytes  62.8 Mbits/sec  571    398 KBytes
[ 5]   1.00-2.00  sec  5.00 MBytes  41.9 Mbits/sec  996    451 KBytes
[ 5]   2.00-3.00  sec  6.25 MBytes  52.4 Mbits/sec   26    390 KBytes
[ 5]   3.00-4.00  sec  7.50 MBytes  62.9 Mbits/sec    6    428 KBytes
[ 5]   4.00-5.00  sec  6.25 MBytes  52.4 Mbits/sec    0    413 KBytes
[ 5]   5.00-6.00  sec  6.25 MBytes  52.4 Mbits/sec   27    418 KBytes
[ 5]   6.00-7.00  sec  6.25 MBytes  52.4 Mbits/sec    0    405 KBytes
[ 5]   7.00-8.00  sec  6.25 MBytes  52.4 Mbits/sec    0    380 KBytes
[ 5]   8.00-9.00  sec  6.25 MBytes  52.4 Mbits/sec    0    451 KBytes
[ 5]   9.00-10.00 sec  6.25 MBytes  52.4 Mbits/sec    3    377 KBytes
[ ID] Interval         Transfer     Bitrate         Retr
[ 5]   0.00-10.00 sec  63.7 MBytes  53.5 Mbits/sec  1629  sender
[ 5]   0.00-10.06 sec  60.8 MBytes  50.7 Mbits/sec        receiver
iperf Done.
Nice, I'm closing the issue as there was no MTU issue and TCP link aggregation works :)
Hello Adrien,
I have a problem with the MTU values that glorytun sets automatically when the path comes up. My network interfaces use an MTU of 1500, but when the "path up" command runs, glorytun changes the MTU of the client tun0 interface and of the server interface to 1357. The glorytun show command reports an MTU of 1357, while glorytun path reports 1379. If I change the MTU on the interface manually, nothing is updated in the metrics displayed by glorytun path; the value only changes in ifconfig. I would expect values of 1450 and 1472, but in my basic setup, without any significant changes to the OS, the values do not match what I expect when glorytun runs in normal mode. Please tell me what the problem could be: can I fix it on my own, or is it a bug?
Specification:
OS: Debian Buster
Glorytun: binary release 0.3.4
Network interface MTUs: 1500
Rate: auto or any fixed rate
Cipher: any
glorytun show
server tun0: pid: 3045 bind: 192.168.254.5 port 5000 mtu: 1357 cipher: chacha20poly1305

glorytun path
path UP status: OK
  bind: 192.168.254.5 port 5000 public: 192.168.254.5 port 5000 peer: 192.168.254.6 port 5000
  mtu: 1379 bytes rtt: 1107.740 ms rttvar: 2031.065 ms rate: auto losslim: 100 beat: 100 ms
  tx: rate: 0 bytes/sec loss: 0 percent total: 48 packets
  rx: rate: 0 bytes/sec loss: 0 percent total: 96 packets