wangyu- / UDPspeeder

A tunnel which improves your network quality on a high-latency, lossy link by using Forward Error Correction, applicable to all traffic (TCP/UDP/ICMP)
MIT License

Recommended parameters for low loss high jitter links #261

Open codemarauder opened 4 years ago

codemarauder commented 4 years ago

Hello,

This is not an issue, but a question / request for clarification. I couldn't find any mailing list or forum for UDPSpeeder, so I am posting here.

I am testing OpenVPN over UDPspeeder in a live setup as below:

OpenVPN has the following parameters on both tunnels:

sndbuf 2000000
rcvbuf 2000000
txqueuelen 4000

UDPSpeeder arguments for Tunnel 1 are as below:

/usr/bin/udpspeeder -s --fec 1:3,2:4,8:6,20:10 --fix-latency --disable-obscure -l 1xx.x7x.x8x.xx:4095 -r 127.0.0.1:443 --mode 0 --report 10 --interval 0 --mtu 1250 --sock-buf 1024 --decode-buf 2000 --queue-len 200 --log-level 4 --fifo /tmp/udpspeeder-hotunnel.fifo

/usr/bin/udpspeeder -c --fec 1:3,2:4,8:6,20:10 --fix-latency --disable-obscure -l 0.0.0.0:3333 -r 1xx.x2x.8xx.1xx:4095 --mode 0 --report 60 --interval 0 --mtu 1250 --sock-buf 1024 --decode-buf 2000 --queue-len 200 --log-level 4 --fifo /tmp/udpspeeder-hotunnel.fifo

Ping without UDPSpeeder:

# ping 10.48.0.1
PING 10.48.0.1 (10.48.0.1): 56 data bytes
64 bytes from 10.48.0.1: seq=0 ttl=64 time=13.924 ms
64 bytes from 10.48.0.1: seq=1 ttl=64 time=12.398 ms
64 bytes from 10.48.0.1: seq=2 ttl=64 time=11.959 ms
64 bytes from 10.48.0.1: seq=3 ttl=64 time=11.811 ms
64 bytes from 10.48.0.1: seq=4 ttl=64 time=11.698 ms
64 bytes from 10.48.0.1: seq=5 ttl=64 time=11.604 ms
64 bytes from 10.48.0.1: seq=6 ttl=64 time=10.453 ms
64 bytes from 10.48.0.1: seq=7 ttl=64 time=10.175 ms
64 bytes from 10.48.0.1: seq=8 ttl=64 time=11.290 ms
^C

Ping with UDPSpeeder:

# ping 10.48.0.1
PING 10.48.0.1 (10.48.0.1): 56 data bytes
64 bytes from 10.48.0.1: seq=2 ttl=64 time=28.584 ms
64 bytes from 10.48.0.1: seq=3 ttl=64 time=29.673 ms
64 bytes from 10.48.0.1: seq=4 ttl=64 time=33.644 ms
64 bytes from 10.48.0.1: seq=5 ttl=64 time=29.607 ms
64 bytes from 10.48.0.1: seq=6 ttl=64 time=27.816 ms
64 bytes from 10.48.0.1: seq=7 ttl=64 time=27.790 ms
64 bytes from 10.48.0.1: seq=8 ttl=64 time=24.014 ms
64 bytes from 10.48.0.1: seq=9 ttl=64 time=29.140 ms
64 bytes from 10.48.0.1: seq=10 ttl=64 time=29.866 ms
64 bytes from 10.48.0.1: seq=11 ttl=64 time=29.780 ms
64 bytes from 10.48.0.1: seq=12 ttl=64 time=29.791 ms
64 bytes from 10.48.0.1: seq=13 ttl=64 time=23.917 ms
64 bytes from 10.48.0.1: seq=14 ttl=64 time=26.137 ms
64 bytes from 10.48.0.1: seq=15 ttl=64 time=23.878 ms
64 bytes from 10.48.0.1: seq=16 ttl=64 time=35.369 ms
64 bytes from 10.48.0.1: seq=17 ttl=64 time=29.405 ms
64 bytes from 10.48.0.1: seq=18 ttl=64 time=28.959 ms
64 bytes from 10.48.0.1: seq=19 ttl=64 time=28.797 ms
64 bytes from 10.48.0.1: seq=20 ttl=64 time=29.869 ms
64 bytes from 10.48.0.1: seq=21 ttl=64 time=29.765 ms
64 bytes from 10.48.0.1: seq=22 ttl=64 time=28.376 ms
64 bytes from 10.48.0.1: seq=23 ttl=64 time=30.878 ms
64 bytes from 10.48.0.1: seq=24 ttl=64 time=31.012 ms
64 bytes from 10.48.0.1: seq=25 ttl=64 time=32.085 ms
64 bytes from 10.48.0.1: seq=26 ttl=64 time=31.148 ms
64 bytes from 10.48.0.1: seq=27 ttl=64 time=31.866 ms
64 bytes from 10.48.0.1: seq=28 ttl=64 time=29.322 ms
64 bytes from 10.48.0.1: seq=29 ttl=64 time=29.176 ms
64 bytes from 10.48.0.1: seq=30 ttl=64 time=29.093 ms
64 bytes from 10.48.0.1: seq=31 ttl=64 time=28.486 ms
64 bytes from 10.48.0.1: seq=32 ttl=64 time=55.349 ms
64 bytes from 10.48.0.1: seq=33 ttl=64 time=25.001 ms
64 bytes from 10.48.0.1: seq=34 ttl=64 time=23.336 ms
64 bytes from 10.48.0.1: seq=35 ttl=64 time=23.563 ms
^C

The jitter value over UDPSpeeder is 329.92 ms while max latency rises to 7643 ms without any packet loss.

Below are the arguments for UDPSpeeder for Tunnel 2:

/usr/bin/udpspeeder -s --fec 1:3,2:4,8:6,20:10 --fix-latency --disable-obscure -l 2xx.2xx.2xx.2x4:4096 -r 127.0.0.1:444 --mode 0 --report 10 --interval 0 --mtu 1250 --sock-buf 1024 --decode-buf 2000 --queue-len 200 --log-level 4 --fifo /tmp/udpspeeder-hotunnel2.fifo

/usr/bin/udpspeeder -c --fec 1:3,2:4,8:6,20:10 --fix-latency --disable-obscure -l 0.0.0.0:3334 -r 2xx.x6x.x2x.2xx:4096 --mode 0 --report 60 --interval 0 --mtu 1250 --sock-buf 1024 --decode-buf 2000 --queue-len 200 --log-level 4 --fifo /tmp/udpspeeder-hotunnel2.fifo

Ping without UDPSpeeder on Tunnel 2:

# ping 10.49.0.1
PING 10.49.0.1 (10.49.0.1): 56 data bytes
64 bytes from 10.49.0.1: seq=0 ttl=64 time=36.586 ms
64 bytes from 10.49.0.1: seq=1 ttl=64 time=38.395 ms
64 bytes from 10.49.0.1: seq=2 ttl=64 time=35.373 ms
64 bytes from 10.49.0.1: seq=3 ttl=64 time=83.523 ms
64 bytes from 10.49.0.1: seq=4 ttl=64 time=93.564 ms
64 bytes from 10.49.0.1: seq=5 ttl=64 time=72.503 ms
64 bytes from 10.49.0.1: seq=6 ttl=64 time=41.593 ms
64 bytes from 10.49.0.1: seq=7 ttl=64 time=63.527 ms
64 bytes from 10.49.0.1: seq=8 ttl=64 time=31.090 ms
64 bytes from 10.49.0.1: seq=9 ttl=64 time=75.551 ms
64 bytes from 10.49.0.1: seq=10 ttl=64 time=28.639 ms
64 bytes from 10.49.0.1: seq=11 ttl=64 time=38.666 ms
64 bytes from 10.49.0.1: seq=12 ttl=64 time=37.238 ms
64 bytes from 10.49.0.1: seq=13 ttl=64 time=60.410 ms
64 bytes from 10.49.0.1: seq=14 ttl=64 time=64.432 ms
64 bytes from 10.49.0.1: seq=15 ttl=64 time=12.102 ms
64 bytes from 10.49.0.1: seq=16 ttl=64 time=38.800 ms
64 bytes from 10.49.0.1: seq=17 ttl=64 time=28.919 ms
64 bytes from 10.49.0.1: seq=18 ttl=64 time=95.656 ms
64 bytes from 10.49.0.1: seq=19 ttl=64 time=56.683 ms
64 bytes from 10.49.0.1: seq=20 ttl=64 time=28.722 ms
64 bytes from 10.49.0.1: seq=21 ttl=64 time=30.872 ms
64 bytes from 10.49.0.1: seq=22 ttl=64 time=47.057 ms
64 bytes from 10.49.0.1: seq=23 ttl=64 time=38.317 ms
64 bytes from 10.49.0.1: seq=24 ttl=64 time=32.550 ms
64 bytes from 10.49.0.1: seq=25 ttl=64 time=234.696 ms
64 bytes from 10.49.0.1: seq=26 ttl=64 time=205.693 ms
64 bytes from 10.49.0.1: seq=27 ttl=64 time=92.903 ms
64 bytes from 10.49.0.1: seq=28 ttl=64 time=18.923 ms
64 bytes from 10.49.0.1: seq=29 ttl=64 time=31.810 ms
64 bytes from 10.49.0.1: seq=30 ttl=64 time=37.028 ms
64 bytes from 10.49.0.1: seq=31 ttl=64 time=40.238 ms
64 bytes from 10.49.0.1: seq=32 ttl=64 time=63.864 ms
64 bytes from 10.49.0.1: seq=33 ttl=64 time=28.745 ms
64 bytes from 10.49.0.1: seq=34 ttl=64 time=58.406 ms
64 bytes from 10.49.0.1: seq=35 ttl=64 time=32.674 ms
64 bytes from 10.49.0.1: seq=36 ttl=64 time=18.620 ms
64 bytes from 10.49.0.1: seq=37 ttl=64 time=47.204 ms
64 bytes from 10.49.0.1: seq=38 ttl=64 time=62.092 ms
64 bytes from 10.49.0.1: seq=39 ttl=64 time=36.553 ms
64 bytes from 10.49.0.1: seq=40 ttl=64 time=40.235 ms

Ping with UDPSpeeder on Tunnel 2:

# ping 10.49.0.1
PING 10.49.0.1 (10.49.0.1): 56 data bytes
64 bytes from 10.49.0.1: seq=0 ttl=64 time=34.837 ms
64 bytes from 10.49.0.1: seq=1 ttl=64 time=78.843 ms
64 bytes from 10.49.0.1: seq=2 ttl=64 time=373.022 ms
64 bytes from 10.49.0.1: seq=3 ttl=64 time=111.565 ms
64 bytes from 10.49.0.1: seq=4 ttl=64 time=77.686 ms
64 bytes from 10.49.0.1: seq=5 ttl=64 time=276.565 ms
64 bytes from 10.49.0.1: seq=6 ttl=64 time=100.155 ms
64 bytes from 10.49.0.1: seq=7 ttl=64 time=80.903 ms
64 bytes from 10.49.0.1: seq=8 ttl=64 time=44.760 ms
64 bytes from 10.49.0.1: seq=9 ttl=64 time=68.887 ms
64 bytes from 10.49.0.1: seq=10 ttl=64 time=125.835 ms
64 bytes from 10.49.0.1: seq=11 ttl=64 time=121.595 ms
64 bytes from 10.49.0.1: seq=12 ttl=64 time=37.860 ms
64 bytes from 10.49.0.1: seq=13 ttl=64 time=136.609 ms
64 bytes from 10.49.0.1: seq=14 ttl=64 time=73.801 ms
64 bytes from 10.49.0.1: seq=15 ttl=64 time=61.137 ms
64 bytes from 10.49.0.1: seq=16 ttl=64 time=46.088 ms
64 bytes from 10.49.0.1: seq=17 ttl=64 time=83.405 ms
64 bytes from 10.49.0.1: seq=18 ttl=64 time=159.873 ms
64 bytes from 10.49.0.1: seq=19 ttl=64 time=92.944 ms
64 bytes from 10.49.0.1: seq=20 ttl=64 time=86.759 ms
64 bytes from 10.49.0.1: seq=21 ttl=64 time=87.281 ms
64 bytes from 10.49.0.1: seq=22 ttl=64 time=41.240 ms
64 bytes from 10.49.0.1: seq=23 ttl=64 time=59.838 ms
64 bytes from 10.49.0.1: seq=24 ttl=64 time=65.037 ms
64 bytes from 10.49.0.1: seq=25 ttl=64 time=69.177 ms
64 bytes from 10.49.0.1: seq=26 ttl=64 time=68.880 ms
64 bytes from 10.49.0.1: seq=27 ttl=64 time=166.241 ms
64 bytes from 10.49.0.1: seq=28 ttl=64 time=196.163 ms
64 bytes from 10.49.0.1: seq=29 ttl=64 time=96.150 ms
64 bytes from 10.49.0.1: seq=30 ttl=64 time=131.482 ms
64 bytes from 10.49.0.1: seq=31 ttl=64 time=38.139 ms
64 bytes from 10.49.0.1: seq=32 ttl=64 time=62.813 ms
64 bytes from 10.49.0.1: seq=33 ttl=64 time=54.571 ms
64 bytes from 10.49.0.1: seq=34 ttl=64 time=103.473 ms
64 bytes from 10.49.0.1: seq=35 ttl=64 time=75.707 ms
64 bytes from 10.49.0.1: seq=36 ttl=64 time=101.432 ms
^C

SCP over UDPSpeeder on Tunnel 2:

# scp root@10.49.0.1:/root/log.txt /tmp/
root@10.49.0.1's password: 
log.txt                                                                                                         100% 8381KB 155.2KB/s   00:54 

SCP without UDPSpeeder on Tunnel 2:

# scp root@10.49.0.1:/root/log.txt /tmp/
root@10.49.0.1's password: 
log.txt                                                                                                         100% 8381KB 254.0KB/s   00:33

UDPSpeeder report from server side:

Thu Sep 17 12:15:30 2020 daemon.info udpspeeder[28531]: [2020-09-17 12:15:30][INFO][report][xx.xx.xx.xx:45276]client-->server:(original:4620 pkt;502992 byte) (fec:27567 pkt;1798805 byte)  server-->client:(original:4626 pkt;503640 byte) (fec:27706 pkt;1807608 byte)
Thu Sep 17 12:15:41 2020 daemon.info udpspeeder[28531]: [2020-09-17 12:15:41][INFO][report][xx.xx.xx.xx:45276]client-->server:(original:4663 pkt;507652 byte) (fec:27825 pkt;1815623 byte)  server-->client:(original:4672 pkt;508544 byte) (fec:27976 pkt;1825062 byte)
Thu Sep 17 12:15:51 2020 daemon.info udpspeeder[28531]: [2020-09-17 12:15:51][INFO][report][xx.xx.xx.xx:45276]client-->server:(original:4783 pkt;521028 byte) (fec:28547 pkt;1861678 byte)  server-->client:(original:4784 pkt;521056 byte) (fec:28658 pkt;1869022 byte)
Thu Sep 17 12:16:01 2020 daemon.info udpspeeder[28531]: [2020-09-17 12:16:01][INFO][report][xx.xx.xx.xx:45276]client-->server:(original:4911 pkt;534884 byte) (fec:29309 pkt;1911304 byte)  server-->client:(original:4911 pkt;534804 byte) (fec:29426 pkt;1919038 byte)
Thu Sep 17 12:16:12 2020 daemon.info udpspeeder[28531]: [2020-09-17 12:16:12][INFO][report][xx.xx.xx.xx:45276]client-->server:(original:5039 pkt;548740 byte) (fec:30063 pkt;1960272 byte)  server-->client:(original:5040 pkt;548784 byte) (fec:30198 pkt;1969224 byte)
Thu Sep 17 12:16:22 2020 daemon.info udpspeeder[28531]: [2020-09-17 12:16:22][INFO][report][xx.xx.xx.xx:45276]client-->server:(original:5167 pkt;562628 byte) (fec:30820 pkt;2009669 byte)  server-->client:(original:5169 pkt;562780 byte) (fec:30972 pkt;2019726 byte)
Thu Sep 17 12:16:32 2020 daemon.info udpspeeder[28531]: [2020-09-17 12:16:32][INFO][report][xx.xx.xx.xx:45276]client-->server:(original:5302 pkt;577528 byte) (fec:31617 pkt;2062004 byte)  server-->client:(original:5306 pkt;577896 byte) (fec:31792 pkt;2073840 byte)
Thu Sep 17 12:16:43 2020 daemon.info udpspeeder[28531]: [2020-09-17 12:16:43][INFO][report][xx.xx.xx.xx:45276]client-->server:(original:5430 pkt;591384 byte) (fec:32384 pkt;2111955 byte)  server-->client:(original:5434 pkt;591752 byte) (fec:32560 pkt;2123856 byte)
Thu Sep 17 12:16:53 2020 daemon.info udpspeeder[28531]: [2020-09-17 12:16:53][INFO][report][xx.xx.xx.xx:45276]client-->server:(original:5559 pkt;605348 byte) (fec:33145 pkt;2161516 byte)  server-->client:(original:5562 pkt;605608 byte) (fec:33328 pkt;2173872 byte)
Thu Sep 17 12:17:04 2020 daemon.info udpspeeder[28531]: [2020-09-17 12:17:04][INFO][report][xx.xx.xx.xx:45276]client-->server:(original:5688 pkt;619312 byte) (fec:33915 pkt;2211662 byte)  server-->client:(original:5690 pkt;619464 byte) (fec:34094 pkt;2223620 byte)

I can see that packets are not lost and UDPSpeeder tries hard, even though latency increased to 7643 ms.

The questions:

  1. Since packet loss will not always be present, how can I turn FEC off and on dynamically? Otherwise, I can modify the OpenVPN config on the fly and reload the tunnel. My scenario is hub-and-spoke, with one hub and 70+ spokes. I cannot turn FEC off on the server side, because at any given time one of the spokes will be experiencing packet loss. Please advise.
  2. Is latency increasing from 10-15 ms without UDPSpeeder to 27-30 ms with UDPSpeeder normal with 1:3,2:4,8:6,20:10?
  3. Is 1:3,2:4,8:6,20:10 with the default timeout value overkill when we only want to handle 10% packet loss?
  4. How do I choose ideal values depending on the available bandwidth and the upper limit of packet loss we want to mitigate? E.g., if a link has more than 20% packet loss, I would tear down the tunnel over that link and prefer traffic over the other tunnel, to keep CPU and bandwidth utilisation in check.
  5. Why do I get a lower transfer rate with SCP over UDPSpeeder compared to a direct OpenVPN tunnel?

Thanks in advance.

PS: I have created an init script and a UCI config file for OpenWrt. I will submit a PR soon.

wangyu- commented 4 years ago
  1. Since packet loss will not always be present, how can I turn FEC off and on dynamically? Otherwise, I can modify the OpenVPN config on the fly and reload the tunnel. My scenario is hub-and-spoke, with one hub and 70+ spokes. I cannot turn FEC off on the server side, because at any given time one of the spokes will be experiencing packet loss. Please advise.

There is an option, --pipe, that allows you to change parameters without restarting the program. At the moment it doesn't let you toggle the disable-fec option (I can consider adding this in the future), but it does allow you to change the parameters to --mode 1 -f1:0, which is roughly equivalent to disabling FEC completely.
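
As a hedged illustration (my addition; the fifo file is already present in the commands above and the README documents echoing option strings into it, but treat the exact runtime syntax as an assumption), switching FEC effectively off at runtime could look like:

echo "--mode 1 -f1:0" > /tmp/udpspeeder-hotunnel.fifo

Writing the original --fec parameters back into the fifo should restore the previous behaviour.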

  2. Is latency increasing from 10-15 ms without UDPSpeeder to 27-30 ms with UDPSpeeder normal with 1:3,2:4,8:6,20:10?

This is completely as expected. There is an option named --timeout, with a default value of 10 (ms). It means UDPspeeder will try to collect as many packets as possible within those 10 ms before doing the FEC. Before FEC is done, all packets are held in UDPspeeder's buffer, so a maximum latency of 10 ms is introduced in each direction.

ping measures round-trip latency, so the 10 ms maximum is doubled to a 20 ms maximum (it might be less; 20 ms is just the upper bound).
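
A hedged aside (my addition, not wangyu's advice): if that fixed collection delay matters more than FEC grouping efficiency, --timeout could be lowered on both the -s and -c commands, for example --timeout 4; fewer packets would then be grouped per FEC block, so relatively more redundancy would be sent for the same traffic.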

  3. Is 1:3,2:4,8:6,20:10 with the default timeout value overkill when we only want to handle 10% packet loss?

I think it depends on the maximum after-FEC packet loss you can accept. If you are sure the maximum packet loss is 10% and it never rises above that, then it might be a bit of overkill.

The after-FEC packet loss can be calculated; please check this link: https://github.com/wangyu-/UDPspeeder/wiki/FEC%E4%B8%A2%E5%8C%85%E7%8E%87%E8%AE%A1%E7%AE%97
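
As a hedged worked example of that calculation (my addition, assuming independent packet loss and mode 0): with an x:y group, all x original packets are recovered as long as at least x of the x+y FEC packets arrive, so the post-FEC loss rate is roughly the binomial tail P(more than y of x+y packets lost) = Σ_{k=y+1}^{x+y} C(x+y,k) p^k (1−p)^{x+y−k}. For 20:10 at p = 10% this works out to on the order of 0.01%, which is why the configuration may be more redundancy than a steady 10% loss actually needs.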

  4. How do I choose ideal values depending on the available bandwidth and the upper limit of packet loss we want to mitigate? E.g., if a link has more than 20% packet loss, I would tear down the tunnel over that link and prefer traffic over the other tunnel, to keep CPU and bandwidth utilisation in check.

This is a similar question to the one above, and I don't have a general answer. It depends on how much packet loss you can accept, how much bandwidth you have, and how much computing power you have. Maybe you need to find out what the bottleneck is (bandwidth or CPU?) and then balance between that bottleneck and packet loss.

  5. Why do I get a lower transfer rate with SCP over UDPSpeeder compared to a direct OpenVPN tunnel?

According to your measurements on the second link:

Tunnel 2:
The other link belongs to different ISPs at both ends, hence there is a jitter of 42 ms, ~7% loss, max latency of 445 ms, avg latency of 59.837 ms and min latency of 9.543 ms for a long-duration ping (over 450 pings).

The jitter value over UDPSpeeder is 329.92 ms while max latency rises to 7643 ms without any packet loss.

The speed of TCP depends on both packet loss and RTT (latency). In this case you reduced packet loss, but increased the RTT a lot. Although I can't really understand why your latency increased so much, the TCP speed itself is somewhat explainable.
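
A hedged illustration of that point (my addition): with negligible loss, TCP throughput is limited to roughly window / RTT, so with a fixed send/receive window a two- or three-fold increase in effective RTT cuts the achievable rate proportionally; on the lossy direct path, the rough bound is instead MSS / (RTT × sqrt(p)) (the Mathis approximation).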


Below are some comments that don't directly answer your questions:

1

--fix-latency is currently marked as a developer option; it doesn't fully implement what it claims to do. It might significantly increase latency in some cases, so I suggest you not use it.

2

On your first link the latency is fully explainable, but on the second it is not. I will try to explain a bit:

The average latency is explainable. In mode 0, all packets in a FEC group are equal. For example, if you have 20 packets and you are using -f20:10, then on the encoding side UDPspeeder generates 30 equal packets from the original packets and sends them over the internet (the original 20 packets are NOT sent). On the decoding side, UDPspeeder has to collect at least 20 of those 30 FEC packets to recover the 20 originals. Since the redundancy of 20:10 is small, your after-FEC latency is determined by the slowest packet among the first 20 FEC packets you collect; your latency is effectively dragged back by the slowest one (of the first 20 out of 30).

I suggest you remove --fix-latency and test with a very high redundancy such as -f10:30 (just for a test; I am not suggesting you use this in a production environment). In that case I think your average latency may be significantly reduced. This might help you understand the relationship between latency and redundancy on a high-jitter link, and then you can figure out more by yourself.
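
A hedged sketch of such a test (my addition), derived from the client command earlier in this thread (the server side would be changed the same way); the only intended changes are dropping --fix-latency and replacing the --fec list with -f10:30:

/usr/bin/udpspeeder -c -f10:30 --disable-obscure -l 0.0.0.0:3333 -r 1xx.x2x.8xx.1xx:4095 --mode 0 --report 60 --interval 0 --mtu 1250 --sock-buf 1024 --decode-buf 2000 --queue-len 200 --log-level 4 --fifo /tmp/udpspeeder-hotunnel.fifo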

The increase in max latency is very hard to explain, unless:

  1. your CPU is full
  2. your link has a higher latency when you are sending more packets (this is common in WiFi environments).

3

--mode 1 can potentially reduce latency. In the above example, the after-FEC latency is no longer (fully) determined by the slowest packet of the first 20 FEC packets you collect.

The disadvantage of mode 1 compared to mode 0 is that mode 1 doesn't support user-space packet fragmentation. With --mode 1 you have to guarantee that the packets you feed into UDPspeeder are smaller than the MTU. But since you are using OpenVPN, OpenVPN itself can do user-space packet fragmentation, so this is not a problem in your case.
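
For reference (my addition, not part of the reply above), OpenVPN's user-space fragmentation is enabled with the fragment directive, usually paired with mssfix, on both ends, for example:

fragment 1200
mssfix 1200

This keeps the UDP datagrams OpenVPN hands to UDPspeeder below the FEC MTU; whether fragment is workable in a given hub-and-spoke deployment is a separate question.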

Currently there is no good English document that explains the differences between the --mode values in depth; I will add one later. (There are several in Chinese, though.)

wangyu- commented 4 years ago

4

Another option worth mentioning is --interval x. The default value is 0 (ms), which means it is disabled. With this option enabled, say at 15 ms, (still using the above example) the 30 packets in the same FEC group will be scattered across a 15 ms time window.

Although at first glance this option seems to increase your latency, it can actually protect you from burst packet loss and burst latency/jitter. It may eventually reduce latency (or it may not, depending on the cause of the jitter).

codemarauder commented 4 years ago

Thanks for your response and for your patience with the detailed explanation.

I have made the following changes to the UDPSpeeder parameters:

/usr/bin/udpspeeder -s --fec 1:3,2:4,8:6,20:10 --disable-obscure -l 1xx.7x.x4.1x:4095 -r 127.0.0.1:443 --mode 1 --report 10 --interval 15 --sock-buf 1024 --decode-buf 2000 --log-level 4 --fifo /tmp/udpspeeder-hotunnel.fifo --disable-color

/usr/bin/udpspeeder -c --fec 1:3,2:4,8:6,20:10 --disable-obscure -l 127.0.0.1:3333 -r 1x2.7x.x4.1x:4095 --mode 1 --report 60 --interval 15 --sock-buf 1024 --decode-buf 2000 --log-level 4 --fifo /tmp/udpspeeder-hotunnel.fifo --disable-color

Changing to --mode 1 immediately brought the latency down to nearly the same level as without UDPSpeeder.

# ping 10.48.0.1
PING 10.48.0.1 (10.48.0.1): 56 data bytes
64 bytes from 10.48.0.1: seq=0 ttl=64 time=12.587 ms
64 bytes from 10.48.0.1: seq=1 ttl=64 time=12.320 ms
64 bytes from 10.48.0.1: seq=2 ttl=64 time=13.468 ms
64 bytes from 10.48.0.1: seq=3 ttl=64 time=15.868 ms
64 bytes from 10.48.0.1: seq=4 ttl=64 time=11.891 ms
64 bytes from 10.48.0.1: seq=5 ttl=64 time=11.808 ms
64 bytes from 10.48.0.1: seq=6 ttl=64 time=11.637 ms
64 bytes from 10.48.0.1: seq=7 ttl=64 time=21.719 ms
64 bytes from 10.48.0.1: seq=8 ttl=64 time=11.312 ms
64 bytes from 10.48.0.1: seq=9 ttl=64 time=19.945 ms
64 bytes from 10.48.0.1: seq=10 ttl=64 time=12.305 ms
64 bytes from 10.48.0.1: seq=11 ttl=64 time=38.404 ms
64 bytes from 10.48.0.1: seq=12 ttl=64 time=12.003 ms
64 bytes from 10.48.0.1: seq=13 ttl=64 time=26.915 ms
64 bytes from 10.48.0.1: seq=14 ttl=64 time=11.708 ms
64 bytes from 10.48.0.1: seq=15 ttl=64 time=11.600 ms
64 bytes from 10.48.0.1: seq=16 ttl=64 time=11.450 ms
64 bytes from 10.48.0.1: seq=17 ttl=64 time=16.293 ms
64 bytes from 10.48.0.1: seq=18 ttl=64 time=12.489 ms
64 bytes from 10.48.0.1: seq=19 ttl=64 time=12.207 ms
64 bytes from 10.48.0.1: seq=20 ttl=64 time=13.444 ms
64 bytes from 10.48.0.1: seq=21 ttl=64 time=19.666 ms
64 bytes from 10.48.0.1: seq=22 ttl=64 time=16.973 ms
64 bytes from 10.48.0.1: seq=23 ttl=64 time=19.153 ms
^C

But I saw the following messages in the logs:

Sat Sep 19 11:39:32 2020 daemon.info udpspeeder[28917]: [2020-09-19 11:39:32][WARN]mode==1,message len=1308,len>fec_mtu,fec_mtu=1250,packet may not be delivered
Sat Sep 19 11:39:32 2020 daemon.info udpspeeder[28917]: [2020-09-19 11:39:32][WARN]mode==1,message len=1388,len>fec_mtu,fec_mtu=1250,packet may not be delivered

To me they seem to be harmless warnings, as there is no packet loss even with large-sized pings:

# ping -s 17000 10.48.0.1
PING 10.48.0.1 (10.48.0.1): 17000 data bytes
17008 bytes from 10.48.0.1: seq=0 ttl=64 time=45.369 ms
17008 bytes from 10.48.0.1: seq=1 ttl=64 time=43.933 ms
17008 bytes from 10.48.0.1: seq=2 ttl=64 time=52.481 ms
17008 bytes from 10.48.0.1: seq=3 ttl=64 time=49.454 ms
17008 bytes from 10.48.0.1: seq=4 ttl=64 time=42.931 ms
17008 bytes from 10.48.0.1: seq=5 ttl=64 time=45.226 ms
17008 bytes from 10.48.0.1: seq=6 ttl=64 time=49.939 ms
17008 bytes from 10.48.0.1: seq=7 ttl=64 time=46.956 ms
17008 bytes from 10.48.0.1: seq=8 ttl=64 time=47.953 ms
17008 bytes from 10.48.0.1: seq=9 ttl=64 time=43.969 ms
17008 bytes from 10.48.0.1: seq=10 ttl=64 time=44.901 ms
17008 bytes from 10.48.0.1: seq=11 ttl=64 time=45.870 ms
17008 bytes from 10.48.0.1: seq=12 ttl=64 time=43.068 ms
17008 bytes from 10.48.0.1: seq=13 ttl=64 time=49.514 ms
17008 bytes from 10.48.0.1: seq=14 ttl=64 time=44.771 ms
17008 bytes from 10.48.0.1: seq=15 ttl=64 time=54.823 ms
17008 bytes from 10.48.0.1: seq=16 ttl=64 time=46.831 ms
17008 bytes from 10.48.0.1: seq=17 ttl=64 time=46.595 ms
^C

I have the following OpenVPN configuration on Tunnel 1 (I can't use fragment, as the OpenVPN server is serving 70 other spokes):

mssfix 1200
sndbuf 2000000
rcvbuf 2000000
txqueuelen 4000
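
A hedged note on the len>fec_mtu warnings above (my addition): mssfix only caps the TCP MSS, so other packets (for example OpenVPN control traffic) can still exceed the fec_mtu of 1250 reported in the warning. If the underlying path MTU leaves room for the UDPspeeder, UDP and IP overhead, one possible remedy is to raise --mtu on both UDPspeeder ends, e.g. --mtu 1400; otherwise, reducing the size of the packets OpenVPN emits should also silence them.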

The very high latency on the other link can be attributed to the radio link, which was increasing latency under load; on the day I posted my query, the link was in bad shape.

The bandwidth is not guaranteed due to radio issues and thus any results on this link may not be deterministic.

I have yet to perform the test with the high redundancy of 10:30. I would prefer to do it on the better link, when I can take downtime.