adrienverge / openfortivpn

Client for PPP+TLS VPN tunnel services
GNU General Public License v3.0
2.67k stars · 319 forks

Limited download speed on MacOS Mojave #428

Closed: jeduardo closed this issue 3 years ago

jeduardo commented 5 years ago

When I download files from a server behind the VPN using openfortivpn, my download speed is capped at around 2.5 MB/s. When I perform the same download with the official Fortinet client, I can saturate my connection (around 10 MB/s).

Any idea what might be happening there? The problem is reproducible on different macOS laptops when downloading the same files.

zez3 commented 4 years ago

The FCT loads its own kext on macOS


but it also uses libcrypto.1.1.dylib and libssl.1.1.dylib


javerous commented 4 years ago

@zez3 Yes, I didn't authorize the kext on my side (when macOS asks the user to allow the load), so I'm not sure what it's used for.

Anyway, I'm not sure they do anything with the hardware that Apple doesn't already do, at least for TLS.

mrbaseman commented 4 years ago

With FortiClient the MTU is 1500:

$ ifconfig ppp0
ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
[...]
$ 

That's an interesting observation, especially since you show that a 1501-byte ping packet does not pass, and there is the overhead of an additional TCP stream wrapped in a TLS-encrypted data stream. That's why the MTU on the tunnel interface is normally reduced by the header size.

lizell commented 4 years ago

I have noticed that for larger files it starts with a really capped bandwidth (10x), but speeds up after 3-5s. This does not happen with the commercial GUI version.

zez3 commented 4 years ago

> commercial GUI version

The VPN part of FCT is free of charge: proprietary, but free.

zez3 commented 4 years ago

I did some further tests on my Forti just to rule this one out:

> the official client chooses low-security ciphers which are 8x quicker than the defaults used by OpenSSL / macOS APIs, even considering hardware optimization.

But I have strong doubts about this.

So openfortivpn negotiated Cipher Suite: TLS_AES_256_GCM_SHA384


and the official proprietary FCT initially tried to negotiate Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, but eventually settled on the same Cipher Suite: TLS_AES_256_GCM_SHA384

I can exclude this based on my tests, but perhaps someone who has this speed issue should test it too. You need Wireshark (with the BPF capture helper) or tcpdump installed.

Perhaps I should also state that I ban the following ciphers on my Forti: 3DES AESGMC CAMELIA RSA SHA1 and STATIC

and that I use FCT 6.2.4 (versions older than 6.2.3 had different bugs, including cipher negotiation issues)

DimitriPapadopoulos commented 4 years ago

I have noticed this part of the code: https://github.com/adrienverge/openfortivpn/blob/cfcc420f2aaeceab353319bd2db9d12c27958448/src/io.c#L613-L627

Please bear in mind I know close to nothing about network performance; I've just read a couple of online articles:

TCP_NODELAY is supposed to be efficient, especially for large downloads. Still, it might be worth investigating performance with and without TCP_NODELAY, or with TCP_NOPUSH on macOS.

DimitriPapadopoulos commented 4 years ago

I have also read this online article:

I know openfortivpn links against the OpenSSL library from Homebrew, not the LibreSSL library shipped by Apple. I also understand the above article refers to Apple's LibreSSL. Therefore the speed improvement between macOS 10.14.4 and 10.14.5 is probably not relevant in our case. Nevertheless it would be worth:

zez3 commented 4 years ago

It took me some time to gather this, but here it is. I did a lot more tests comparing openfortivpn with the new official Linux FortiClient EMS SSL VPN implementation (there is no free version yet, you need a TAC account). All tests were done against an FGT 3960E in production. One slight improvement I saw was after I had to restart the sslvpnd daemon, because httpsd had also crashed with a segfault.

I always used iperf with different run times (10, 20, 40 seconds) and kept the best speed. I repeated the tests a few times at different hours over the course of 3 days.

On Linux I monitored the /etc/pppd directory to see who is using it, and it seems that FCT does not use pppd. Most probably they wrote their own pppd/HDLC flavor. The same goes for the Mac.

On Linux they use libgcrypt:

$ ldd /opt/forticlient/fortivpn
    linux-vdso.so.1 =>  (0x00007fff37dd1000)
    libsecret-1.so.0 => /usr/lib/x86_64-linux-gnu/libsecret-1.so.0 (0x00007fa0dfb06000)
    libglib-2.0.so.0 => /lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x00007fa0df7f5000)
    libanl.so.1 => /lib/x86_64-linux-gnu/libanl.so.1 (0x00007fa0df5f1000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fa0df3ed000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fa0df1d0000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fa0dee06000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fa0dfd55000)
    libgcrypt.so.20 => /lib/x86_64-linux-gnu/libgcrypt.so.20 (0x00007fa0deb25000)
    libgio-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0 (0x00007fa0de79d000)
    libgobject-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0 (0x00007fa0de54a000)
    libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007fa0de2da000)
    libgpg-error.so.0 => /lib/x86_64-linux-gnu/libgpg-error.so.0 (0x00007fa0de0c6000)
    libgmodule-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgmodule-2.0.so.0 (0x00007fa0ddec2000)
    libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fa0ddca8000)
    libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007fa0dda86000)
    libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007fa0dd86b000)
    libffi.so.6 => /usr/lib/x86_64-linux-gnu/libffi.so.6 (0x00007fa0dd663000)
$ readelf -d /opt/forticlient/fortivpn | grep 'NEEDED'
 0x0000000000000001 (NEEDED)             Shared library: [libsecret-1.so.0]
 0x0000000000000001 (NEEDED)             Shared library: [libglib-2.0.so.0]
 0x0000000000000001 (NEEDED)             Shared library: [libanl.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libdl.so.2]
 0x0000000000000001 (NEEDED)             Shared library: [libpthread.so.0]
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
 0x0000000000000001 (NEEDED)             Shared library: [ld-linux-x86-64.so.2]

Speed tests: on EOL Ubuntu 16.04.6 with the official Linux FCT 6.2.4 I always got Negotiated Cipher Suite: TLS_AES_256_GCM_SHA384 (0x1302), libgcrypt20 version 1.6.5-2ubuntu0.6:

    myuser@UbuVirt:~$ iperf -c myserver -P 10 -m | tail -n 2
    [  9] MSS size 1348 bytes (MTU 1388 bytes, unknown interface)
    [SUM]  0.0-10.1 sec   371 MBytes   360 Mbits/sec

    vpn   Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:172.x.x.x  P-t-P:x.x.x.x  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1400  Metric:1
          RX packets:88030 errors:0 dropped:0 overruns:0 frame:0
          TX packets:251558 errors:0 dropped:573 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:4672048 (4.6 MB)  TX bytes:351917510 (351.9 MB)

With openfortivpn 1.3.0, Negotiated Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (0xc028), OpenSSL 1.0.2g 1 Mar 2016 (I tried to force --cipher-list=TLS_AES_256_GCM_SHA384, but that is not possible with the default old OpenSSL version):

    ppp0  Link encap:Point-to-Point Protocol
          inet addr:172.x.x.x  P-t-P:1.1.1.1  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1354  Metric:1
          RX packets:211482 errors:0 dropped:0 overruns:0 frame:0
          TX packets:451684 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:3
          RX bytes:11069791 (11.0 MB)  TX bytes:611136178 (611.1 MB)

    myuser@UbuVirt:~$ iperf -c myserver -P 10 -m | tail -n 2
    [  8] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
    [SUM]  0.0-10.3 sec   158 MBytes   172 Mbits/sec

So about half the speed with the old OpenSSL and old openfortivpn.

Next, on Debian (Kali rolling) with openfortivpn 1.13.3, Negotiated Cipher Suite: TLS_AES_256_GCM_SHA384 (0x1302), OpenSSL 1.1.1g 21 Apr 2020 (I tried to force a downgrade with --cipher-list=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, but this OpenSSL version has dropped CBC, which is also no longer considered secure):

    ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1354
          inet 172.x.x.x  netmask 255.255.255.255  destination 192.0.2.1
          ppp  txqueuelen 3  (Point-to-Point Protocol)
          RX packets 180797  bytes 12531750 (11.9 MiB)
          RX errors 0  dropped 0  overruns 0  frame 0
          TX packets 593697  bytes 801637662 (764.5 MiB)
          TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    root@kali:~# iperf -c myserv -P 10 -m | tail -n 2
    [  7] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
    [SUM]  0.0-10.4 sec   364 MBytes   293 Mbits/sec

With the official Linux FCT 6.2.4 I again got Negotiated Cipher Suite: TLS_AES_256_GCM_SHA384 (0x1302), libgcrypt20:amd64 1.8.5-5:

    vpn: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1400
          inet 172.x.x.x  netmask 255.255.255.255  destination 172.x.x.x
          unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
          RX packets 318104  bytes 19857725 (18.9 MiB)
          RX errors 0  dropped 0  overruns 0  frame 0
          TX packets 1339379  bytes 1872842555 (1.7 GiB)
          TX errors 0  dropped 6751  overruns 0  carrier 0  collisions 0

    root@kali:~# iperf -c myserv -P 10 -m | tail -n 2
    [  4] MSS size 1348 bytes (MTU 1388 bytes, unknown interface)
    [SUM]  0.0-10.1 sec   578 MBytes   478 Mbits/sec

In the best case I got ~500 Mbps.

I also did some tests on Scientific Linux release 7.8 (a Red Hat based distro) with libgcrypt.x86_64 1.5.3-14.el7, OpenSSL 1.0.2k-fips 26 Jan 2017, forticlient-6.2.6.0356-1.el7.centos.x86_64 (the server/CLI version, because the GUI version did not work) and openfortivpn-1.13.3-1.el7.x86_64, with pretty much the same values: ~280 Mbits/sec, and ~500 Mbps with the FCT.

All my test VMs had 4 CPUs assigned and the NIC in bridge mode, running on the same host machine. Without any VPN connected, iperf was close to my 1 Gbps NIC speed. I know I am not comparing apples with apples here, these being different libraries (OpenSSL vs libgcrypt) and different versions, but I guess old library versions do have some impact on speed. I still have to do some application profiling on Linux, but KCachegrind/QCachegrind with Valgrind looks a bit unintuitive compared to Xcode Instruments. Or I just need more time to learn how to use it.

On my physical iMac (2013 model) I get pretty much the same speed with both VPN clients, on macOS Catalina 10.15.4 (19E287).

FCT 6.2.6.737. Interestingly enough, on the Mac they chose to use OpenSSL:

$ otool -L /Library/Application\ Support/Fortinet/FortiClient/bin/sslvpnd 
/Library/Application Support/Fortinet/FortiClient/bin/sslvpnd:
    /usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 800.7.0)
    /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit (compatibility version 1.0.0, current version 275.0.0)
    /usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)
    /usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.11)
    /Library/Application Support/Fortinet/FortiClient/bin/libcrypto.1.1.dylib (compatibility version 1.1.0, current version 1.1.0)
    /Library/Application Support/Fortinet/FortiClient/bin/libssl.1.1.dylib (compatibility version 1.1.0, current version 1.1.0)
    /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 1673.126.0)
    /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration (compatibility version 1.0.0, current version 1061.40.2)
    /System/Library/Frameworks/Carbon.framework/Versions/A/Carbon (compatibility version 2.0.0, current version 162.0.0)
    /System/Library/Frameworks/Security.framework/Versions/A/Security (compatibility version 1.0.0, current version 59306.41.2)
    /System/Library/Frameworks/Cocoa.framework/Versions/A/Cocoa (compatibility version 1.0.0, current version 23.0.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1281.0.0)
    /System/Library/Frameworks/CFNetwork.framework/Versions/A/CFNetwork (compatibility version 1.0.0, current version 0.0.0)
    /System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices (compatibility version 1.0.0, current version 1069.11.0)
    /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation (compatibility version 300.0.0, current version 1673.126.0)
    /usr/lib/libobjc.A.dylib (compatibility version 1.0.0, current version 228.0.0)

Let's try to see the version:

mac:~# strings /Library/Application\ Support/Fortinet/FortiClient/bin/libssl.1.1.dylib | grep 1.1
...
OpenSSL 1.1.1b  26 Feb 2019
mac:~# strings /Library/Application\ Support/Fortinet/FortiClient/bin/libcrypto.1.1.dylib |  grep "1.1"
...
OpenSSL 1.1.1b  26 Feb 2019

    ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1354 index 14
          eflags=1002080<TXSTART,NOAUTOIPV6LL,ECN_ENABLE>
          inet 172.x.x.x --> 169.254.38.179 netmask 0xffff0000
          state availability: 0 (true)
          scheduler: FQ_CODEL
          link rate: 230.40 Kbps
          qosmarking enabled: no mode: none
          low power mode: disabled
          multi layer packet logging (mpklog): disabled

    imac:$ iperf -c myserv -P 10 -m -t 20 | tail -n 2
    [ 12] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
    [SUM]  0.0-20.1 sec   792 MBytes   369 Mbits/sec

FCT_xcode_Instruments.txt

openfortivpn 1.13.2 using OpenSSL 1.0.1t 3 May 2016:

    imac$ ifconfig -v ppp0
    ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1354 index 14
          eflags=1002080<TXSTART,NOAUTOIPV6LL,ECN_ENABLE>
          inet 172.x.x.x --> 192.0.2.1 netmask 0xffff0000
          state availability: 0 (true)
          scheduler: FQ_CODEL
          link rate: 115.20 Kbps
          qosmarking enabled: no mode: none
          low power mode: disabled
          multi layer packet logging (mpklog): disabled

    $ iperf -c 130.92.9.40 -P 10 -t 40 -m | tail -n 2
    [ 12] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
    [SUM]  0.0-40.0 sec  1.61 GBytes   346 Mbits/sec


openfortivpn_xcode_instruments.txt

From my observations there is no big difference here on my Catalina. Perhaps it was the case with older versions, as @DimitriPapadopoulos said here: https://github.com/adrienverge/openfortivpn/issues/428#issuecomment-614808435. Both VPN clients use HDLC framing, which I don't even know is needed. Can we not drop this L2 encapsulation altogether and point the VPN routes at a different loopback-like/virtual Ethernet interface? Like so: https://stackoverflow.com/questions/87442/virtual-network-interface-in-mac-os-x

Haarolean commented 3 years ago

This issue affects me and a bunch of my colleagues who use openfortivpn. Speeds are extremely slow, around 200 KB/s.

zez3 commented 3 years ago

> This issue affects me and a bunch of my colleagues who use openfortivpn. Speeds are extremely slow, around 200 KB/s.

What you could try is to run some tests directly from the FGT. See: https://weberblog.net/iperf3-on-a-fortigate/

Haarolean commented 3 years ago

> This issue affects me and a bunch of my colleagues who use openfortivpn. Speeds are extremely slow, around 200 KB/s.

> What you could try is to run some tests directly from the FGT. See: https://weberblog.net/iperf3-on-a-fortigate/

That's difficult, since I'm just a user and have no access to Forti hardware at all. Getting approval to run iperf there would be nontrivial, and it wouldn't solve anything anyway, just confirm the issue. There are some serious issues with openfortivpn that are not present in the official client.

DimitriPapadopoulos commented 3 years ago

Indeed, these are known issues. We just need someone with a Mac to address them, or at least find a possible explanation for these speed issues.

wiremangr commented 3 years ago

Please check lines 233 to 247 in the file tunnel.c. I changed line 235, which defines the ppp speed, to "20000000" instead of "115200" and recompiled. The difference is noticeable in the VPN during RDP sessions, as the system responds much faster. Maybe it has something to do with the internals of macOS packet queuing.

I have noticed that the official FortiClient, while connected on macOS Catalina, reports a link rate of 230.40 Kbps with ifconfig -v ppp0.

After the change I get a rate of 20.00 Mbps with ifconfig, and the responsiveness of the remote system is much better. Below is the code segment I tried, with the speed change, in the file tunnel.c:

    static const char *const v[] = {
            ppp_path,
            //"115200", // speed
            "20000000",
            ":192.0.2.1", // <local_IP_address>:<remote_IP_address>
            "noipdefault",
            "noaccomp",
            "noauth",
            "default-asyncmap",
            "nopcomp",
            "receive-all",
            "nodefaultroute",
            "nodetach",
            "lcp-max-configure", "40",
            "mru", "1354"
    };

Testing was done with macOS Catalina 10.15.7 on a 50 Mbps DSL line.

wiremangr commented 3 years ago

Update to the previous post, with results from downloading a 600 MB file from a remote system with sftp via openfortivpn:

Using speed 115200, the sftp download rate is around 642.2 KB/s. Using speed 20000000, the sftp download maxed out my DSL line at 5.2 MB/s.

Testing was done between the same local and remote system and the same vpn gateway.

DimitriPapadopoulos commented 3 years ago

Strange; I'm certain @mrbaseman had already tried changing the speed option in the past, without effect.

By the way, we cannot change this speed option on Linux in the same way because, as written in the man page:

An option that is a decimal number is taken as the desired baud rate for the serial device. On systems such as 4.4BSD and NetBSD, any speed can be specified. Other systems (e.g. Linux, SunOS) only support the commonly-used baud rates.

Therefore, we will guard this change with an #ifdef. But then, what happens if you remove the speed option altogether on macOS?

wiremangr commented 3 years ago

If I remove the speed option, the tunnel cannot be established: pppd returns unrecognized option '' and openfortivpn shows WARN: read returned 0 until it is stopped. It seems that specifying the speed is mandatory.

I ran some more tests today with different speeds. It seems that when setting the speed to 230400 or higher, the responsiveness of RDP and the download speeds are greatly increased. It may help if the default setting of 115200 is increased to 230400 or 460800, if that is compatible with other systems too.

Today's test results with different speeds:

    ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1354 index 13
          eflags=1002080<TXSTART,NOAUTOIPV6LL,ECN_ENABLE>
          inet 10.x.x.x --> 192.0.2.1 netmask 0xffffff00
          state availability: 0 (true)
          scheduler: FQ_CODEL
          link rate: 3.07 Mbps
          qosmarking enabled: no mode: none
          low power mode: disabled
          multi layer packet logging (mpklog): disabled

openfortivpn 3072000 -> 5.1MB/s to 5.5MB/s sftp download

    ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1354 index 13
          eflags=1002080<TXSTART,NOAUTOIPV6LL,ECN_ENABLE>
          inet 10.x.x.x --> 192.0.2.1 netmask 0xffffff00
          state availability: 0 (true)
          scheduler: FQ_CODEL
          link rate: 460.80 Kbps
          qosmarking enabled: no mode: none
          low power mode: disabled
          multi layer packet logging (mpklog): disabled

openfortivpn 460800 -> 5.1MB/s to 5.5MB/s sftp download

    ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1354 index 13
          eflags=1002080<TXSTART,NOAUTOIPV6LL,ECN_ENABLE>
          inet 10.x.x.x --> 192.0.2.1 netmask 0xffffff00
          state availability: 0 (true)
          scheduler: FQ_CODEL
          link rate: 230.40 Kbps
          qosmarking enabled: no mode: none
          low power mode: disabled
          multi layer packet logging (mpklog): disabled

openfortivpn 230400 -> 5.1MB/s to 5.5MB/s sftp download

    ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1354 index 13
          eflags=1002080<TXSTART,NOAUTOIPV6LL,ECN_ENABLE>
          inet 10.x.x.x --> 192.0.2.1 netmask 0xffffff00
          state availability: 0 (true)
          scheduler: FQ_CODEL
          link rate: 115.20 Kbps
          qosmarking enabled: no mode: none
          low power mode: disabled
          multi layer packet logging (mpklog): disabled

openfortivpn 115200 -> 622.5KB/s to 640KB/s sftp download

DimitriPapadopoulos commented 3 years ago

OK, I do seem to recall the speed option isn't really optional. It has to be there.

On Linux I thought the highest available baud rate for consoles was 115200, but higher baud rates are available for other serial devices, as listed in <asm-generic/termbits.h>. For example on CentOS 6:

[...]
#define  B50    0000001
#define  B75    0000002
#define  B110   0000003
#define  B134   0000004
#define  B150   0000005
#define  B200   0000006
#define  B300   0000007
#define  B600   0000010
#define  B1200  0000011
#define  B1800  0000012
#define  B2400  0000013
#define  B4800  0000014
#define  B9600  0000015
#define  B19200 0000016
#define  B38400 0000017
[...]
#define    B57600 0010001
#define   B115200 0010002
#define   B230400 0010003
#define   B460800 0010004
#define   B500000 0010005
#define   B576000 0010006
#define   B921600 0010007
#define  B1000000 0010010
#define  B1152000 0010011
#define  B1500000 0010012
#define  B2000000 0010013
#define  B2500000 0010014
#define  B3000000 0010015
#define  B3500000 0010016
#define  B4000000 0010017
[...]

The baud rate passed to pppd does not seem to be taken into account on Linux, or at least it does not limit the speed. We could perhaps use 20000000, except I believe pppd expects only predefined values such as 9600, 19200, 38400, 57600, 115200, 230400, 460800. I'll double-check what's acceptable on Linux.

Depending on acceptable values of speed on Linux, we could use 20000000 or even higher values on macOS.

DimitriPapadopoulos commented 3 years ago

It looks like not only does 4000000 work, but so does 20000000! At least that's the case with recent Linux distributions. Even 2147483647 or 9223372036854775807 work. I find this disturbing because I do see code that checks valid speeds in pppd: https://github.com/paulusmack/ppp/blob/ad3937a/pppd/sys-linux.c#L796-L943

But then, if it works, who cares? Perhaps the above code is not in the execution path in the absence of a real serial port.

DimitriPapadopoulos commented 3 years ago

I suggest we use 2147483647 (the value of INT_MAX on 32-bit systems), as pppd does not seem to enforce baud rates in this use case. Indeed, specific baud rates are only enforced in set_up_tty(), which I suspect is not called in this use case, as we shouldn't need to "set up the serial port".

rkirkpat commented 3 years ago

I was testing openfortivpn today on my macOS 10.14.6 (Mojave) system and hit this issue with v1.15.0 installed via Homebrew. With a 40 Mbps (down) DSL connection, I was only getting about 600 KB/s on an scp transfer over the VPN, compared to nearly 4 MB/s with Fortinet's client. I cloned the openfortivpn git repo, switched to the v1.15.0 tag, applied the fix proposed above to src/tunnel.c, rebuilt, and re-tested. I was now indeed getting 4 MB/s over the openfortivpn session! So yes, the fix works!

Note: I used the v1.15.0 tag because building master resulted in a getsockopt error. I will open another issue about that shortly.

DimitriPapadopoulos commented 3 years ago

@zez3 Does patch #820 help?

Haarolean commented 3 years ago

Thank you guys for fixing this! Much appreciated. May I ask when the next release containing this fix will be published?

DimitriPapadopoulos commented 3 years ago

@Haarolean Not certain yet about the release. I'd like to see #826 fixed first, which will probably require a few days of work.

Have you been able to test this change (you need to build openfortivpn for that)? If so, would you be able to test whether baud rates of 230400, 576000 and 2147483647 result in different speeds?

Meroje commented 3 years ago

Hi, I ran tests on macOS 10.15.7. I'm not too sure about the FortiGate, but it is on a 10G pipe.

- baseline: ![Screenshot_2021-01-14 Internet Speed Test - Measure Latency Jitter Cloudflare](https://user-images.githubusercontent.com/304101/104580393-85b34e00-565d-11eb-9de6-ba2d4930b98a.png) (a little lower than expected; I can get up to 890/660 if testing across the city)
- openfortivpn 1.15.0: ![Screenshot_2021-01-14 Internet Speed Test - Measure Latency Jitter Cloudflare(3)](https://user-images.githubusercontent.com/304101/104580353-7502d800-565d-11eb-8f3c-5e6f3eda7001.png)
- openfortivpn HEAD-b123e99: ![Screenshot_2021-01-14 Internet Speed Test - Measure Latency Jitter Cloudflare(1)](https://user-images.githubusercontent.com/304101/104580646-cd39da00-565d-11eb-864f-a583e759947c.png)
- ipsec (using the built-in macOS client): ![Screenshot_2021-01-14 Internet Speed Test - Measure Latency Jitter Cloudflare(2)](https://user-images.githubusercontent.com/304101/104580703-df1b7d00-565d-11eb-9ed2-d9e3f187eb35.png)

Changing baud rates didn't yield any change to the results.

Haarolean commented 3 years ago

@DimitriPapadopoulos I haven't tested the changes because I wanted to get them from brew first. Not being very familiar with C/C++, I'm not quite sure how to compile it so that it lands only in the brew directory. The README states that I should run ./configure --prefix=/usr/local --sysconfdir=/etc; do I have to replace both options with brew subdirectories, given that my brew binaries are installed in /opt/brew instead of /usr/local?

P.S. I could just wait for the release if Meroje's tests are enough for you.
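For reference, a plausible (untested) invocation for building into Homebrew's tree; this assumes `brew --prefix` resolves to the Homebrew root on the machine in question, whatever that path is:

```shell
# Hypothetical sketch: point both the install prefix and the config
# directory at Homebrew's tree instead of /usr/local and /etc.
./configure --prefix="$(brew --prefix)" --sysconfdir="$(brew --prefix)/etc"
make
make install
```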

DimitriPapadopoulos commented 3 years ago

@Meroje Thank you so much for all these tests. Since changing baud rates doesn't change the results, and since 230400 has been reported elsewhere as the value used by FortiClient itself, I believe we have the proper fix.

Would you be able to test VPN SSL, in addition to VPN IPsec, with FortiClient? It has been reported elsewhere that openfortivpn is on par with FortiClient in VPN SSL mode after this change. Hopefully that will be the case for you too.

Meroje commented 3 years ago

It's been a year or two since FortiClient was last usable, which is why we transitioned to openfortivpn before ultimately moving to IPsec.

zez3 commented 3 years ago

https://docs.fortinet.com/document/fortigate/6.0.0/hardware-acceleration/177344/np6-np6xlite-and-np6lite-acceleration

https://docs.fortinet.com/document/fortigate/6.0.0/hardware-acceleration/149012/np6-session-fast-path-requirements

The traffic that can be offloaded, the maximum throughput, and the number of network interfaces supported vary by processor model:

NP7 supports offloading of most IPv4 and IPv6 traffic, IPsec VPN encryption (including Suite B), SSL VPN encryption, GTP traffic, CAPWAP traffic, VXLAN traffic, and multicast traffic. The NP7 has a maximum throughput of 200 Gbps using 2 x 100 Gbps interfaces. For details about the NP7 processor, see NP7 acceleration and for information about FortiGate models with NP7 processors, see FortiGate NP7 architectures.
NP6 supports offloading of most IPv4 and IPv6 traffic, IPsec VPN encryption, CAPWAP traffic, and multicast traffic.

If your FGT has an NP7, it should be able to offload SSL; if you have an NP6, you are stuck with IPsec offloading only.

This could be the reason behind the speed problem; reproduced and confirmed by my tests. Also, some smaller models do not have NPUs (ASICs): https://docs.fortinet.com/document/fortigate/6.2.0/cookbook/661836/vpn-and-asic-offload

IPsec traffic might be processed by the CPU for the following reasons:

Some low end models do not have NPUs.
NPU offloading and CP IPsec traffic processing manually disabled.

Or it could be disabled for whatever reason.

As it seems, SSL VPN traffic is never offloaded or accelerated. It all goes through the CPU and, depending on the crypto suites used in the VPN configuration, may or may not be accelerated by the CPU's AES-NI instructions.

DimitriPapadopoulos commented 3 years ago

@zez3 Thank you for the thorough research above. Next time a user complains, I will ask for the FortiGate model and the FortiOS version.

That said, the question remains why only macOS users complained, not Linux users. Of course, Linux users only have VPN SSL, whether they use openfortivpn or FortiClient, while macOS users have both VPN SSL and IPsec at their disposal. But the initial report by @jeduardo states that both openfortivpn and FortiClient were using SSL VPN in his case. Perhaps macOS users mistakenly believed both openfortivpn and FortiClient use VPN SSL in their case?

Haarolean commented 3 years ago

The latest version has fixed the issue for me; now there's a stable 10 MB/s connection via HTTP and 5 MB/s via SSH. Thank you folks very much for this!