Closed: jeduardo closed this issue 3 years ago.
Does the native Fortinet client use IPsec or SSL VPN?
Hey @DimitriPapadopoulos. Both openfortivpn and the native Fortinet client are using SSL VPN for these connections.
Ah right, I came across IPsec VPN with the native Mac OS client, but it probably refers to the default VPN client built into Mac OS, not the Fortinet client.
Maybe it's better optimization, such as the use of hardware-supported encryption/decryption? If that's the explanation, a follow-up question would be how we can improve the compiler flags that autoconf finds. Maybe we can suggest a few options for it to try if they are supported...
Thread parallelization could be an issue, but openfortivpn is already multithreaded. Doing more than one read thread and one write thread would make things very complicated, especially since one would have to know what the remote side does ;) Maybe locking is a problem. There are a few hacks involving semaphores in the Mac OS X code. Maybe those are not quite optimal.
Maybe we have packet fragmentation. Does the native client on MacOS use a different MTU?
@mrbaseman Wouldn't hardware-supported encryption/decryption depend on the underlying SSL library?
About MTU, see for example:
@DimitriPapadopoulos Oh, yes, you are right. If it's an optimization issue of that kind, one would have to put some effort into the ssl library. But I would be astonished if the openssl from Macports or Homebrew didn't use the hardware acceleration when it's available.
How to test if OpenSSL is hardware-accelerated on macOS: Is there any part of OSX that gets a significant speed boost from Intel AES instructions?
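One quick check on the Mac itself (a sketch; it assumes the openssl binary in PATH is the same library openfortivpn links against, and the OPENSSL_ia32cap mask is the commonly documented one for switching off AES-NI and PCLMULQDQ):

# benchmark AES through the EVP interface (uses AES-NI when the library detects it)
$ openssl speed -evp aes-128-cbc
# rerun with the acceleration bits masked out and compare; a large drop in the
# second run means hardware acceleration was in use in the first one
$ OPENSSL_ia32cap="~0x200000200000000" openssl speed -evp aes-128-cbc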
I also have this problem. There isn't any CPU spike, so could it be a lack of hardware support? I don't know how it could affect performance, but I saw that the routing table with openfortivpn doesn't look like the one I have with forticlientsslvpn.
@jeduardo Out of curiosity, what is the version of the Fortinet agent you're using, and do you have similar routing tables with both clients? I use a little app called forticlientsslvpn, version 4, with copyright 2006-2014.
Edit:
My ratio is poorer. I have 1.7 Mbps vs. 11 Mbps, downloading an 8 MB file. I made captures and I see a few TCP Spurious Retransmissions and full TCP windows that I don't see with the official client. But I don't want to hijack jeduardo's bug report.
I've tried to check it on my MacBook but FortiClient doesn't want to connect :-/ So I cannot compare the MTU and MRU settings.
Anyhow, for the connection made with openfortivpn I have noticed that networksetup -getMTU ...
only works on physical devices, not on the ppp0 device for the tunnel.
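A minimal alternative for reading the tunnel MTU (assuming the PPP interface is ppp0):

# networksetup only handles hardware ports; ifconfig reports the tunnel MTU directly
$ ifconfig ppp0 | grep -i mtu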
But maybe I have found another hint: I stumbled over the speed value that we pass to pppd. It is set to 38400, but maybe 115200 could give better performance? I'm not sure to what extent this baud rate setting of the serial device has a real influence and whether pppd and the server side finally agree on that value. But if so, the ratio of the two standard values promises a factor of 3, which would at least roughly match the observed difference in performance.
The --pppd-call command line parameter can be used to pass a script that contains the desired settings. This can be used for testing different pppd parameters without having to re-compile each time.
I have merged #444 on the current master. Maybe this helps?
Sorry guys, I'm unable to test it anymore as I no longer have access to a fortinet VPN.
The --pppd-call command line parameter can be used to pass a script that contains the desired settings. This can be used for testing different pppd parameters without having to re-compile each time.
@mrbaseman Could you give me some advice on what I could put in this script? I tried with speed 115200 but it gives me:
ERROR: pppd: An error was detected in processing the options given, such as two mutually exclusive options being used.
I also tried openfortivpn --pppd-ipparam='speed 115200' but I saw no difference in speed.
I wanted to avoid compilation. :-)
@JPlanche now that I have merged the pull request you can simply download the current master and compile that one.
The ipparam is something that pppd takes and hands over to the ip-up/ip-down scripts, which are executed when the tunnel interface is brought up or down.
With --pppd-call you can pass an options file (if your pppd implementation supports that), from which the calling options are read. speed 115200 would go in there, but also all the other options needed to bring pppd up.
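For example, something along these lines (just a sketch: the peer name fortitest is a placeholder, and the option list is an assumption; it should mirror whatever openfortivpn normally passes to pppd, see man pppd):

# create /etc/ppp/peers/fortitest, the options file pppd reads for "call fortitest"
$ sudo tee /etc/ppp/peers/fortitest <<'EOF'
115200          # a bare number in a pppd options file sets the line speed
noipdefault
noauth
nodetach
nodefaultroute
usepeerdns
EOF

# then connect with
$ sudo openfortivpn --pppd-call=fortitest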
But as mentioned before, comparing openfortivpn <= 1.9.0 with the current master should already do the job (if this change has any effect at all)
@JPlanche now that I have merged the pull request you can simply download the current master and compile that one.
@mrbaseman I have these brew formulae installed: automake autoconf openssl@1.0 pkg-config.
I exported the LDFLAGS and CPPFLAGS variables.
I have openssl in the PATH:
$ openssl version
OpenSSL 1.0.2s  28 May 2019
But configure fails on:
checking for libssl >= 0.9.8 libcrypto >= 0.9.8... no
configure: error: Cannot find OpenSSL 0.9.8 or higher.
Sorry, I'm not very used to compilation. :-) I did a search but didn't find any clue...
Adrien has tagged the 1.10.0 release over the weekend and it has already been picked up by Homebrew.
So, a simple brew update should install the new release.
About the configure error that you see: the configure script uses pkg-config to check if openssl is installed in a reasonably new version (some enterprise distributions still backport fixes for 0.9.8). But somehow pkg-config fails to find the openssl package. You may need to set
export PKG_CONFIG_PATH="/usr/local/opt/openssl/lib/pkgconfig:$PKG_CONFIG_PATH"
that's at least what my Homebrew recommends when I install openssl, but for some reason I didn't have to do that on my system.
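For reference, the typical build sequence from a source checkout would look roughly like this (a sketch; the OpenSSL path assumes Homebrew on Intel macOS, and autoreconf is only needed when there is no pre-generated configure script):

$ export PKG_CONFIG_PATH="/usr/local/opt/openssl/lib/pkgconfig:$PKG_CONFIG_PATH"
$ autoreconf -i        # only for a git checkout; release tarballs ship configure
$ ./configure --prefix=/usr/local
$ make
$ sudo make install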
So, a simple brew update should install the new release.
Well, that's simpler indeed (for me).
I did a quick test with 1.10.0.
Downloading a 38MB file via HTTPS to /dev/null:
hmm... this is still a factor of more than ten slower with openfortivpn, whereas the forticlient nearly reaches the speed that you see without vpn. So we still have an issue here. Unfortunately, my forticlient doesn't want to connect for some reason, so I can't compare, but I can see the limit of about 2.5 MB/s that was reported by the original author of this issue.
On my Linux laptop I can reach around 40 MB/s, so it's either related to old hardware (which doesn't yet have encryption support in the CPU) or to the OS type. BTW I have tested with scp of a 130 MB file over Gigabit Ethernet. Without VPN (however routed through a small Fortigate) the file is transferred in 1 second, so a speed of 130 MB/s has to be taken with a grain of salt. Anyhow, it all performs much better than on the old Mac that I have.
We have gathered some experience with download speeds through an SSL VPN connection over the last few days here at work. We have a Fortigate 90D and there we see around 2.5 MB/s for scp through SSL VPN on Linux. So we have studied data sheets and forum posts and had a look at the configuration. Two things that limit the speed and are often mentioned are:
In our speed tests I have seen a high CPU load on the Fortigate every time I started a transfer through the SSL VPN. IPsec by nature of the protocol offers better performance. With SSL VPN we send TCP packets encapsulated into an encrypted data stream that again goes over a TCP connection, and a lot of weird things can happen. Well, but IPsec is more complicated to configure, especially when you have customers who are supposed to configure their own client. In that case it should be as easy as possible, and provisioning the configuration to the client also works only with the commercial client and only when they connect in a different manner, e.g. directly to a Wi-Fi that belongs to the "security fabric". So, well, the only option that we have seen is throwing more compute power at the problem, and we have tested a newer model, and a much larger one. Either a similar sized model of a newer series, or within a series a larger model with more cores and perhaps additional ASICs, both can help to address the problem of the limited speed. Comparing openfortivpn against FortiClient, DTLS might have an impact. Maybe the FortiClient opens several threads for the download (openfortivpn has one thread for each communication direction and some others for receiving the configuration etc.). Something which also may impact the speed is an antivirus scan on the Fortigate, which may limit the speed of the first download; when the same file is accessed another time, the URL is already cached for some time as harmless content.
I'm experiencing this too and by a similar factor of slowdown. Let me know if there's any info I can collect to help.
~80 KB/s download with openfortivpn, ~1.8 MB/s with the commercial client
Hi @earltedly
As a first step you could double-check the MTU setting on the ppp interface and see if the commercial client uses the same value (unfortunately it refuses to install on my old MacBook).
Another topic that might impact the performance could be the proper choice of the ciphers. Currently, we use the following default:
cipher-list = HIGH:!aNULL:!kRSA:!PSK:!SRP:!MD5:!RC4
The insecure ones which we exclude probably provide better performance, but that would be a bad choice. Maybe we should exclude some others for performance reasons. OpenSSL has different settings depending on the version of the library, e.g. for 1.0.2 it is
ALL:!EXPORT:!LOW:!aNULL:!eNULL:!SSLv2
or maybe replacing HIGH by MEDIUM in our default setting, namely
MEDIUM:!aNULL:!kRSA:!PSK:!SRP:!MD5:!RC4
could be a good choice. The bottleneck can be on both sides. It may be the Mac (where we could test with openssl speed), but it can just as well be on the Fortigate: if the client suggests secure ciphers and FortiOS chooses one which is not supported in hardware, then we have the situation where the client settings have a large impact on the system load on the remote side.
So, some performance numbers for different ciphers would be good input.
openssl ciphers DEFAULT gives a long list of known ciphers. Well, one has to find the common set of client and server, but I would bet that this is still a lengthy list.
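A rough way to gather such numbers (a sketch; the host and file are placeholders, and the per-cipher step assumes the cipher-list config option quoted above):

# list the ciphers OpenSSL would offer with the current default setting
$ openssl ciphers 'HIGH:!aNULL:!kRSA:!PSK:!SRP:!MD5:!RC4'

# then, for each candidate, pin it in the openfortivpn config, e.g.
#   cipher-list = ECDHE-RSA-AES256-GCM-SHA384
# reconnect, and time the same transfer through the tunnel
$ time scp user@inside-host:/tmp/bigfile /dev/null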
I have done some testing myself. On Linux (when connecting to a big Fortigate) I can reach good throughput rates (measured with scp), and indeed I see a dependence on the ciphers:
DHE-RSA-CAMELLIA256-SHA 12.4MB/s
CAMELLIA256-SHA 17.4MB/s
AES256-GCM-SHA384 19.3MB/s
AES256-SHA 19.3MB/s
AES256-SHA256 19.3MB/s
DHE-RSA-AES256-SHA 19.3MB/s
ECDHE-RSA-AES256-GCM-SHA384 19.3MB/s
ECDHE-RSA-AES256-SHA 19.3MB/s
DHE-RSA-AES256-GCM-SHA384 24.9MB/s
DHE-RSA-AES256-SHA256 24.9MB/s
ECDHE-RSA-AES256-SHA384 29.0MB/s
On OS X El Capitan I must admit that I see very poor rates, too, by default 2.2 MB/s, and I have noticed that when openfortivpn is called without the -v option the terminal is much less busy and the throughput more than doubles to 4.6 MB/s.
I haven't seen much performance improvement when I specify one of the ciphers that look promising on Linux; some of them don't even work. Well, it's a slightly different openssl version, probably configured differently, and also older hardware...
I just needed to say that SHA1, CAMELLIA and RSA should not be used because they are not safe anymore. I am glad that openfortivpn works with higher SHA integrity, because at the moment (6.2.1) neither the Windows nor the macOS official FortiClient is able to negotiate that. Shame on them. Worse, there is no DTLS support for the macOS version. :angry: A DTLS implementation in openfortivpn would be a very desirable addition, and I, on behalf of the institution that I represent, would/could partially finance that.
Well, Openssl classifies the ciphers, and some really bad ones are not activated anymore at compile time. The ones mentioned above currently are in MEDIUM I think (but it also depends on the version you take, the 1.0.2 LTS release is approaching EOL now).
Thanks for your offer of financial support for the DTLS implementation. Unfortunately, we are a very small team of volunteers here in the project, so time for looking into new topics is quite a rare resource. Anyhow, if any volunteer comes up and provides a pull request we are happy to review and test it.
And well... maybe it's even not that much work to implement it, because as mentioned here OpenSSL already supports DTLS, so maybe it's just a question of finding out whether the server supports it too, and switching it on if possible.
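As a very rough first probe (a sketch only: vpn.example.com is a placeholder, it needs an OpenSSL 1.1.0+ s_client, and it assumes the gateway answers a bare DTLS 1.2 ClientHello on the SSL-VPN port without the HTTPS session cookie, which FortiOS may well refuse):

# attempt a plain DTLS 1.2 handshake over UDP against the SSL-VPN port
$ openssl s_client -dtls1_2 -connect vpn.example.com:443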
Hopefully this can help anyone interested in DTLS:
* [DTLS with OpenSSL](https://chris-wood.github.io/2016/05/06/OpenSSL-DTLS.html)
Hi! Just a quick note (I have the same problem).
I replaced the OpenSSL code with the Apple macOS Security framework (Secure Transport, SSLCreateContext etc.): I have the exact same result, so the slowness doesn't seem related to SSL…
I also played a bit with pppd settings, but I don't know it well, and I always have the same results.
Hopefully this can help anyone interested in DTLS:
* [DTLS with OpenSSL](https://chris-wood.github.io/2016/05/06/OpenSSL-DTLS.html)
I understand that this should point out the easy way of implementing DTLS, but perhaps that particular example is not the ideal one to choose. Please see: https://github.com/chris-wood/dtls-test/issues/7
This would be amazing to have in the new world of remote working/VPN. Is there anything I can do to implement this? Follow an example of how to get OpenSSL to do DTLS?
an example on how to get OpenSSL to do DTLS
There are different test implementations/examples on GitHub: https://github.com/search?l=C&o=desc&p=4&q=dtls&s=forks&type=Repositories Most make use of OpenSSL, like this VPN client with single-thread DTLS: https://github.com/juanAngel/tinydtls (the most recent of the 14 forks). Others have tried multipath DTLS: https://github.com/MultipathDTLS/mpdtls-vpn
I was reading this comment here: https://gist.github.com/Jxck/b211a12423622fe304d2370b1f1d30d5#gistcomment-3119541 Then I saw this: https://github.com/stepheny/openssl-dtls-custom-bio Not sure how or whether it solved the BIO issue.
I hope it helps a bit
But apart from those questions about the DTLS API, do we know how it actually works with FortiVPN?
I guess we have to start with a TCP+TLS tunnel between openfortivpn <-> gateway for all HTTP things, as is currently done, and then we need to create a new DTLS tunnel for gateway <-> pppd data exchanges? How is this tunnel authenticated / configured?
Plus, are we sure that DTLS would really solve the problem? Why is there no problem with TCP+TLS on Linux?
@javerous With the HP RGS remote desktop software there are reports of better responsiveness using the DTLS mode of Forti on Windows/Mac, which is the biggest benefit that I'm after.
Outside of just enabling DTLS, is there a possibility that Forti might be doing something unique between the client/server that would require reverse-engineering?
Hard to say what they used on Windows to implement DTLS, because that is the only FCT supporting it. Perhaps they did not use OpenSSL at all. I'll have to hack a bit into that VPN client. What I can say, and Forti always proudly states, is that they respect the IETF standards. So on that matter the server should act according to the DTLS 1.2 standard.
are we sure that DTLS would really solve the problem?
Probably not the speed one. My interest in this is to get UDP's tolerance of connection drops for commuting (train, bus, etc.) users. We should move this discussion to https://github.com/adrienverge/openfortivpn/issues/473
@boberfly @zez3 Okay. So here is what I did: I replaced all the OpenSSL & standard POSIX networking code in ssl.h & tunnel.{h|c} with the new macOS networking API: https://developer.apple.com/documentation/network?language=objc
Basically, you just have to tell this API which server you want to connect to, and send / receive data to / from it; it manages everything else for you.
Switching from TCP/TLS to DTLS is just a matter of switching parameters: https://developer.apple.com/documentation/network/nw_parameters_t?language=objc
nw_parameters_create_secure_tcp
nw_parameters_create_secure_udp
So, my implementation is a bit quick & dirty, but it works well when using "secure_tcp", and doesn't work at all when using "secure_udp".
Which means that either I did something wrong (but it's hard to make a mistake, especially considering it works with TCP), or it's more complex than just switching everything to DTLS…
In fact, I would be surprised if the HTTP "bootstrapping" (login, configuration, etc.) used UDP, but then, if that's not the case, I don't know how to do things, as the "HTTP tunnel" is "converted" to the "VPN tunnel" by sending the /remote/sslvpn-tunnel HTTP request…
Apparently the server I'm playing with supports DTLS (reported by /remote/fortisslvpn_xml), so I don't think that's the issue.
I can attach my changes if any of you is interested, but it's really just a quick PoC: it ignores a lot of configuration things, for example.
@javerous Very interesting! Leaving aside DTLS, are you able to achieve higher speeds when switching to the macOS networking API (the initial subject of this issue)? It would be very helpful indeed if you could compare download speeds with either implementation.
Switching to DTLS is a different issue that can probably be achieved either with the new macOS networking API or with the POSIX API - even though the macOS API might be easier.
@DimitriPapadopoulos So, I didn't strictly measure, but the speed doesn't look better (i.e. it's still far slower than with the official clients).
That being said, it's not really a surprise: I already did some tests some time ago using an older macOS API which works a bit more like OpenSSL (i.e. you have to handle the whole networking part, and this API handles the TLS protocol part). The speed was not better.
See my comment there https://github.com/adrienverge/openfortivpn/issues/428#issuecomment-561135409
In fact, this new macOS Network API is probably slower than the macOS Security API because it's an async API (while the Security API is sync, like OpenSSL), so I had to add some overhead (semaphores, essentially) to make it fit the synchronous code of openfortivpn.
Anyway, as said, it makes me think the problem is not the TLS implementation itself (or hardware optimization, or things like this).
Could it be the way pppd is implemented on macOS? Or the pty? I don't know pppd well enough…
I will attach the two implementations I did (with the new Network API and the old Security API), just in case.
I don't know much about pppd and I don't have a mac. I really don't know, sorry.
Also, googling "slow VPN macOS" I came across a few references to MTU size. Since I don't have a mac I am unable to look into it.
Since I don't have a mac I am unable to look into it.
You don't need Mac hardware per se. I use a VirtualBox VM for my tests and, except for audio, it works perfectly. They even brought the guest tools to the Mac now with version 6, although you have to disable SIP. The latest Catalina update came a few days ago and was applied without any issues. These guys (https://techsviewer.com/install-macos-10-15-catalina-on-virtualbox-on-windows-pc/) provide the vbox image as well.
@javerous How did you test the speed/bandwidth? iperf? There you can play around with the MTU, window size and other stuff.
Funny thing. I just tested on my mac VM and I don't actually see any big difference between FCT and openfortivpn.
[ 9] local 172.30.160.1 port 49342 connected with 130.92.x.x port 5001
[ 13] local 172.30.160.1 port 49334 connected with 130.92.x.x port 5001
[ 16] local 172.30.160.1 port 49333 connected with 130.92.x.x port 5001
[ 8] local 172.30.160.1 port 49335 connected with 130.92.x.x port 5001
[ 17] local 172.30.160.1 port 49339 connected with 130.92.x.x port 5001
[ 12] local 172.30.160.1 port 49338 connected with 130.92.x.x port 5001
[ 11] local 172.30.160.1 port 49336 connected with 130.92.x.x port 5001
[ 10] local 172.30.160.1 port 49337 connected with 130.92.x.x port 5001
[ 6] local 172.30.160.1 port 49340 connected with 130.92.x.x port 5001
[ 7] local 172.30.160.1 port 49341 connected with 130.92.x.x port 5001
[ ID] Interval       Transfer     Bandwidth
[ 10]  0.0-10.0 sec  4.62 MBytes  3.87 Mbits/sec
[ 10] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 9]  0.0-10.1 sec  4.12 MBytes  3.43 Mbits/sec
[ 9] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 16]  0.0-10.1 sec  4.75 MBytes  3.93 Mbits/sec
[ 16] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 8]  0.0-10.2 sec  4.75 MBytes  3.92 Mbits/sec
[ 8] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 11]  0.0-10.2 sec  4.75 MBytes  3.92 Mbits/sec
[ 11] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 6]  0.0-10.2 sec  4.75 MBytes  3.92 Mbits/sec
[ 6] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 13]  0.0-10.2 sec  4.75 MBytes  3.91 Mbits/sec
[ 13] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 12]  0.0-10.2 sec  4.75 MBytes  3.90 Mbits/sec
[ 12] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 7]  0.0-10.2 sec  4.75 MBytes  3.90 Mbits/sec
[ 7] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 17]  0.0-10.3 sec  4.75 MBytes  3.86 Mbits/sec
[ 17] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[SUM]  0.0-10.3 sec  46.8 MBytes  38.0 Mbits/sec
This was with openfortivpn (1.13.2).
And this is with the official FCT(6.2.4):
[ 8] local 172.30.160.1 port 49370 connected with 130.92.x.x port 5001
[ 14] local 172.30.160.1 port 49376 connected with 130.92.x.x port 5001
[ 9] local 172.30.160.1 port 49369 connected with 130.92.x.x port 5001
[ 12] local 172.30.160.1 port 49368 connected with 130.92.x.x port 5001
[ 11] local 172.30.160.1 port 49373 connected with 130.92.x.x port 5001
[ 6] local 172.30.160.1 port 49372 connected with 130.92.x.x port 5001
[ 7] local 172.30.160.1 port 49374 connected with 130.92.x.x port 5001
[ 10] local 172.30.160.1 port 49371 connected with 130.92.x.x port 5001
[ 13] local 172.30.160.1 port 49375 connected with 130.92.x.x port 5001
[ 16] local 172.30.160.1 port 49367 connected with 130.92.x.x port 5001
[ ID] Interval       Transfer     Bandwidth
[ 12]  0.0-10.0 sec  6.50 MBytes  5.44 Mbits/sec
[ 12] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 7]  0.0-10.0 sec  1.38 MBytes  1.15 Mbits/sec
[ 7] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 10]  0.0-10.0 sec  6.50 MBytes  5.44 Mbits/sec
[ 10] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 16]  0.0-10.0 sec  5.50 MBytes  4.60 Mbits/sec
[ 16] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 9]  0.0-10.1 sec  6.25 MBytes  5.21 Mbits/sec
[ 9] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 8]  0.0-10.1 sec  6.25 MBytes  5.19 Mbits/sec
[ 8] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 6]  0.0-10.1 sec  7.62 MBytes  6.33 Mbits/sec
[ 6] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 13]  0.0-10.1 sec  6.25 MBytes  5.18 Mbits/sec
[ 13] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 14]  0.0-10.2 sec  2.00 MBytes  1.65 Mbits/sec
[ 14] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[ 11]  0.0-10.3 sec  768 KBytes  610 Kbits/sec
[ 11] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[SUM]  0.0-10.3 sec  49.0 MBytes  39.8 Mbits/sec
For reference I also did a test without any VPN client, with the MTU at the normal 1500 size. This is the max that my VM can achieve:
[ 14] local 10.0.2.15 port 49439 connected with 130.92.x.x port 5001
[ 13] local 10.0.2.15 port 49438 connected with 130.92.x.x port 5001
[ 12] local 10.0.2.15 port 49431 connected with 130.92.x.x port 5001
[ 9] local 10.0.2.15 port 49433 connected with 130.92.x.x port 5001
[ 10] local 10.0.2.15 port 49434 connected with 130.92.x.x port 5001
[ 15] local 10.0.2.15 port 49432 connected with 130.92.x.x port 5001
[ 8] local 10.0.2.15 port 49430 connected with 130.92.x.x port 5001
[ 7] local 10.0.2.15 port 49436 connected with 130.92.x.x port 5001
[ 11] local 10.0.2.15 port 49437 connected with 130.92.x.x port 5001
[ 6] local 10.0.2.15 port 49435 connected with 130.92.x.x port 5001
[ ID] Interval       Transfer     Bandwidth
[ 14]  0.0-10.9 sec  46.8 MBytes  36.1 Mbits/sec
[ 14] MSS size 1460 bytes (MTU 1500 bytes, ethernet)
[ 13]  0.0-10.8 sec  46.6 MBytes  36.2 Mbits/sec
[ 13] MSS size 1460 bytes (MTU 1500 bytes, ethernet)
[ 12]  0.0-10.8 sec  48.5 MBytes  37.6 Mbits/sec
[ 12] MSS size 1460 bytes (MTU 1500 bytes, ethernet)
[ 9]  0.0-10.8 sec  48.5 MBytes  37.6 Mbits/sec
[ 9] MSS size 1460 bytes (MTU 1500 bytes, ethernet)
[ 10]  0.0-10.8 sec  48.4 MBytes  37.4 Mbits/sec
[ 10] MSS size 1460 bytes (MTU 1500 bytes, ethernet)
[ 15]  0.0-10.8 sec  48.5 MBytes  37.6 Mbits/sec
[ 15] MSS size 1460 bytes (MTU 1500 bytes, ethernet)
[ 8]  0.0-10.8 sec  48.2 MBytes  37.4 Mbits/sec
[ 8] MSS size 1460 bytes (MTU 1500 bytes, ethernet)
[ 7]  0.0-10.8 sec  46.5 MBytes  36.0 Mbits/sec
[ 7] MSS size 1460 bytes (MTU 1500 bytes, ethernet)
[ 11]  0.0-10.8 sec  48.6 MBytes  37.6 Mbits/sec
[ 11] MSS size 1460 bytes (MTU 1500 bytes, ethernet)
[ 6]  0.0-10.8 sec  48.5 MBytes  37.6 Mbits/sec
[ 6] MSS size 1460 bytes (MTU 1500 bytes, ethernet)
[SUM]  0.0-10.9 sec  479 MBytes  370 Mbits/sec
So I suspect the Forti guys used the same API
@zez3 So, on my side, the results I have with openfortivpn vs the official client on a day-to-day basis are based on simple download speed from an HTTP server (not HTTPS), from a physical machine.
I will do an iperf3 test (I used iperf3 before, but it was a bunch of months ago; I don't remember the exact results, but I remember it was about the same as downloads with my browser).
Your results are a bit confusing. Are you using the same iperf -c x.unibe.ch -P 10 -m each time? Are you using TCP window size: 128 KByte (default) each time too? Why is the last result so different?
[Edit] So, I'm not sure I will use 1.13.2, it seems there is something broken. It doesn't ask me for my password anymore, and so the gateway just rejects the connection…
@zez3 So, on my side, the results I have with openfortivpn vs the official client on a day-to-day basis are based on simple download speed from an HTTP server (not HTTPS), from a physical machine.
That x.unibe.ch is our test server, so I am testing from inside to inside via VPN. I could do some HTTP tests if needed.
Your results are a bit confusing. Are you using the same iperf -c x.unibe.ch -P 10 -m each time? Are you using TCP window size: 128 KByte (default) each time too? Why is the last result so different?
The last one is for reference only. It's without any VPN client. Normal routing, inside to inside.
[Edit] So, I'm not sure I will use 1.13.2, it seems there is something broken. It doesn't ask me for my password anymore, and so the gateway just rejects the connection…
Yeah, I had the same issue. There also seems to be this other DNS issue, but it works afterwards: https://github.com/adrienverge/openfortivpn/issues/534
My config (/usr/local/Cellar/openfortivpn/1.13.2/etc/openfortivpn/config) looks like this:
host = univpn.unibe.ch
port = 443
username = my_user
set-dns = 0
pppd-use-peerdns = 1
and I connect with sudo openfortivpn -v
Okay, so I did a test with my company VPN, and still have the same kinds of results…
On a macOS 10.14.6 machine inside our company LAN, iperf3 is running as a server:
iperf3 -s
On a macOS 10.14.6 machine outside of our company LAN:
→ Connected to the company LAN via openfortivpn 1.13.2
$ iperf3 -c <company-machine-ip> -P 10
Connecting to host <company-machine-ip>, port 5201
...
[ ID] Interval Transfer Bandwidth
[ 7] 0.00-10.00 sec 974 KBytes 798 Kbits/sec sender
[ 7] 0.00-10.00 sec 853 KBytes 699 Kbits/sec receiver
[ 9] 0.00-10.00 sec 966 KBytes 791 Kbits/sec sender
[ 9] 0.00-10.00 sec 846 KBytes 693 Kbits/sec receiver
[ 11] 0.00-10.00 sec 978 KBytes 801 Kbits/sec sender
[ 11] 0.00-10.00 sec 857 KBytes 702 Kbits/sec receiver
[ 13] 0.00-10.00 sec 979 KBytes 802 Kbits/sec sender
[ 13] 0.00-10.00 sec 858 KBytes 703 Kbits/sec receiver
[ 15] 0.00-10.00 sec 1012 KBytes 829 Kbits/sec sender
[ 15] 0.00-10.00 sec 893 KBytes 731 Kbits/sec receiver
[ 17] 0.00-10.00 sec 963 KBytes 788 Kbits/sec sender
[ 17] 0.00-10.00 sec 842 KBytes 689 Kbits/sec receiver
[ 19] 0.00-10.00 sec 971 KBytes 796 Kbits/sec sender
[ 19] 0.00-10.00 sec 851 KBytes 697 Kbits/sec receiver
[ 21] 0.00-10.00 sec 990 KBytes 811 Kbits/sec sender
[ 21] 0.00-10.00 sec 870 KBytes 712 Kbits/sec receiver
[ 23] 0.00-10.00 sec 969 KBytes 794 Kbits/sec sender
[ 23] 0.00-10.00 sec 848 KBytes 695 Kbits/sec receiver
[ 25] 0.00-10.00 sec 987 KBytes 808 Kbits/sec sender
[ 25] 0.00-10.00 sec 866 KBytes 709 Kbits/sec receiver
[SUM] 0.00-10.00 sec 9.56 MBytes 8.02 Mbits/sec sender
[SUM] 0.00-10.00 sec 8.38 MBytes 7.03 Mbits/sec receiver
iperf Done.
$ ping <company-machine-ip>
PING <company-machine-ip> (<company-machine-ip>): 56 data bytes
64 bytes from <company-machine-ip>: icmp_seq=0 ttl=63 time=81.703 ms
64 bytes from <company-machine-ip>: icmp_seq=1 ttl=63 time=81.833 ms
64 bytes from <company-machine-ip>: icmp_seq=2 ttl=63 time=81.401 ms
64 bytes from <company-machine-ip>: icmp_seq=3 ttl=63 time=81.479 ms
...
→ Connected to the company LAN via FortiClient VPN 6.2.6.737
$ iperf3 -c <company-machine-ip> -P 10
Connecting to host <company-machine-ip>, port 5201
...
[ ID] Interval Transfer Bandwidth
[ 7] 0.00-10.00 sec 6.97 MBytes 5.85 Mbits/sec sender
[ 7] 0.00-10.00 sec 6.81 MBytes 5.72 Mbits/sec receiver
[ 9] 0.00-10.00 sec 8.78 MBytes 7.37 Mbits/sec sender
[ 9] 0.00-10.00 sec 8.63 MBytes 7.24 Mbits/sec receiver
[ 11] 0.00-10.00 sec 11.6 MBytes 9.71 Mbits/sec sender
[ 11] 0.00-10.00 sec 11.4 MBytes 9.57 Mbits/sec receiver
[ 13] 0.00-10.00 sec 11.8 MBytes 9.91 Mbits/sec sender
[ 13] 0.00-10.00 sec 11.6 MBytes 9.73 Mbits/sec receiver
[ 15] 0.00-10.00 sec 7.67 MBytes 6.43 Mbits/sec sender
[ 15] 0.00-10.00 sec 7.49 MBytes 6.29 Mbits/sec receiver
[ 17] 0.00-10.00 sec 9.44 MBytes 7.91 Mbits/sec sender
[ 17] 0.00-10.00 sec 9.26 MBytes 7.77 Mbits/sec receiver
[ 19] 0.00-10.00 sec 3.37 MBytes 2.82 Mbits/sec sender
[ 19] 0.00-10.00 sec 3.27 MBytes 2.74 Mbits/sec receiver
[ 21] 0.00-10.00 sec 4.18 MBytes 3.50 Mbits/sec sender
[ 21] 0.00-10.00 sec 4.02 MBytes 3.38 Mbits/sec receiver
[ 23] 0.00-10.00 sec 5.75 MBytes 4.82 Mbits/sec sender
[ 23] 0.00-10.00 sec 5.63 MBytes 4.72 Mbits/sec receiver
[ 25] 0.00-10.00 sec 6.10 MBytes 5.12 Mbits/sec sender
[ 25] 0.00-10.00 sec 5.97 MBytes 5.01 Mbits/sec receiver
[SUM] 0.00-10.00 sec 75.6 MBytes 63.5 Mbits/sec sender
[SUM] 0.00-10.00 sec 74.1 MBytes 62.2 Mbits/sec receiver
iperf Done.
$ ping <company-machine-ip>
PING <company-machine-ip> (<company-machine-ip>): 56 data bytes
64 bytes from <company-machine-ip>: icmp_seq=0 ttl=63 time=83.296 ms
64 bytes from <company-machine-ip>: icmp_seq=1 ttl=63 time=84.243 ms
64 bytes from <company-machine-ip>: icmp_seq=2 ttl=63 time=83.690 ms
...
I have the same kind of result by downloading something from a company inner HTTP server.
Something interesting is that connecting each iperf "session" from client to server takes far more time when connected with openfortivpn than with FortiClient (it's like 2-3 seconds with openfortivpn, and immediate with FortiClient). As you can see, it's not a ping problem, so it's as if the TCP handshaking is slowed down for some reason.
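One way to see where those 2-3 seconds go would be to watch the handshakes on the tunnel while iperf3 connects (a sketch; assumes the tunnel interface is ppp0):

# print only SYN packets with inter-packet timing; repeated SYNs with ~1 s gaps
# would point at handshake retransmissions rather than slow data transfer
$ sudo tcpdump -i ppp0 -n -ttt 'tcp[tcpflags] & tcp-syn != 0'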
So well, there is probably a difference between your configuration and ours (I say "ours" as the problem appears for other people too…) which probably explains this difference, but which one… 🤷♂️
[Edit] There is no -m option on my iperf3 version, but it tells me, in verbose mode, that TCP MSS: 1302 (default)
Perhaps it's worth checking MTU sizes along the network link once again?
For example on my Linux machine without VPN the MTU of the Ethernet network interface is 1500:
$ ifconfig enp0s31f6
enp0s31f6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
[...]
$ ping -s $((1500-28)) -D 8.8.8.8 -c 1
PING 8.8.8.8 (8.8.8.8) 1472(1500) bytes of data.
[1585468557.655504] 76 bytes from 8.8.8.8: icmp_seq=1 ttl=55 (truncated)
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 35.282/35.282/35.282/0.000 ms
$
$ ping -s $((1500-28+1)) -D 8.8.8.8 -c 1
PING 8.8.8.8 (8.8.8.8) 1473(1501) bytes of data.
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
$
With openfortivpn connected to our test FortiGate appliance the MTU of the PPP network interface is 1354:
$ ifconfig ppp0
ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1354
[...]
$
$ ping -s $((1354-28)) -D 8.8.8.8 -c 1
PING 8.8.8.8 (8.8.8.8) 1326(1354) bytes of data.
[1585468621.572430] 76 bytes from 8.8.8.8: icmp_seq=1 ttl=54 (truncated)
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 63.614/63.614/63.614/0.000 ms
$
$ ping -s $((1354-28+1)) -D 8.8.8.8 -c 1
PING 8.8.8.8 (8.8.8.8) 1327(1355) bytes of data.
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
$
With FortiClient connected to our test FortiGate appliance the MTU of the PPP network interface is 1354 too, no surprises here:
$ ifconfig ppp0
ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1354
[...]
$
$ ping -s $((1354-28)) -D 8.8.8.8 -c 1
PING 8.8.8.8 (8.8.8.8) 1326(1354) bytes of data.
[1585469135.717319] 76 bytes from 8.8.8.8: icmp_seq=1 ttl=54 (truncated)
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 64.180/64.180/64.180/0.000 ms
$
$ ping -s $((1354-28+1)) -D 8.8.8.8 -c 1
PING 8.8.8.8 (8.8.8.8) 1327(1355) bytes of data.
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
$
Here's something interesting. Just tried with a different FortiGate appliance, the MTU is different between openfortivpn and FortiClient:
With openfortivpn the MTU is 1354:
$ ifconfig ppp0
ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1354
[...]
$
$ ping -s $((1354-28)) -D 8.8.8.8 -c 1
PING 8.8.8.8 (8.8.8.8) 1326(1354) bytes of data.
[1585469715.528629] 76 bytes from 8.8.8.8: icmp_seq=1 ttl=50 (truncated)
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 40.099/40.099/40.099/0.000 ms
$
$ ping -s $((1354-28+1)) -D 8.8.8.8 -c 1
PING 8.8.8.8 (8.8.8.8) 1327(1355) bytes of data.
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
$
With FortiClient the MTU is 1500:
$ ifconfig ppp0
ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500
[...]
$
$ ping -s $((1500-28)) -D 8.8.8.8 -c 1
PING 8.8.8.8 (8.8.8.8) 1472(1500) bytes of data.
[1585469821.801024] 76 bytes from 8.8.8.8: icmp_seq=1 ttl=50 (truncated)
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 41.845/41.845/41.845/0.000 ms
$
$ ping -s $((1500-28+1)) -D 8.8.8.8 -c 1
PING 8.8.8.8 (8.8.8.8) 1473(1501) bytes of data.
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
$
I checked, it's the same there:
ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1354
inet <ip> --> 192.0.2.1 netmask 0xffff0000
Even if I know that MTU can have a "tangible" effect, I don't think that 1354 → 1500 can really explain a ~x8 bandwidth difference, right?
I checked, it's the same there:
Same? 1354 with openfortivpn and 1500 with FortiClient?
Even if I know that MTU can have a "tangible" effect, I don't think that 1354 → 1500 can really explain a ~x8 bandwidth difference, right?
No, of course not. But an incorrect MTU might perhaps cause slowness - I think. On the other hand an MTU of 1354 instead of 1500 does not look like a problem - the other way round might be.
My thoughts exactly. Heavy fragmentation would cause trouble on big (giga) data transfers and not on small file sizes like the iperf tests. The bigger the file transfer and duration, the more the fragmentation would be felt. I would personally exclude the MTU size suspicion. In my tests I never saw an x8 difference (FCT vs openfortivpn). Anyway, it should be easy to do a tcpdump/wireshark count on transfers of the same file size and see if fragmentation occurs, as in the sketch below.
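For example (a sketch; en0 is a placeholder for whichever interface carries the tunnel's outer TCP stream):

# count IPv4 packets that are fragments (MF flag set or non-zero fragment offset)
# during a transfer; a non-trivial count would support the fragmentation theory
$ sudo tcpdump -i en0 -n -c 1000 'ip[6:2] & 0x3fff != 0'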
I found some interesting facts about VirtualBox with a guest mac VM (iMac11,3, Processor Speed: 3.1 GHz, 16 GB, Boot ROM Version: VirtualBox, Apple ROM Info: vboxVer_6.1.4 vboxRev_136177).
Bridged mode on the vNIC helps solve the low reference speed that I had and posted above when I was in NAT mode. Now, in bridged mode without any VPN connected, iperf reports an almost line-rate speed (1 Gbps):
TCP window size: 129 KByte (default)
[SUM]  0.0-10.1 sec  1.06 GBytes  906 Mbits/sec
But both of my VPN clients (openfortivpn and FCT) then still report more or less the same speed of around 40 Mbps:
[SUM]  0.0-10.2 sec  50.6 MBytes  41.7 Mbits/sec
Even if I play around with the window size I still cannot reach what I have achieved on a real mac. For example:
$ system_profiler SPHardwareDataType
Hardware:
Hardware Overview:
Model Name: iMac
Model Identifier: iMac14,2
Processor Name: Quad-Core Intel Core i5
Processor Speed: 3.2 GHz
Number of Processors: 1
Total Number of Cores: 4
L2 Cache (per Core): 256 KB
L3 Cache: 6 MB
Memory: 16 GB
Boot ROM Version: 141.0.0.0.0
SMC Version (system): 2.15f7
Serial Number (system): ...
Hardware UUID: ...
With openfortivpn connected:
$ iperf -c x.unibe.ch -P 10 -m
Client connecting to x.unibe.ch, TCP port 5001
TCP window size: 128 KByte (default)
...
[ ID] Interval       Transfer     Bandwidth
[ 12] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
...
[SUM]  0.0-10.1 sec  383 MBytes  319 Mbits/sec
With FCT connected:
iperf -c x.unibe.ch -P 10 -m
Client connecting to x.unibe.ch, TCP port 5001
TCP window size: 128 KByte (default)
...
[ 7] local 172.30.160.11 port 54618 connected with 130.92.x.x port 5001
...
[ ID] Interval       Transfer     Bandwidth
[ 8] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[SUM]  0.0-10.1 sec  447 MBytes  373 Mbits/sec
Other VMs (Linux Ubuntu 16.04 with the old openfortivpn version, which is slower, Kali, Windows) on the same VirtualBox host reach more or less the same.
So I suspect that the CPU instructions needed by both VPN clients for acceleration (e.g. AES-NI, AVX) are not passed on to the mac guest VM. In other users' cases the FCT can probably take advantage of some hardware CPU acceleration (still not sure what FCT uses) while this OpenSSL-based client will not or cannot, so that may be where the big difference (even x8) in speed comes from. Anyway, in my case macOS virtualization is not officially supported nor allowed by Apple outside of their own hardware. One quick check of what the guest actually sees is sketched below.
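A minimal check of the CPU features the macOS guest is exposed to (a sketch; on an Intel Mac these sysctls list the flags the kernel detected, and machdep.cpu.leaf7_features may be absent on older hardware):

# look for AES-NI and AVX/AVX2 in the advertised CPU feature flags
$ sysctl -n machdep.cpu.features machdep.cpu.leaf7_features | tr ' ' '\n' | grep -E 'AES|AVX'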
I must also state that our production boxes are one of the bigger models (3960E), and perhaps, as @mrbaseman said:
So, well, the only option that we have seen is throwing more compute power at the problem, and we have tested a newer model, and a much larger one. Either a similar sized model of a newer series, or within a series a larger model with more cores and perhaps additional ASICs, both can help to address the problem of the limited speed.
it helps to have bigger boxes. Over-provisioning is our DDoS solution anyway.
Does anyone know what FCT needs for VPN acceleration?
https://docs.fortinet.com/document/forticlient/6.2.0/administration-guide/646779/installation-requirements says "computer with Intel processor or equivalent" :( That is kind of vague.
@zez3
So I suspect that the CPU instructions needed by both VPN clients for acceleration (e.g. AES-NI, AVX) are not passed on to the mac guest VM. In other users' cases the FCT can probably take advantage of some hardware CPU acceleration (still not sure what FCT uses) while this OpenSSL-based client will not or cannot, so that may be where the big difference (even x8) in speed comes from.
And that's why I tried replacing OpenSSL with the macOS Security framework and the macOS Network framework. And the results are pretty much the same as with OpenSSL.
If it was not clear: those frameworks are provided by Apple; they are integrated into macOS. I didn't check, but I'm 99.9% sure that if there is a way to hardware-accelerate anything used by TLS, those frameworks use it.
Another possibility, already pointed out by @mrbaseman, would be that the official client chooses low-security ciphers which are x8 quicker than the defaults used by OpenSSL / the macOS APIs, even considering hardware optimization. But I have strong doubts about this.
When I download files from a server behind the VPN using openfortivpn, my download speeds are capped at around 2.5 MB/s. When I perform the same download behind the official Fortinet client, I can maximize my connection speed (around 10 MB/s).
Any ideas what might be happening there? The problem is reproducible on different macOS laptops when downloading the same files.