townsend2010 closed this issue 6 years ago.
I get comparable performance using an OpenSSH client once we match the cipher and MAC used by our internal libssh-based client, that is, ssh -c aes256-ctr and -m hmac-sha2-256. I get:
~31MB/s writes
~42MB/s reads

For comparison, our libssh-based client results in:
~35MB/s writes
~45MB/s reads
However, OpenSSH supports chacha20-poly1305, which libssh lacks. When OpenSSH negotiates that cipher/MAC, performance is much better:
~61MB/s writes
~122MB/s reads
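For reference, a minimal sketch of how the two configurations can be forced on the OpenSSH side; user@host and the paths are placeholders, not necessarily the exact invocations behind the numbers above:

# pin the algorithms our libssh-based client uses
$ ssh -c aes256-ctr -m hmac-sha2-256 user@host
$ sshfs -o Ciphers=aes256-ctr -o MACs=hmac-sha2-256 user@host:/remote/path /mnt/point

# force chacha20-poly1305; it is an AEAD cipher, so no separate MAC is negotiated
$ ssh -c chacha20-poly1305@openssh.com user@host
$ sshfs -o Ciphers=chacha20-poly1305@openssh.com user@host:/remote/path /mnt/point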
There was a recent patchset sent to the libssh mailing list to add chacha20-poly1305 support. However, it seems libssh's maintainer already had a branch from 3 years ago adding support as well, which hasn't been merged.
Applying that recent patchset (since it's based on a more recent version of libssh's master branch), I get the following results:
~79MB/s writes
~144MB/s reads
Ohhhh, I say we add chacha20-poly1305 support to our fork and then pester the maintainers on the list to merge.
Did you do anything special to use chacha20-poly1305, like having to set options for our ssh session or pass different options to sshfs?
When reading and writing to/from a mounted directory, the performance is just plain awful.
On a fully optimized multipass build, using iozone for sequential read/write tests on a 2GB file, we get:
$ iozone -i 0 -r 64k -s 2G -w -f iozone.tmp
~29MB/s for writes

$ iozone -i 1 -r 64k -s 2G -f iozone.tmp
~34MB/s for reads

Based on before-and-after testing, the recent forced flush() on writes only had a minimal impact of ~2MB/s less in performance.
Also, I ran callgrind on multipassd when doing file operations on a mount and observed that almost all of the time is spent in sha256_block_data_order() and AES_encrypt().
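For anyone wanting to reproduce that profile, something along these lines should work; the multipassd path and the exact invocation are assumptions, not the precise steps used here:

# stop the running daemon first, then launch it under callgrind (binary path is a placeholder)
$ sudo valgrind --tool=callgrind --trace-children=yes /path/to/multipassd
# exercise the mount, shut the daemon down, then summarise the hot functions
$ callgrind_annotate callgrind.out.<pid>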