hjmangalam / parsyncfp

follow-on to parsync (parallel rsync) with better startup perf

How to use weak ssh ciphers like arcfour for lower CPU usage #27

Closed ezako2 closed 5 years ago

ezako2 commented 5 years ago

How can I use ciphers like arcfour in the ssh connection?

I already use this command, and I want to migrate it to parsyncfp:

rsync -aHAXxv --numeric-ids --progress -e "ssh -T -c arcfour -o Compression=no -x" /LOCAL_DIR/ user@xx.xx.xxx.xx:/REMOTE_DIR/

arcfour is a weak cipher, and disabling compression also reduces CPU usage.

REF: https://galaxysd.github.io/20160302/Fastest-Way-Rsync

hjmangalam commented 5 years ago

On Tuesday, October 1, 2019 11:16:40 AM PDT ezako2 wrote:

How can I use ciphers like arcfour in the ssh connection?

AFAIK, arcfour isn't supported in ssh2, which is what most modern Linuxes use. You're welcome to try various other low-overhead ciphers, but in reality the ciphers don't add much to the overhead. Compression adds a lot more, but it can be very useful over low-bandwidth connections.

It depends on what kind of files you're sending, how many, the hop count, the latency, and what the overhead is for the various stages. If you're sending a lot of tiny, highly compressible files over a long distance, it might be more useful to compress the whole thing into a tarchive.gz and send that in a single stream; parsyncfp is better suited to transfers that can be split into multiple parallel streams.
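That single-stream tar approach can be sketched like this (the host and paths are placeholders; the local half just demonstrates the same pipeline without the network):

```shell
# Remote form (illustrative host/paths):
#   tar -C /LOCAL_DIR -czf - . | ssh user@xx.xx.xxx.xx 'tar -C /REMOTE_DIR -xzf -'
# The same pipeline demonstrated locally:
mkdir -p /tmp/src_demo /tmp/dst_demo
echo demo > /tmp/src_demo/file.txt
# One gzip'd tar stream out, unpacked on the other side of the pipe:
tar -C /tmp/src_demo -czf - . | tar -C /tmp/dst_demo -xzf -
cat /tmp/dst_demo/file.txt    # prints "demo"
```

The point is that the per-file rsync protocol overhead disappears; you pay it once for the whole stream.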

If you don't want to compress the rsync transfer, that's already the default (no -z option to rsync); you don't have to feed another option to ssh.

to do what you want with parsyncfp, try the following:

parsyncfp --interface=[your_interface] --NP=[how_many_rsyncs] \
  --rsyncopts='-e "ssh -T -c [your_cipher_choice] -x"' \
  --startdir=/ LOCAL_DIR user@xx.xx.xxx.xx:/REMOTE_DIR

What you want to do will probably require a lot of futzing around to match ciphers on both ends or setting up your ssh-config appropriately. Unless you have a very oddball transfer (in which case please share it), I would not worry too much about the cipher and would worry about the number and size and distribution of the files you're transferring.
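One way to reduce that futzing is to pin the cipher once in ~/.ssh/config instead of on every command line. A sketch under assumptions (the host alias and cipher choice are illustrative, and both ends must support whatever cipher you pick; `ssh -Q cipher` lists what the local client supports):

```
# ~/.ssh/config
Host migrate-target
    HostName xx.xx.xxx.xx
    User user
    Ciphers aes128-ctr
    Compression no
```

Then `ssh migrate-target`, and any rsync using `-e ssh` against that alias, picks these settings up automatically.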

Use qdirstat to get a visualization of the tree, like this:

https://github.com/shundhammer/qdirstat/issues/73

Ideally, you want a small number of chunks (about 2-4 times the number of streams), with highly compressed or otherwise non-redundant parallel streams running simultaneously.

Let me know if this answers your question.

Best,
Harry


Harry Mangalam, Info[1]


[1] http://moo.nac.uci.edu/~hjm/hjm.sig.html

ezako2 commented 5 years ago

Thanks for the reply. To give you an idea of the kind of files I have: it's about 77TB in around 66K files, and most of them are large files that are already compressed. Also, the hard disk has a problem and has become very slow (1 rsync ~1MB/s).

That's why I chose your project: before, I was running simultaneous rsyncs manually, but yours saves a lot of time and work.

As for rsyncopts=, I will test it and report back.

I have more questions, please:

1- If I suspend parsyncfp via Ctrl-Z or Ctrl-C, do all the running rsyncs exit cleanly, like a normal rsync exit (removing the temp files, .file.aefhtd, on the remote host)?

2- Can I run multiple parsyncfp instances at the same time, or should I wait until the current one finishes?

3- I set NP=50, but it becomes 30. Why?

4- When testing on a small directory, it runs in the background without a summary (it seems to exit, but rsync keeps working in the background). Why?

Thanks 😊

hjmangalam commented 5 years ago

On Tuesday, October 1, 2019 3:12:11 PM PDT ezako2 wrote:

Thanks for the reply. To give you an idea of the kind of files I have: it's about 77TB in around 66K files, and most of them are large files that are already compressed. Also, the hard disk has a problem and has become very slow (1 rsync ~1MB/s).

How far apart (ping-wise, in ms) are the endpoints? Are the two servers in the same lab or across the country from each other?

Note that to do the full recursion on 77TB, it will take some time, and it may also take some time for fpart to get ahead of the rsyncs if the data has been partially rsync'ed already. This is a known 'bug' and I'm looking into how to make the fpart recursion faster.

That's why I chose your project: before, I was running simultaneous rsyncs manually, but yours saves a lot of time and work.

As for rsyncopts=, I will test it and report back.

I have more questions, please: 1- If I suspend parsyncfp via Ctrl-Z or Ctrl-C, do all the running rsyncs exit cleanly, like a normal rsync exit (removing the temp files, .file.aefhtd, on the remote host)?

If you suspend (^z) parsyncfp (pfp), all the dependent rsyncs are also suspended and will UNsuspend when you UNsuspend the primary process. If you KILL pfp, all the subsidiary rsyncs are killed. However, the fpart is forked, so it's independent and so you will have to explicitly kill the fparts separately.
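Cleaning up those orphaned fparts can be done like this (a sketch, assuming the forked processes show up under the process name `fpart`):

```shell
# List any fpart processes that survived killing pfp...
pgrep -a fpart
# ...and terminate them; escalate to 'pkill -9 fpart' only if they ignore TERM.
pkill fpart
```

Note that `pgrep` exits nonzero when nothing matches, which is a cheap way to confirm everything is gone.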

2- can I run multi parsyncfp at the same time, or should I wait untill current finished?

You can run multiple pfps, but you will have to assign separate altcaches:

--altcache|ac (~/.parsyncfp) ..... alternative cache dir for placing it on another FS or for running multiple parsyncfps simultaneously

Be careful, since the fparts are separate and you might get confused as to which fparts are associated with which pfps
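A sketch of two simultaneous pfps, each with its own cache (the paths, NP value, and host are illustrative, not from the thread):

```
parsyncfp --altcache=$HOME/.pfp_job1 --NP=8 --startdir=/data dir_A user@xx.xx.xxx.xx:/dest &
parsyncfp --altcache=$HOME/.pfp_job2 --NP=8 --startdir=/data dir_B user@xx.xx.xxx.xx:/dest &
wait
```

With separate altcaches, each instance writes its fpart chunk lists into its own directory, which is what keeps the two jobs from trampling each other.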

3- I set NP=50, but it becomes 30. Why?

50 is much too high (and should be mentioned in the docs). Setting NP higher than ~16 puts too much load on the filesystem unless it's a high perf parallel FS and just confuses things. It can also overwhelm the network, depending on what kind of network you have and what the bottlenecks are. NP=8 is a good starting point, especially with large, already compressed files.

4- When testing on a small directory, it runs in the background without a summary (it seems to exit, but rsync keeps working in the background). Why?

This may be a bug that I just discovered - if the number of NP is higher than the number of chunk files created, it will appear to finish, but will still be running in the background.
So until I add this error checking and bug fix, don't set the NP # higher than the number of chunk files. You typically wouldn't do this anyway since then there won't be anything for the multiple rsyncs to do. But I just noticed this with small numbers of chunk files and large numbers of NP. Fix is coming.
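Until that fix lands, a quick sanity check is to compare --NP against the number of chunk files fpart has produced. The cache path below assumes the default layout, so verify it against your parsyncfp version:

```shell
# Count the fpart chunk files in the (assumed) default cache dir;
# keep --NP at or below this number.
CACHE="${PFP_CACHE:-$HOME/.parsyncfp/fpcache}"
NCHUNKS=$(ls "$CACHE"/f* 2>/dev/null | wc -l)
echo "chunk files so far: $NCHUNKS"
```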

Thanks 😊

ezako2 commented 5 years ago

How far apart (ping-wise, in ms) are the endpoints? Are the two servers in the same lab or across the country from each other?

I found a hardware problem with the network card's cable; the datacenter changed it, and speeds came back to ~100MB/s.

Note that to do the full recursion on 77TB, it will take some time, and it may also take some time for fpart to get ahead of the rsyncs if the data has been partially rsync'ed already. This is a known 'bug' and I'm looking into how to make the fpart recursion faster.

No, I didn't run it all in one task; I did it 1TB at a time, not more.

If you suspend (^z) parsyncfp (pfp), all the dependent rsyncs are also suspended and will UNsuspend when you UNsuspend the primary process. If you KILL pfp, all the subsidiary rsyncs are killed. However, the fpart is forked, so it's independent and so you will have to explicitly kill the fparts separately.

Tested, and it works as I needed: ^C kills all the rsync tasks completely.

You can run multiple pfps, but you will have to assign separate altcaches:

Tested and working like a charm.

50 is much too high (and should be mentioned in the docs). Setting NP higher than ~16 puts too much load on the filesystem unless it's a high perf parallel FS and just confuses things. It can also overwhelm the network, depending on what kind of network you have and what the bottlenecks are. NP=8 is a good starting point, especially with large, already compressed files.

Before replacing the faulty hardware I didn't care about load (it would have gone to 50), because the server was already down, only the rsyncs were running, and the total speed I got was ~30MB/s. After the hardware change everything has been fine; even with 3 pfps at NP=30 each, the load stays below 2.

This may be a bug that I just discovered - if the number of NP is higher than the number of chunk files created, it will appear to finish, but will still be running in the background. So until I add this error checking and bug fix, don't set the NP # higher than the number of chunk files. You typically wouldn't do this anyway since then there won't be anything for the multiple rsyncs to do. But I just noticed this with small numbers of chunk files and large numbers of NP. Fix is coming.

Confirmed: when I decrease the chunk size, it works on a small directory.

One last thing I noticed: when running multiple pfps with different cache names, the summaries all show the same transfer rate. In my case I used 3 pfps with NP=30 each, and every summary displays TCP Out = 117MB/s.

I don't know if that's the maximum speed I can get, or what. Thanks again for your support and your replies.

hjmangalam commented 5 years ago

On Wednesday, October 2, 2019 12:48:04 PM PDT ezako2 wrote:

One last thing I noticed: when running multiple pfps with different cache names, the summaries all show the same transfer rate. In my case I used 3 pfps with NP=30 each, and every summary displays TCP Out = 117MB/s.

I don't know if that's the maximum speed I can get, or what. Thanks again for your support and your replies.

The value given in the scrolling output is the total outbound bandwidth on that interface, so 117MB/s is VERY good bandwidth for a 1GbE interface; that's essentially the theoretical max.
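The arithmetic behind that claim (the protocol-overhead percentage is approximate):

```shell
# 1GbE line rate: 10^9 bits/s -> divide by 8 for bytes, by 10^6 for MB/s
echo $(( 1000000000 / 8 / 1000000 ))    # prints 125 (raw MB/s)
# Ethernet + IP + TCP headers cost roughly 5-6%, so ~117-118MB/s of
# payload is about all a 1GbE link can carry.
```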

Glad things worked out for you. Still fixing the bug I mentioned with low chunk numbers and high NP.

hjm
