alex-vmw closed this issue 11 months ago
The server-side variant is already usable without any changes; you just need to extend the sshd image to configure it the way you want and use that image instead, see here.
You can give it a try to see if it solves the problem. If it does, we can also add the config as a default here. (Any chance these settings cause problems in some other scenarios?)
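As a sketch, extending the sshd image could look like the following Dockerfile. The base image reference is an assumption (substitute the actual `pv-migrate-sshd` image/tag you deploy), and the keepalive values are illustrative:

```dockerfile
# Hypothetical extension of the pv-migrate sshd image; the FROM reference
# is an assumption -- use the actual image you run.
FROM docker.io/utkuozdemir/pv-migrate-sshd:latest

# Server-side keepalive: probe the client every 60s and drop the
# connection only after 10 unanswered probes (~10 min of real silence).
RUN printf 'ClientAliveInterval 60\nClientAliveCountMax 10\n' \
      >> /etc/ssh/sshd_config
```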
For the client-side solution, making something more generic might be a good idea: we could add a flag like `--ssh-extra-args`, similar to the Helm chart value `rsync.extraArgs`.
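For illustration, the proposed flag (hypothetical, not implemented at the time of this comment) might be used like this; the `-o` options are standard OpenSSH client options and the values are illustrative:

```shell
# Hypothetical invocation; pv-migrate did not yet have --ssh-extra-args.
pv-migrate migrate old-pvc new-pvc \
  --ssh-extra-args "-o ServerAliveInterval=60 -o ServerAliveCountMax=10"
```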
Hi @utkuozdemir,
Ok, let's go with the server side option then 🙂 I'd very much appreciate a PR 🙂
Closed by https://github.com/utkuozdemir/pv-migrate/commit/4051318002c2825acd9a148b34eb339f33c9a9c8
Thanks for the contribution 🙂.
**Is your feature request related to a problem? Please describe.**
I was testing a migration of a 500Gi PVC with ~23 million files/dirs in it today and noticed an issue. Because `pv-migrate` passes the `--info=flist0` command-line parameter to rsync, the destination side is completely idle while the source side is preparing the file list. With ~23 million files it took ~2 hours to compile the file list on the source side, but unfortunately the ssh connection would always time out around 40 minutes and the rsync process would fail. I think if ssh keepalive was implemented on either the server side or the client side, the idle connection would stay open as long as both server and client are still alive and responding to keepalive packets.

**Describe the solution you'd like**
I think we have the two choices below for implementing ssh keepalive, so which do you like best? I personally prefer option 1.

1. **Server side.** We can add the below configuration to the `/etc/ssh/sshd_config` of the `pv-migrate-sshd` image:
2. **Client side.** We can add 2 more args to the `sshArgs` like below:
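The configuration snippets from the original issue body appear to have been lost in extraction; the standard OpenSSH keepalive settings the two options refer to would look roughly like this (the interval and count values are illustrative):

Option 1, appended to `/etc/ssh/sshd_config` in the `pv-migrate-sshd` image:

```
ClientAliveInterval 60
ClientAliveCountMax 10
```

Option 2, added to the client-side `sshArgs`:

```
-o ServerAliveInterval=60
-o ServerAliveCountMax=10
```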
**Describe alternatives you've considered**
None

**Additional context**
None