Closed by perelesnyk 5 years ago
Yes, it does. And it is more efficient, since it is an improvement rather than a workaround.
Good point. We need to use a unique name. Will update in a few days.
Or perhaps we could place the dump into the files directory instead.
We have a configuration object populated from the config.yml file. What if you add a setting there and let the end user name a path on the remote? Or could we add a key/value pair to the drush alias per environment and use that?
Nice idea. I was planning to mimic drush sql-sync in the part that gives the dump (which is also placed into tmp, btw) a name composed of the site name and a timestamp.
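A minimal sketch of that naming scheme, similar to what drush sql-sync uses for its temporary dumps; the site name `example` and the `/tmp` prefix are placeholder assumptions:

```shell
# Build a dump filename from site name plus timestamp, sql-sync style.
# "example" and /tmp are illustrative placeholders, not project settings.
site="example"
stamp="$(date +%Y%m%d%H%M%S)"
dumpfile="/tmp/${site}-${stamp}.sql.gz"
echo "$dumpfile"
```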
This issue may be going away:

`--result-file` is not going to work well if `@live` is a remote alias, as the sql dump file will be written to the remote system. You could retrieve the dump manually via rsync, like `sql:sync` does, but this is inconvenient. The problem with `>`, though, is that this command runs via an ssh command, and ssh will have a tendency to emit progress messages such as "connection closed" that end up inside the dump file, corrupting it. The solution to this problem is to use the `master` branch of Drush, or the latest beta release of Drush 9.6, where this bug has been fixed.
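The manual retrieval mentioned above can be sketched as a two-step workflow: dump on the remote, then pull the file back with rsync, so ssh progress noise never touches the dump. The `@live` alias and paths are assumptions taken from this discussion, not fixed project settings:

```shell
# Path of the dump on the remote host (illustrative placeholder).
remote_dump="/tmp/live-dump.sql"

# Only attempt the real commands when drush is actually installed.
if command -v drush >/dev/null 2>&1; then
    # Write the dump on the remote side of the @live alias...
    drush @live sql-dump --result-file="$remote_dump"
    # ...then pull it back to the local /tmp directory.
    drush rsync "@live:$remote_dump" "@self:/tmp/"
fi
echo "$remote_dump"
```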
Well, unless I am mistaken, using `>` will not produce a gzipped stream, which will mean longer transfer times. What exactly is inconvenient about rsync?
Nothing inconvenient. But if drush is updating their code, it's worth a look whether they are adding gzip too - if we don't have to maintain it, all the better, that's all.
Right, agreed. Gonna have a look when I get a chance
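For what it's worth, drush's `sql-dump` already accepts a `--gzip` option, which would let drush own the compression step; a hedged sketch, where `@live` and the result path are placeholders:

```shell
# Ask drush to compress the dump itself via --gzip instead of piping
# through gzip by hand. @live and the path are illustrative assumptions.
result="/tmp/live-dump.sql"
if command -v drush >/dev/null 2>&1; then
    drush @live sql-dump --gzip --result-file="$result"
fi
echo "$result"
```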
These changes are incorporated in PR #52
@FatherShawn unfortunately, the Robo Extract command does not handle .gz files properly, so I had to use gunzip here. Another option would be to extend the Robo command set.
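The gunzip fallback can be illustrated with a minimal round-trip sketch; all filenames here are placeholders, not the project's actual paths:

```shell
# Create a tiny stand-in dump, gzip it, then restore it with gunzip -
# the fallback used because Robo's Extract task does not unpack bare
# .gz files. Paths are illustrative only.
printf 'SELECT 1;\n' > /tmp/demo-dump.sql
gzip -f /tmp/demo-dump.sql        # yields /tmp/demo-dump.sql.gz
gunzip -f /tmp/demo-dump.sql.gz   # restores /tmp/demo-dump.sql
cat /tmp/demo-dump.sql            # prints: SELECT 1;
```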