Closed nhinds closed 8 years ago
@nhinds this is intentional. `ship`'s job is just to take local files and ship them to a remote destination (like rsync for cloud services). `ship` doesn't attempt to understand anything about the state of your local files, beyond what is configured in your `ship.conf`
> like rsync for cloud services

To be fair, `rsync` has an option (`--delete`) to delete remote files that do not exist locally.
Additionally, most or all of the other deployers do handle deleted/renamed files. I have just tested these deployers and verified that local files that have been deleted are removed from the remote end:
- The `gh-pages` deployer deletes files that are deleted, and tracks files that have moved as renames in git
- The `bitballoon` deployer deletes files that are deleted (as long as I manually craft a `ship.conf`, as it doesn't seem to prompt - but that's a different issue)
- The `netlify` deployer deletes files that are deleted

I have not checked the `heroku` deployer because heroku seems more complicated to set up.
I would argue that the `s3` deployer should be consistent with the other supported deployers and ensure that the remote side looks exactly like the local side.
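As an aside, S3 has no bulk-replace API call, but mirror-style sync is achievable client-side; for example, the official AWS CLI exposes the same semantics as rsync's `--delete` (the bucket name below is a placeholder):

```shell
# Mirror ./public to the bucket, removing remote objects
# that no longer exist locally (bucket name is hypothetical)
aws s3 sync ./public s3://my-site-bucket --delete
```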
If you want to keep this inconsistency - because the other deployers do seem to support a "bulk replace" operation that S3 does not - then can it please be called out in the documentation of the S3 deployer? I wasted a lot of time on this before discovering `ship` doesn't synchronize everything to S3, and I'd like to save other people the same pain.
we'd be happy to accept any PRs that either added these features, or documented the current features better.
as for rsync, sorry if that parenthetical wasn't clear. there are lots of features rsync has that we don't have -- I was just trying to be illustrative.
If a file is removed or renamed locally, when `ship` is run with the `s3` deployer it does not remove the original file from S3, e.g.

The result of the above is that both `index.htm` and `index.html` exist in the S3 bucket.
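For concreteness, a sketch of the rename scenario described above (the `ship` invocations are commented out since they require a configured bucket; the file names follow the ones mentioned in the report):

```shell
# Local site with one page
mkdir -p site
echo '<h1>hello</h1>' > site/index.htm
# ship    # first deploy uploads index.htm to the bucket

# Rename the page locally
mv site/index.htm site/index.html
# ship    # second deploy uploads index.html, but leaves index.htm in the bucket
```

Locally only `index.html` exists after the rename, but the bucket ends up holding both objects.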