jamesls opened this issue 9 years ago (status: Open)
This was reported half a year ago - can you tell us what the delay is in fixing this?
Any news on this issue? Thanks!
Perhaps it is time for someone to write a better tool.
I'm also interested in a fix to this issue. Any progress?
Also facing this when trying to back up a Docker volume. It contains a couple of sockets, and this produces: `warning: Skipping file /path/to/abcd.socket. File is character special device, block special device, FIFO, or socket.`
Trying all sorts of values for `--exclude` did not help, not even `--exclude "*"`.
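For context, a minimal sketch of the kind of invocation that still emits the warning; the volume path and bucket name here are placeholders, not from the original report:

```sh
# Placeholder paths/bucket. The socket is still scanned and warned about,
# and the exit code is non-zero, even though every file is excluded.
aws s3 sync /var/lib/docker/volumes/mydata/_data s3://example-backup/mydata \
    --exclude "*.socket"

aws s3 sync /var/lib/docker/volumes/mydata/_data s3://example-backup/mydata \
    --exclude "*"
```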
Because `s3 sync` always returns a non-zero exit code, there is no way to use the command in automation tools like Ansible unless you do something insane like `aws s3 sync [...args] || true`.
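A slightly less blunt workaround than `|| true` is to tolerate only the exit code produced by skipped files and still fail on everything else. This is just a sketch: the paths and bucket are placeholders, and the assumption that skipped special files surface as exit code 2 should be verified against your CLI version.

```sh
#!/bin/sh
# Sketch: treat only the "files were skipped" exit code (assumed to be 2)
# as non-fatal; propagate anything else.
aws s3 sync /path/to/source s3://example-bucket/prefix --exclude "*.sock"
rc=$?
if [ "$rc" -ne 0 ] && [ "$rc" -ne 2 ]; then
    echo "s3 sync failed with exit code $rc" >&2
    exit "$rc"
fi
```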
I am experiencing this as well - warnings for explicitly excluded files.
This is problematic because `s3 sync` is exiting with a non-zero exit code when automating it like this: Simple S3 Resource for Concourse CI. We exclude everything and only include the directories we want to sync. The result is that the expected files are uploaded, but because of the non-zero exit code, the Concourse CI job using the above resource fails.
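For reference, this is roughly the exclude-everything-then-include pattern being described; the bucket and directory names are placeholders:

```sh
# Only the included prefixes are uploaded, yet the command can still exit
# non-zero if the source tree contains sockets/FIFOs or unreadable entries.
aws s3 sync ./build s3://example-bucket/artifacts \
    --exclude "*" \
    --include "reports/*" \
    --include "dist/*"
```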
`s3 cp` seems to suffer from the same issue:

```
aws s3 cp --recursive --exclude '*' --include 'binlog.*' /var/lib/mysql/ s3://demo/bucket/
warning: Skipping file /var/lib/mysql/mysql.sock. File is character special device, block special device, FIFO, or socket.
upload: ../var/lib/mysql/binlog.000010 to s3://demo/bucket/binlog.000010
```
So, does this need additional labels?
As a workaround, in some cases you can probably get away with first copying stuff (without the problem 'files') to `/tmp` and then running your sync from there.
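A rough sketch of that staging approach, assuming rsync is available and that the problem entries are sockets or devices; the paths and bucket are placeholders:

```sh
# Stage the tree in /tmp without special files, then sync the copy.
staging=$(mktemp -d /tmp/s3-stage.XXXXXX)
rsync -a --no-devices --no-specials /var/lib/mysql/ "$staging/"
aws s3 sync "$staging/" s3://example-bucket/mysql-backup/
rm -rf "$staging"
```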
A file that is `--exclude`'d should not trigger a non-zero RC if we can't read the file. For example: given that `/tmp/demo/bad` was never going to be transferred in the first place (because it's been excluded via `--exclude '*'`), it seems odd that this triggers a non-zero RC. I'd expect the RC to be 0 here.
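A minimal sketch of that scenario, with a hypothetical bucket name; run it as a non-root user so the `chmod 000` file is actually unreadable. Whether the warning and the non-zero exit code appear exactly like this may depend on the CLI version:

```sh
# Hypothetical reproduction: an unreadable file that is excluded from the copy.
mkdir -p /tmp/demo
echo hello > /tmp/demo/good
touch /tmp/demo/bad
chmod 000 /tmp/demo/bad   # unreadable (when not running as root)

aws s3 cp --recursive /tmp/demo/ s3://example-bucket/demo/ \
    --exclude '*' --include 'good'
echo "exit code: $?"      # non-zero is reported even though 'bad' is excluded
```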