Open boltronics opened 3 years ago
Hi @boltronics,
Thanks for bringing this to our attention. I'll review with the team to get their thoughts on this.
Hi @boltronics,
Thanks for your patience. I was looking through our docs and I probably just missed it, but can you provide a link to where it specifically says FIFO queues are skipped? I was able to find that we do support downloading to FIFOs, so that at least explains some of the behavior. This is something I'd like to have documented better, if possible.
In terms of expectations, I understand the point you're making based on the examples above, but the only documented (and guaranteed to work) examples are the ones that reference the addition of file streaming capability in the s3 cp reference guide.
Hi @stobrien89,
No problem, certainly not urgent since there is a work-around.
I was trying to look up exit codes to handle in my script:
$ aws s3 cp <(echo "Test") s3://my-bucket/test.txt
warning: Skipping file /dev/fd/63. File is character special device, block special device, FIFO, or socket.
$ echo $?
2
$
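For anyone else scripting around this, here is a minimal sketch of branching on that return code. The mapping below is just my reading of the return-codes user guide page (exit 2 covering skipped files for s3 transfer commands); treat it as an assumption, not an authoritative list:

```shell
# Sketch: interpret an exit status from `aws s3 cp` in a script.
# Assumption: per the CLI return-codes docs, 2 can mean files were
# skipped during an s3 transfer (e.g. a FIFO source), so it's worth
# distinguishing from an outright failure.
classify_s3_cp_status() {
  case "$1" in
    0) echo "success" ;;
    2) echo "skipped or partial transfer" ;;
    *) echo "failed (exit $1)" ;;
  esac
}

# usage: aws s3 cp <src> <dst>; classify_s3_cp_status "$?"
classify_s3_cp_status 2   # prints: skipped or partial transfer
```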
That led me to this page: https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-returncodes.html
It is there that it says:
Files that are skipped during the transfer process include: files that don't exist; files that are character special devices, block special device, FIFO queues, or sockets; and files that the user doesn't have read permissions to.
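That list maps directly onto what process substitution produces: in bash, <(...) expands to a path under /dev/fd that is a named pipe, not a regular file. A quick illustration of the file types involved (plain bash, not awscli code):

```shell
# bash: <(...) exposes the command's output as a FIFO under /dev/fd,
# which is exactly the "FIFO queues" case the docs say gets skipped.
kind_of() {
  if   [ -p "$1" ]; then echo "FIFO"
  elif [ -f "$1" ]; then echo "regular file"
  else echo "other"
  fi
}

kind_of <(echo "Test")   # prints: FIFO
```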
I don't normally have to reference the docs at all to use awscli, as the built-in help command is generally enough. Unfortunately, when I couldn't initially get this working, it took a bit of digging.
The aws s3 cp <(echo "Test") s3://my-bucket/test.txt approach is the first one I tried, and to me it is the most obvious syntax to use to achieve what I wanted. I've done a lot of shell scripting in my career. In any case, thanks for looking into it.
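To be clear, the pipe itself is perfectly readable; any ordinary tool consumes process substitution fine, which is why the CLI's file-type check (rather than the FIFO) is the surprise here. The streaming form mentioned in the s3 cp reference guide sidesteps it by reading from stdin (bucket name below is a placeholder):

```shell
# Any normal reader consumes the /dev/fd FIFO without complaint:
cat <(echo "Test")    # prints: Test

# The documented s3 cp streaming form avoids the file-type check
# entirely by reading the object body from stdin ("-"):
#   echo "Test" | aws s3 cp - s3://my-bucket/test.txt
```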
Hi @boltronics,
Thanks for the additional info! That definitely clears things up.
Like @kdaily mentioned in #6160, having a stricter compatibility mode with Unix commands/processes would be useful, but we do have a large number of Windows users, so that's something else we have to take into consideration. I'll mark this as a feature request for now, but I can't make any guarantees as to when/if something like this would be implemented.
Any news on it?
This is possible:
This is possible:
This is not possible:
and yet, this is possible:
This is inconsistent behaviour and thus not aligned with user expectations. The docs say FIFO queues are skipped, but that's clearly not always the case, since
-
works. Strangely, even using --include '*' doesn't help matters.
SDK version number: aws-cli/1.19.1 Python/3.7.3 Linux/5.10.0-0.bpo.5-amd64 botocore/1.20.0
Platform/OS/Hardware/Device: Debian GNU/Linux 10, x86_64
To Reproduce (observed behavior)
$ aws s3 cp <(echo "Test") s3://my-bucket/test.txt
Expected behavior
I expect s3://my-bucket/test.txt to exist as an S3 object containing the text "Test".