Closed mdavis-xyz closed 10 months ago
@mdavis-xyz looks like you're more familiar with ssm connections.
Are you willing to provide a PR that improves the documentation?
I need confirmation of the answers, e.g. I'm not sure what permissions are required for the controller, especially for presigned URLs.
Also, what's the right way to put hyperlinks in the docs? e.g. for `keep_remote_files=True`? I could put in an absolute hyperlink, but I assume we'd want to keep the links within each particular version of the docs?
Q: Why a bucket is required, even if you're not running any copy commands. (One sentence explanation is probably fine.)
A: Ansible is designed to not require anything (except Python) to be installed on the target. For each Ansible module, Ansible copies a Python script to the target and then executes it. This is true for all modules, not just the file-copying ones like `copy`. It is possible to send files directly over SSM, however that is slow compared to using S3. That is, the controller uploads files to S3, and then sends a shell command to the target, telling it to download the file from S3.
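A minimal sketch of that flow, in pure Python. The function name, the URL, and the exact `curl` flags below are illustrative placeholders, not the plugin's actual internals:

```python
# Illustrative sketch of the aws_ssm transfer flow described above.
# build_fetch_command and the URL are hypothetical, not plugin internals.

def build_fetch_command(presigned_url: str, dest_path: str) -> str:
    """Shell command the controller would send over SSM so the target
    downloads the file from S3 without needing S3 IAM permissions."""
    return f"curl -s '{presigned_url}' -o '{dest_path}'"

cmd = build_fetch_command(
    "https://bucket.s3.amazonaws.com/i-123//path/to/something.txt?X-Amz-Signature=abc",
    "/path/to/something.txt",
)
print(cmd)
```

The point is just that only the controller talks to the S3 API; the target only runs a plain HTTPS download.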
Q: Which IAM permissions are required on the target (e.g. s3:GetObject, or s3:GetObjectVersion, etc, or also ListBucket?)
A: No S3 IAM permissions are required on the target. To simplify IAM permissions and reduce dependency requirements, the controller generates a pre-signed URL for each file, and then tells the target to run `curl https://...`.
Q: Which IAM permissions are required on the controller (s3:PutObject, s3:DeleteObject, anything else? e.g. for presigned URLs?)
A: I'm not sure.
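Since the answer above is unconfirmed, here is only a hedged starting point: presigned URLs carry the signer's own permissions, so the controller plausibly needs `s3:GetObject` for the download links it hands out, in addition to upload and cleanup actions. The bucket name is a placeholder and this policy is unverified:

```python
import json

# Hypothetical, unverified minimum policy for the controller.
# Presigned GET URLs only work if the signing identity itself has
# s3:GetObject, hence its inclusion alongside put/delete.
BUCKET = "my-ssm-transfer-bucket"  # placeholder

controller_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

print(json.dumps(controller_policy, indent=2))
```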
Q: Which prefix within S3 the objects are saved to?
A: The file `/path/to/something.txt` for EC2 instance `i-123` will be saved at `s3://bucket/i-123//path/to/something.txt`.
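That layout (instance id, then the absolute path appended verbatim, which produces the double slash) can be reproduced with a one-liner; `make_s3_key` is an illustrative helper, not a plugin function:

```python
def make_s3_key(instance_id: str, remote_path: str) -> str:
    """Object key for a file transferred to/from a given instance.
    The absolute path is appended as-is, hence the double slash."""
    return f"{instance_id}/{remote_path}"

key = make_s3_key("i-123", "/path/to/something.txt")
print(f"s3://bucket/{key}")  # s3://bucket/i-123//path/to/something.txt
```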
Q: Whether the files in S3 are deleted when done.
A: Yes, they are.
Q: Whether the files in S3 are deleted if the general Ansible setting `keep_remote_files=True`.
A: That setting is ignored for files in S3.
One other reason for S3 is that if you send files directly over SSM (e.g. `echo blah | base64 -d > file`), the contents will be visible persistently in `.bash_history`, and perhaps in SSM execution history. For some files that might be a security risk. (Just a guess.)
I was struggling with the S3 permissions as well due to missing documentation. Finally I found out that the Ansible target host needs these actions allowed: `s3:GetObject`, `s3:PutObject`, `s3:ListBucket`, `s3:DeleteObject` and `s3:GetBucketLocation`. I did this using a bucket policy.
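A sketch of such a bucket policy granting those five actions; the bucket name and role ARN are placeholders. Note that `s3:ListBucket` and `s3:GetBucketLocation` apply to the bucket itself, while the object actions apply to its contents:

```python
import json

# Placeholders -- substitute your own bucket and the target's role ARN.
BUCKET = "my-ssm-transfer-bucket"
TARGET_ROLE = "arn:aws:iam::123456789012:role/my-target-role"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Object-level actions: resource is the bucket contents.
            "Effect": "Allow",
            "Principal": {"AWS": TARGET_ROLE},
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {
            # Bucket-level actions: resource is the bucket itself.
            "Effect": "Allow",
            "Principal": {"AWS": TARGET_ROLE},
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
    ],
}

print(json.dumps(bucket_policy, indent=2))
```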
A short documentation would be helpful, thank you!
Edit: Update required actions
Summary
The SSM connector docs don't mention S3 at the top.
They only mention it in the details of the arguments, which is a bit unclear for someone completely new to this.
In the "Requirements" section, it should say:

- Why a bucket is required, even if you're not running any `copy` commands. (One sentence explanation is probably fine.)
- Which IAM permissions are required on the target (e.g. `s3:GetObject`, or `s3:GetObjectVersion`, etc, or also `ListBucket`?)
- Whether files in S3 are deleted if the general Ansible setting `keep_remote_files=True`.

Issue Type
Documentation Report
Component Name
community.aws.aws_ssm connection
Ansible Version
Collection Versions
Configuration
OS / Environment
Mac OS
Additional Information
No response
Code of Conduct