jpb opened this issue 5 years ago
It turns out that `aws cloudformation package` relies on having the local/S3 location hard-coded into the template. I would much rather this work with the value as a parameter, to avoid changing the template.
I've been playing around with the idea of introducing `$uploads` (to complement `$imports` and `$defs`), something along the lines of:
```yaml
$uploads:
  zip: my-file.zip

Parameters:
  Code: !$ zip
```
where `my-file.zip` would be uploaded to S3 and `zip` would hold the URL of the uploaded object.
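For illustration only, here is roughly what the rendered stack args might look like after the upload step runs; the bucket name and the shape of the URL are assumptions, not part of the proposal:

```yaml
# Hypothetical rendered output, assuming iidy replaces `!$ zip`
# with the URL of the uploaded object:
Parameters:
  Code: s3://some-bucket/my-file.zip
```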
I'm struggling with how the local and remote locations would be specified:
```yaml
$uploads:
  # option 1: explicit local/remote keys
  zip:
    local: my-zip.zip
    remote: s3://bucket/path

  # option 2: file/s3 keys
  zip:
    file: my-zip.zip
    s3: s3://bucket/path

  # option 3: a two-element list
  zip:
    - my-zip.zip
    - s3://bucket/path

  # option 4: flow-style list
  zip: [my-zip.zip, s3://bucket/path]
```
The S3 path would need to change when the underlying file changes for CloudFormation to handle updates properly. Should `$uploads`:

- rely on the user to change the path?
- use S3 object versioning?
- derive the path from a content hash?
What are your thoughts @tavisrudd @tuff?
Re: the API, I like a combo of your suggestions:

```yaml
$uploads:
  zip:
    local: my-zip.zip
    s3: s3://bucket/path
```
> Should `$uploads` rely on the user to change the path?

Meaning that if you're working with a Lambda and you make a code change and upload a new bundle, you also have to update your stack args? That doesn't seem right 😕
I like the object versioning option, unless implementing that is ugly for reasons I can't see now.
> Meaning that if you're working with a Lambda and you make a code change and upload a new bundle, you also have to update your stack args? That doesn't seem right 😕
I imagine it would work something like:

```yaml
$imports:
  version: env:VERSION

$uploads:
  zip:
    local: my-zip.zip
    s3: "s3://bucket/{{ version }}/my-zip.zip"
```
> I like the object versioning option, unless implementing that is ugly for reasons I can't see now.
I haven't really thought this one through very much, but I think it would cause CloudFormation to update the resource even when the file hasn't changed: iidy would upload the file to S3, which would create a new object version and therefore a new URL, even if the file contents are identical.
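To make the concern concrete, here is a hedged sketch of how object versioning typically surfaces in a template; `AWS::Lambda::Function` and `S3ObjectVersion` are standard CloudFormation, but the idea that iidy would fill in the version id on each upload is an assumption:

```yaml
Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      # ... Handler, Runtime, Role elided ...
      Code:
        S3Bucket: bucket
        S3Key: path/my-zip.zip
        # hypothetical: every upload creates a new version id, so this
        # value changes (and triggers an update) even when the zip's
        # contents are identical
        S3ObjectVersion: "3HL4kqtJvjVBH40Nr8X8gdRQBpUMLUo"
```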
The content-hashing idea probably has issues for Lambda deployments: I doubt that `npm build` (or its equivalent) would produce the exact same artifact for the same inputs, so the hash would change on every build, and you wouldn't want that to create a new Lambda version.
I think I'm leaning towards the `version` example above because it is more obvious what is going on and puts control in the hands of the developer.
(I'm also thinking the full object path should live in the `s3` property, including the "filename".)
I think the explicit proposal with `$imports: version ...` is compatible with content hashing: if you want to use a content hash, you can always import `filehash:` and use it for the version. This would also support regional S3 buckets (CloudFormation requires the bucket to be in the same region as the stack).
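A minimal sketch of that combination, assuming the `filehash:` import source mentioned above; the bucket name and key layout are hypothetical:

```yaml
# The object key changes only when the file's content hash changes,
# so unchanged files don't trigger stack updates.
$imports:
  version: filehash:my-zip.zip

$uploads:
  zip:
    local: my-zip.zip
    s3: "s3://bucket/{{ version }}/my-zip.zip"
```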
As an alternative, the workaround from #161 could be used instead of `$uploads` for a Lambda function's code, with something like:

```yaml
$imports:
  version: env:VERSION

$defs:
  s3Location: s3://some-bucket/{{ version }}/app.zip

Parameters:
  S3Location: !$ s3Location

CommandsBefore:
  - 'aws s3 cp app.zip {{ s3Location }}'
```
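With this in place, bumping the deployed code is just a matter of setting the environment variable at deploy time, e.g. (a hypothetical invocation) `VERSION=$(git rev-parse --short HEAD) iidy update-stack stack-args.yaml`, and `CommandsBefore` handles the upload before the stack operation runs.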
Add the equivalent of running `aws cloudformation package` (perhaps `iidy create-stack --package <S3 location>` and/or an `ArtifactLocation: s3://bucket/base/path/` stack-args property) to upload resources to S3 and replace their property in the template with the S3 location. See https://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html.

This pattern could be extended to support workflows like #161.
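For concreteness, a hedged sketch of what the stack-args variant might look like; `ArtifactLocation` is the proposed (not yet existing) property, and the other keys follow iidy's usual stack-args layout:

```yaml
# Hypothetical stack-args.yaml, if an ArtifactLocation property existed:
StackName: my-stack
Template: ./cfn-template.yaml
# proposed: where packaged local artifacts would be uploaded before
# their template properties are rewritten to S3 locations
ArtifactLocation: s3://bucket/base/path/
```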