hauntingEcho opened 3 years ago
For some clarification: CodePipeline artifacts lose their access timestamps, which stops `dotnet lambda package`'s normal re-packaging detection/prevention from working.
We have noticed this issue has not received attention in 1 year. We will close this issue for now. If you think this is in error, please feel free to comment and reopen the issue.
The ability to re-use the same artifacts across dev and prod environments, rather than having to rebuild them, is still needed.
This needs to be reviewed with the team.
This can be worked around by using `aws cloudformation package` instead of `dotnet lambda package-ci`:

- Define your `AWS::Serverless::Function` objects either using `CodeUri` directly or via the `Globals` section. Set that `CodeUri` to a local zip path.
- Use `dotnet lambda package` to build your zip file to that same local zip path.
- Use `aws cloudformation package` to handle the file uploads.
- Use `aws cloudformation deploy` to handle the actual deployment (or Stacker, or a CodePipeline CloudFormation action, or whatever).

Note that I think this will probably prevent you from using the Mock Lambda Test Tool.
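The template side of the workaround above can be sketched as follows. This is a minimal, hypothetical `serverless.template` fragment (the handler, runtime, and paths are placeholders, not from the original issue); the key point is that `CodeUri` names the local zip that `dotnet lambda package` writes, so `aws cloudformation package` can upload it and rewrite the reference:

```yaml
# Hypothetical serverless.template fragment; all names are placeholders.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: MyAssembly::MyNamespace.Function::FunctionHandler
      Runtime: dotnet6
      # Local path produced by `dotnet lambda package --output-package out.zip`;
      # `aws cloudformation package` replaces this with the uploaded S3 location.
      CodeUri: ./out.zip
```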
Unfortunately adding the `--package` switch doesn't work because there can be multiple packages needed if the CloudFormation template is referring to multiple .NET projects. In order to solve this we would need to pass in a map of zip files to Lambda functions defined in the template.
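To illustrate why a single `--package` value would be ambiguous, here is a hypothetical template fragment (not from the issue) in which two functions are built from two different .NET projects, so each needs its own zip, hence the suggested map of zip files to the functions defined in the template:

```yaml
# Hypothetical template referencing two separate .NET projects;
# one --package value could not cover both functions.
Resources:
  OrdersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: Orders::Orders.Function::Handler
      Runtime: dotnet6
      CodeUri: ./src/Orders
  BillingFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: Billing::Billing.Function::Handler
      Runtime: dotnet6
      CodeUri: ./src/Billing
```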
The workaround @hauntingEcho suggested, having the `CodeUri` point to a local zip file, will also work with the `dotnet lambda package-ci` command.
Describe the Feature
Currently, the only way to upload local code packages to S3 and transform the template is via `package-ci`. However, this may rebuild the artifact, which may not be desired. Being able to upload local packages to S3 without rebuilding would add reliability to the build process.

An example workflow targeted by this change:

```shell
# Build each project's package once
for x in */serverless.template; do (cd $(dirname ${x}) && dotnet lambda package --output-package out.zip); done

# Deploy to dev, re-using the built packages
for x in */serverless.template; do (cd $(dirname ${x}) && dotnet lambda package-ci); done
stacker build environments/dev.yml stacker.yml

# Deploy to prod, re-using the same packages
for x in */serverless.template; do (cd $(dirname ${x}) && dotnet lambda package-ci); done
stacker build environments/prod.yml stacker.yml
```
Is your Feature Request related to a problem?
When trying to deploy the same artifacts across multiple accounts, via a tool which manages dependencies between stacks (like Stacker), code is likely to be recompiled rather than re-using the artifact. This can cause an issue if your deployment environment doesn't match your build environment, including, for example, access to private NuGet repositories.
Proposed Solution
Add a `--package` parameter to `package-ci`, matching its behavior in `deploy-serverless` and `deploy-function`.
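As a sketch, the proposed invocation might look like the following. Note that `--package` does not exist on `package-ci` today; this is a hypothetical illustration of the requested behavior, and `out.zip` is a placeholder path:

```shell
# Build the package once, outside package-ci
dotnet lambda package --output-package out.zip

# Hypothetical: upload the pre-built zip and transform the template
# without rebuilding, matching deploy-serverless --package
dotnet lambda package-ci --package out.zip
```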
Describe alternatives you've considered
Making the S3 bucket readable by downstream accounts. However, this would allow a lower-security development environment to overwrite packages in use by the higher-security prod environment, rather than allowing prod to handle its own deployment/validation.

Another alternative would be to provide a separate action which does only the template transformation but requires a pre-existing package.
Environment
AWS CodeBuild, running in AWS CodePipeline
It looks like all that's needed is to add the `Package` option to `PackageCICommand`'s `DefaultLocationOption` config to match the implementation in `DeployServerlessCommand`.

This is a :rocket: Feature Request