Closed bpgould closed 1 year ago
Unfortunately, this is impossible for S3 objects because there is no such argument as on Lambda Layers (`skip_destroy`).
To achieve what you want, you will have to manage the S3 objects outside of this Lambda module and pass the path to the object as an argument to the module. Read more - https://github.com/terraform-aws-modules/terraform-aws-lambda#lambda-function-with-existing-package-prebuilt-stored-in-s3-bucket
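A minimal sketch of that documented pattern (bucket name, key, and file paths here are placeholders, not from the original thread): the S3 object is owned by your own `aws_s3_object` resource, and the module consumes it via `create_package = false` plus `s3_existing_package`:

```hcl
# Manage the deployment package yourself so Terraform never replaces it
# through the Lambda module. All names/paths below are illustrative.
resource "aws_s3_object" "lambda_package" {
  bucket = "my-artifact-bucket"    # assumed bucket
  key    = "builds/app-1.0.0.zip"  # assumed key
  source = "dist/app-1.0.0.zip"    # assumed local build artifact
}

module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-function"  # assumed
  handler       = "index.handler"
  runtime       = "python3.9"

  # Skip the module's packaging and point at the prebuilt object instead.
  create_package = false
  s3_existing_package = {
    bucket = aws_s3_object.lambda_package.bucket
    key    = aws_s3_object.lambda_package.key
  }
}
```

Because the object resource is outside the module, retaining old packages becomes a property of how you manage those objects (e.g. keying each build under a new path).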
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
It is very helpful, and often required for security/compliance, to keep old deployment packages of applications. For this reason, among other advantages, the resource for Lambda Layers has implemented skip_destroy so that when a new layer version is created, the current one is only removed from state.
This functionality would be very nice for the Lambda module as well, but for deployment packages, e.g.:
artifact_skip_destroy = true
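For reference, this is the existing behavior on layers that the request mirrors: the `aws_lambda_layer_version` resource supports `skip_destroy`, so a replaced layer version is dropped from state but left in AWS (layer name, filename, and runtime below are assumed values):

```hcl
# Existing skip_destroy behavior on Lambda Layers; values are illustrative.
resource "aws_lambda_layer_version" "this" {
  layer_name          = "my-layer"  # assumed
  filename            = "layer.zip" # assumed
  compatible_runtimes = ["python3.9"]

  # On replacement, the old layer version is only removed from Terraform
  # state; it is not deleted from AWS.
  skip_destroy = true
}
```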
I am using module version 4.13.0.
As a code snippet, I am using the module like this:
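The original snippet was not captured in this thread; a hedged sketch of a typical invocation on module version 4.13.0 that stores the built package in S3 (all names and paths are assumptions) would look like:

```hcl
# Illustrative usage only; the issue author's actual configuration
# was not preserved. Names, paths, and runtime are assumed.
module "lambda_function" {
  source  = "terraform-aws-modules/lambda/aws"
  version = "4.13.0"

  function_name = "my-function"   # assumed
  handler       = "index.handler" # assumed
  runtime       = "python3.9"     # assumed
  source_path   = "src/"          # assumed

  # Upload the built deployment package to S3 instead of deploying locally.
  store_on_s3 = true
  s3_bucket   = "my-artifact-bucket" # assumed
  s3_prefix   = "lambda-builds/"     # assumed
}
```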
This is nice because when I go to s3 console I have a file tree like this:
But I would like it to look like this if `artifact_skip_destroy = true`:
However, this is not currently possible since the module replaces the artifacts. I do have bucket versioning turned on, but I also do not see the objects versioned, because the provider/resource is deleting the versions of the object.