terraform-aws-modules / terraform-aws-lambda

Terraform module, which takes care of a lot of AWS Lambda/serverless tasks (build dependencies, packages, updates, deployments) in countless combinations 🇺🇦
https://registry.terraform.io/modules/terraform-aws-modules/lambda/aws
Apache License 2.0

Feature Request: skip_destroy on aws_s3_object #445

Closed bpgould closed 1 year ago

bpgould commented 1 year ago

It is very helpful, and often required for security/compliance to keep deployment packages of applications. For this reason as well as other advantages, the resource for Lambda Layers has implemented skip_destroy so that when a new layer is created, the current one is only removed from state.
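
For reference, a minimal sketch of what skip_destroy looks like on the layer resource (names here are illustrative): when a new layer version is published, the previous one is only removed from Terraform state rather than deleted from AWS.

```hcl
# skip_destroy on aws_lambda_layer_version: replaced layer versions are
# dropped from state but kept in the account (illustrative example).
resource "aws_lambda_layer_version" "example" {
  layer_name          = "my-layer"
  filename            = "layer.zip"
  compatible_runtimes = ["python3.9"]

  skip_destroy = true
}
```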

This functionality would be very nice to have in the Lambda module as well, but for deployment packages, e.g. artifact_skip_destroy = true.

I am using module version 4.13.0.

As a code snippet, I am using the module like this:

locals {
  lambda-name = "s3-lambda-test"
}

module "lambda_function" {
  source  = "terraform-aws-modules/lambda/aws"
  version = "4.13.0"

  function_name = local.lambda-name
  description   = "My awesome lambda function"
  handler       = "main.lambda_handler"
  runtime       = "python3.9"
  publish       = true

  source_path = "../lambdas/s3-lambda-test"

  store_on_s3   = true
  s3_bucket     = aws_s3_bucket.uar-lambda-artifacts.id
  artifacts_dir = "builds/${local.lambda-name}/"

  tags = var.tags
}

This is nice because when I go to the S3 console I have a file tree like this:

builds/
    - lambda-name-1/
       - some-long-hash-current
    - lambda-name-2/
        - some-long-hash-current

But I would like it to look like this if artifact_skip_destroy = true:

builds/
    - lambda-name-1/
       - some-long-hash-current
       - some-long-hash-second-newest
    - lambda-name-2/
        - some-long-hash-current
        - some-long-hash-second-newest
        - some-long-hash-third-newest

However, this is not currently possible since the module replaces the artifacts. I do have bucket versioning turned on, but I also do not see the objects versioned, because the provider/resource is deleting the versions of the object.

antonbabenko commented 1 year ago

Unfortunately, this is impossible for S3 objects because the aws_s3_object resource has no argument equivalent to skip_destroy on Lambda layers.

To achieve what you want, you will have to manage the S3 objects outside of this Lambda module and pass the path to the object as an argument to the module. Read more - https://github.com/terraform-aws-modules/terraform-aws-lambda#lambda-function-with-existing-package-prebuilt-stored-in-s3-bucket
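
A rough sketch of that pattern, based on the linked README section (bucket/key names are carried over from the issue and purely illustrative): manage the package as your own aws_s3_object and point the module at it via create_package = false and s3_existing_package.

```hcl
# Manage the deployment package yourself, outside the Lambda module.
resource "aws_s3_object" "lambda_package" {
  bucket = aws_s3_bucket.uar-lambda-artifacts.id
  key    = "builds/s3-lambda-test/${filemd5("../lambdas/s3-lambda-test.zip")}.zip"
  source = "../lambdas/s3-lambda-test.zip"
}

module "lambda_function" {
  source  = "terraform-aws-modules/lambda/aws"
  version = "4.13.0"

  function_name = "s3-lambda-test"
  handler       = "main.lambda_handler"
  runtime       = "python3.9"

  # Skip the module's build/package step and use the prebuilt object.
  create_package = false
  s3_existing_package = {
    bucket = aws_s3_object.lambda_package.bucket
    key    = aws_s3_object.lambda_package.key
  }
}
```

Note that Terraform will still replace the aws_s3_object when its key changes; actually retaining every historical package would require uploading the objects outside Terraform (e.g. from CI) or relying on bucket versioning.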

github-actions[bot] commented 1 year ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.