terraform-aws-modules / terraform-aws-lambda

Terraform module, which takes care of a lot of AWS Lambda/serverless tasks (build dependencies, packages, updates, deployments) in countless combinations 🇺🇦
https://registry.terraform.io/modules/terraform-aws-modules/lambda/aws
Apache License 2.0

s3_existing_package not working as expected #400

Closed ffleandro closed 1 year ago

ffleandro commented 1 year ago

Description

local_existing_package updates the Lambda correctly, but s3_existing_package doesn't.

In my Terraform scripts I manually upload the Lambda code into an S3 bucket using:

module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = ">= 3.6.0"

  bucket        = "my-bucket-name"
  acl           = "private"
  force_destroy = true

  # S3 bucket-level Public Access Block configuration
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_object" "app_dist_zip" {
  depends_on = [module.s3_bucket]
  bucket = module.s3_bucket.s3_bucket_id
  key    = local.app_package_name
  acl    = "private"
  source = "../device-service/dist/${local.app_package_name}"
  etag = filemd5("../device-service/dist/${local.app_package_name}")
  source_hash = filemd5("../device-service/dist/${local.app_package_name}")
}

I've confirmed that the S3 object is properly updated every time the MD5 of the zip file changes.
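As a debugging aid (these outputs are illustrative additions, not part of my configuration), the exported attributes of aws_s3_object can be surfaced to confirm the uploaded object actually changed between applies:

# Hypothetical debug outputs: compare these values across applies.
output "app_dist_zip_etag" {
  value = aws_s3_object.app_dist_zip.etag
}

# Empty unless versioning is enabled on the bucket.
output "app_dist_zip_version_id" {
  value = aws_s3_object.app_dist_zip.version_id
}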

However, the Lambda never updates when this object changes, even if I apply a few times sequentially. Initially I thought the Lambda was using the source_hash of the file that existed on S3 before the upload, in which case a second apply would have fixed it, since by then the new S3 object would already be online.

This is my Lambda configuration:

module "lambda_service" {
  source  = "terraform-aws-modules/lambda/aws"
  version = ">= 4.7.2"
  depends_on = [module.s3_bucket]

  function_name = "my-lambda-name"
  description   = "My Lambda Description"
  handler       = "index.httpHandler"
  runtime       = "nodejs16.x"
  publish       = true

  create_package = false
  hash_extra     = aws_s3_object.app_dist_zip.source_hash
  s3_existing_package = {
    bucket = module.s3_bucket.s3_bucket_id
    key    = aws_s3_object.app_dist_zip.id
  }
  #  local_existing_package = "../device-service/dist/${local.app_package_name}"

  attach_tracing_policy    = true
  attach_policy_statements = true

  policy_statements = {
    dynamodb = {
      (...)
    }
  }

  allowed_triggers = {
    AllowExecutionFromAPIGateway = {
      service    = "apigateway"
      source_arn = "${module.api_gateway.apigatewayv2_api_execution_arn}/*/*/*"
    }
  }

  environment_variables = {
    (...)
  }
}

Versions

Expected behavior

module.lambda_service.lambda_function_source_code_hash would change every time a new file is uploaded to the S3 bucket.

Actual behavior

module.lambda_service.lambda_function_source_code_hash doesn't change when a new file is uploaded to the S3 bucket.
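A sketch for observing that value between applies (lambda_function_source_code_hash is the module output named above; the output block itself is an illustrative addition):

# Hypothetical output to watch the deployed code hash across applies.
output "lambda_source_code_hash" {
  value = module.lambda_service.lambda_function_source_code_hash
}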

Untanky commented 1 year ago

Having the same issue 👍🏻

dgbarmac commented 1 year ago

I have the same problem. It did not work even after specifying a new version for the S3 file. My aim is to separate the IaC and the Lambda code, and this problem does not allow that.

antonbabenko commented 1 year ago

Make sure to specify version_id so the Lambda function tracks changes when the key of the S3 object stays the same and only the version changes:

s3_existing_package = {
  bucket = module.s3_bucket.s3_bucket_id
  key    = aws_s3_object.app_dist_zip.id

  version_id = aws_s3_object.app_dist_zip.version_id # <-- this one
}

S3 bucket versioning should be enabled, too.
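Putting both pieces together, a minimal sketch (assuming the s3-bucket module accepts a versioning map with enabled = true; bucket, object, and function names reuse the example above):

module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = ">= 3.6.0"

  bucket = "my-bucket-name"

  # Versioning must be enabled so aws_s3_object.version_id is populated.
  versioning = {
    enabled = true
  }
}

module "lambda_service" {
  source  = "terraform-aws-modules/lambda/aws"
  version = ">= 4.7.2"

  # ... other arguments as in the original example ...

  create_package = false
  s3_existing_package = {
    bucket     = module.s3_bucket.s3_bucket_id
    key        = aws_s3_object.app_dist_zip.id
    version_id = aws_s3_object.app_dist_zip.version_id
  }
}

With versioning enabled, every upload to the same key produces a new version_id, which changes the function's source and triggers an update.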

github-actions[bot] commented 1 year ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.