hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

AWS Lambda Function Environment Variables not refreshing state properly #1883

Open Bjorn248 opened 7 years ago

Bjorn248 commented 7 years ago

Basically, Terraform tries to add the environment variables to the Lambda function on every run. The first run creates the function successfully, but every subsequent plan and apply determines that the function's environment variables are not set and tries to create them again. The result is an apply that never completes and just hangs. As I'm writing this it's at 11m30s elapsed. Will update this issue with any new output.

Terraform Version

Tested on: Terraform v0.10.7 and Terraform v0.9.6

Affected Resource(s)

aws_lambda_function

Terraform Configuration Files

resource "aws_lambda_function" "asg_controller" {
  filename         = "asg_controller_lambda.js.zip"
  function_name    = "some_env_asg_controller"
  role             = "${aws_iam_role.lambda_role_asg_controller.arn}"
  handler          = "asg_controller_lambda.handler"
  source_code_hash = "${base64sha256(file("asg_controller_lambda.js.zip"))}"
  runtime          = "nodejs6.10"
  timeout          = 60

  tags {
    Name        = "some_env_asg_controller"
    terraform   = true
    environment = "some_env"
  }

  environment {
    variables = {
      AWS_SQS_URL = "${aws_sqs_queue.some_queue.id}"
      ASG_NAME    = "${aws_autoscaling_group.some_asg.name}"
    }
  }
}

Expected Behavior

Terraform plan/apply should not try to make any changes on subsequent runs after the first apply.

Actual Behavior

This is the result of a plan

~ aws_lambda_function.some_asg_controller
    environment.0.variables.%:           "0" => "2"
    environment.0.variables.ASG_NAME:    "" => "some_env_workers"
    environment.0.variables.AWS_SQS_URL: "" => "https://some_redacted_SQS_url"

The result of the apply is that the aws_lambda_function resource stays in the "still modifying" state indefinitely.

Steps to Reproduce


  1. Have an aws_lambda_function with environment variables that use interpolated values (not sure if this matters).
  2. Apply. The first apply should succeed. The function and environment variables are set properly, everything on the AWS side is working as intended.
  3. Plan and apply again; notice that Terraform still cannot see that the environment variables are correctly set on the Lambda function. It will try to add them again and stay in the "still modifying" state indefinitely.
Bjorn248 commented 7 years ago

Here's the error. It seems to be related to KMS permissions. Had to let it run for 1 hour and 20 minutes before this popped up.

* aws_lambda_function.asg_controller: 1 error(s) occurred:

* aws_lambda_function.asg_controller: Error modifying Lambda Function Configuration asg_controller: ServiceException: Lambda was unable to decrypt your environment variables because the KMS access was denied. Please check your KMS permissions. KMS Exception: AccessDeniedException KMS Message: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.
    status code: 500, request id: redacted

EDIT: I tried using my own custom KMS key with the Lambda function and adding the kms:Decrypt permission to the IAM role that the function assumes; the function continues to work, but Terraform still behaves the same way.
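
For reference, a minimal sketch of that workaround, assuming the key is managed in the same configuration (resource names here are hypothetical and the Lambda arguments are abbreviated; the role reference matches the configuration above):

# Hypothetical CMK used to encrypt the function's environment variables.
resource "aws_kms_key" "lambda_env" {
  description = "CMK for Lambda environment variables"
}

resource "aws_lambda_function" "asg_controller" {
  # ... other arguments as in the configuration above ...
  kms_key_arn = "${aws_kms_key.lambda_env.arn}"
}

# Allow the Lambda execution role to decrypt the environment variables.
resource "aws_iam_role_policy" "lambda_kms_decrypt" {
  name = "lambda-kms-decrypt"
  role = "${aws_iam_role.lambda_role_asg_controller.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "${aws_kms_key.lambda_env.arn}"
    }
  ]
}
EOF
}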

sasq31 commented 6 years ago

I am also getting the same issue.

julian-alarcon commented 5 years ago

I'm getting the same error, and my Lambda function is really simple. The TAG value is not even interpolated, just a fixed string.

environment {
  variables = {
    TAG = "schedule"
  }
}

alanhughes commented 4 years ago

I ran into this issue too. For any future folk who stumble upon this, I fixed it by ensuring that my user had kms:Decrypt permissions. Terraform could probably fail with a clearer error message, though, so I don't think this should be closed.
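
A minimal sketch of that fix, assuming the caller is an IAM user managed in Terraform (the user name here is hypothetical, and the Resource would ideally be narrowed to the key that encrypts the function's environment variables rather than "*"):

resource "aws_iam_user_policy" "terraform_kms_decrypt" {
  name = "terraform-kms-decrypt"
  user = "terraform"  # hypothetical name of the user running Terraform

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "*"
    }
  ]
}
EOF
}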

nikoremi97 commented 4 years ago

Thanks @alanhughes!!

jihoun commented 4 years ago

Unfortunately adding kms:Decrypt did not do the trick for me.

ddanf commented 3 years ago

Also had this issue, but with no errors in either plan or apply... solved it by giving the Terraform deploy role kms:Decrypt and kms:ReEncrypt.

It would be nice if Terraform failed during plan and apply when it encounters this.
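
A sketch of those permissions, assuming the deploy role is also managed in Terraform (the role reference is hypothetical, and the Resource should ideally be scoped to the relevant key rather than "*"):

resource "aws_iam_role_policy" "deploy_lambda_kms" {
  name = "deploy-lambda-kms"
  role = "${aws_iam_role.terraform_deploy.id}"  # hypothetical deploy role

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:ReEncrypt*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}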