Open · eduardocque opened this issue 1 year ago
Hey @eduardocque 👋 Thank you for taking the time to raise this! I noticed that in your configuration, this resource is within a module. Is that module dependent on any other changes in the configuration? If so, that might be delaying the data source reads until apply time, which might explain this behavior. If you can supply them, debug logs (redacted as needed) might help us look into this a bit more too.
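For reference, debug logs can be captured with Terraform's standard logging environment variables and written to a file so they can be redacted before sharing (the log file path here is just an example):

# Capture full debug logging from Terraform core and the provider
TF_LOG=DEBUG TF_LOG_PATH=./terraform-debug.log terraform plan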
Hi @justinretzolk, the module is very simple and completely independent; it basically just groups a few resources so they can be reused.
Full main.tf (module):
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

data "archive_file" "lambda" {
  type        = "zip"
  source_file = var.source_path
  output_path = "./builds/${var.name}.zip"
}

resource "aws_iam_role" "lambda_role" {
  name = "lambda-${var.project_name}-${var.name}-LambdaRole"
  path = "/"

  assume_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Action" : "sts:AssumeRole",
        "Principal" : {
          "Service" : [
            "lambda.amazonaws.com",
            "edgelambda.amazonaws.com"
          ]
        },
        "Effect" : "Allow",
        "Sid" : ""
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_inst_role_attc_execution" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_iam_role_policy_attachment" "lambda_inst_role_attc_cloud_watch" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess"
}

resource "aws_iam_role_policy_attachment" "lambda_inst_role_attc_dynamodb" {
  count      = var.hasDynamoDB ? 1 : 0
  role       = aws_iam_role.lambda_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
}

# locals {
#   environment_map = var.environment[*]
# }

resource "aws_lambda_function" "lambda_function" {
  filename         = data.archive_file.lambda.output_path
  function_name    = var.name
  description      = var.description
  role             = aws_iam_role.lambda_role.arn
  handler          = "index.handler"
  source_code_hash = data.archive_file.lambda.output_base64sha256
  runtime          = var.runtime
  publish          = true

  # dynamic "environment" {
  #   for_each = local.environment_map
  #   content {
  #     variables = environment.value
  #   }
  # }
}
variables.tf:
variable "project_name" {
description = "Plitzi Project Name"
type = string
default = "plitzi"
}
variable "name" {
description = "Lambda Name"
type = string
default = ""
}
variable "description" {
description = "Lambda Description"
type = string
default = ""
}
variable "runtime" {
description = "Lambda Runtime"
type = string
default = "nodejs18.x"
}
variable "source_path" {
description = "Lambda Source Path"
type = string
default = ""
}
variable "environment" {
description = "Lambda Environment Variables"
type = map(string)
default = {}
}
variable "hasDynamoDB" {
description = "Lambda has DynamoDB"
type = bool
default = false
}
This is how it was called:
module "lambda_deployment_redirect" {
source = "./modules/lambda"
source_path = "./functions/deployment-redirect/index.mjs"
name = "DeploymentRedirect"
description = "Lambda function to redirect the deployments"
project_name = var.project_name
hasDynamoDB = true
providers = {
aws = aws.global
}
}
If you notice, the environment code is commented out; this way it works fine. If I uncomment it, that's when the problem starts to occur.
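As a side note, a common idiom for rendering the environment block only when the map actually has entries is a guarded dynamic block. This is a sketch based on the module above, not a confirmed fix for this issue; an environment block with an empty variables map is a known source of perpetual diffs:

resource "aws_lambda_function" "lambda_function" {
  # ... other arguments as in the module above ...

  # Only render the environment block when var.environment is non-empty;
  # iterating for_each over an empty list omits the block entirely.
  dynamic "environment" {
    for_each = length(var.environment) > 0 ? [var.environment] : []
    content {
      variables = environment.value
    }
  }
}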
If you have more questions, feel free to ask. Thanks!
See @antonbabenko's PR for a fix that needs to be made in the provider. The fields are marked optional in the docs, but leaving them out makes future applies end up in a constant state of drift.
Thank you @datfinesoul - that helped! (That is, setting all the optional logging_config fields (apart from log_group) made the "forever updates" go away.)
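For anyone landing here later, a minimal sketch of that workaround (the log levels are illustrative assumptions; application_log_level and system_log_level are only accepted when log_format is "JSON"):

resource "aws_lambda_function" "lambda_function" {
  # ... other arguments ...

  logging_config {
    log_format            = "JSON"
    application_log_level = "INFO"
    system_log_level      = "INFO"
    # log_group is left unset; per the comment above it did not need
    # to be pinned to stop the perpetual updates.
  }
}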
I am experiencing this issue too. Like @eduardocque, I have this problem triggered from an aws_lambda_function in a module, and all 4 logging_config attributes are configured per @antonbabenko.
Terraform 1.9.2 & hashicorp/aws v5.61.0
My 'workaround' is just to have a lifecycle block in place ignoring these changes. It's OK for now, but long term I'm not super happy with it; I've been through the debug logs and couldn't identify what was triggering the calculation of the resource information.
lifecycle {
  ignore_changes = [
    qualified_arn,
    qualified_invoke_arn,
    version
  ]
}
You also now get this in the output from terraform <action>
Warning: Redundant ignore_changes element
│
│ on ../../modules/lambda/lambda_function.tf line 1, in resource "aws_lambda_function" "main":
│ 1: resource "aws_lambda_function" "main" {
│
│ Adding an attribute name to ignore_changes tells Terraform to ignore future changes to the argument in configuration after the object has been created,
│ retaining the value originally configured.
│
│ The attribute qualified_arn is decided by the provider alone and therefore there can be no configured value to compare with. Including this attribute in
│ ignore_changes has no effect. Remove the attribute from ignore_changes to quiet this warning.
│
│ (and 2 more similar warnings elsewhere)
Terraform Core Version
1.5.5
AWS Provider Version
5.16.1
Affected Resource(s)
aws_lambda_function
Expected Behavior
After running plan or apply, if I haven't made any change to my Lambda function code or environment variables, it should not try to deploy it again over and over.
Actual Behavior
Each time I run plan or apply, it tries to update qualified_arn and qualified_invoke_arn, even if I haven't changed the code or environment variables, or when the environment variables are empty.
Relevant Error/Panic Output Snippet
Terraform Configuration Files
Steps to Reproduce
I'm just running terraform plan, and each time I do, the previous output appears. The first time is fine, because we have to apply the changes, but if you run terraform plan again after that, you will notice the previous output.
Debug Output
No response
Panic Output
No response
Important Factoids
No response
References
No response
Would you like to implement a fix?
None