hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

AppConfig hosted configuration version resource recreating the same version with updated content #20273

Open vishalpant opened 3 years ago

vishalpant commented 3 years ago

Terraform CLI and Terraform AWS Provider Version

Terraform v0.14.10
+ provider registry.terraform.io/hashicorp/aws v3.50.0

Affected Resource(s)

aws_appconfig_hosted_configuration_version

Terraform Configuration

resource "aws_appconfig_hosted_configuration_version" "configuration" {
  for_each                 = { for profile in var.config_profiles : profile.name => profile }
  application_id           = aws_appconfig_application.application.id
  configuration_profile_id = aws_appconfig_configuration_profile.profile[each.key].configuration_profile_id
  content                  = file("${path.module}/../../${each.value.content_location}")
  content_type             = each.value["content_type"]
  description              = each.value["description"]
}

Debug Output

module.appconfig.aws_appconfig_application.application: Refreshing state... [id=app_id]
module.appconfig.aws_appconfig_deployment_strategy.strategy: Refreshing state... [id=deploy_strategy_id]
module.appconfig.aws_appconfig_environment.environment: Refreshing state... [id=env_id:app_id]
module.appconfig.aws_appconfig_configuration_profile.profile["test"]: Refreshing state... [id=config_profile_id:app_id]
module.appconfig.aws_appconfig_hosted_configuration_version.configuration["test"]: Refreshing state... [id=app_id/config_profile_id/1]
module.appconfig.aws_appconfig_hosted_configuration_version.configuration["test"]: Destroying... [id=app_id/config_profile_id/1]
module.appconfig.aws_appconfig_hosted_configuration_version.configuration["test"]: Destruction complete after 2s
module.appconfig.aws_appconfig_hosted_configuration_version.configuration["test"]: Creating...
module.appconfig.aws_appconfig_hosted_configuration_version.configuration["test"]: Creation complete after 3s [id=app_id/config_profile_id/1]

Expected Behavior

When the content changes, the resource should create a new version of the hosted configuration, keeping the current version intact.

Actual Behavior

When the content changes, Terraform deletes the current hosted configuration version and recreates the same version number with the updated content.

anGie44 commented 3 years ago

Hi @vishalpant, thank you for raising this issue. The aws_appconfig_hosted_configuration_version resource manages one distinct version, so any changes are reflected in that specific version via the destroy/create behavior; I believe it's behaving as expected. To create a distinct new version, I would recommend adding a separate resource configuration with the new content. Let me know if you have any additional questions or comments, as the documentation could also make this clearer!
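
A minimal sketch of that suggestion (resource and file names here are illustrative, not from this issue): keep the already-applied resource untouched and add a second resource whose content AppConfig stores as the next version.

resource "aws_appconfig_hosted_configuration_version" "v1" {
  application_id           = aws_appconfig_application.application.id
  configuration_profile_id = aws_appconfig_configuration_profile.profile.configuration_profile_id
  content_type             = "application/json"
  content                  = file("${path.module}/configs/v1.json")
}

# Added later: publishes the new content as the next version, leaving version 1 intact.
resource "aws_appconfig_hosted_configuration_version" "v2" {
  application_id           = aws_appconfig_application.application.id
  configuration_profile_id = aws_appconfig_configuration_profile.profile.configuration_profile_id
  content_type             = "application/json"
  content                  = file("${path.module}/configs/v2.json")
}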

ryandeivert commented 3 years ago

@anGie44 chiming in here quickly, as I noticed the same thing and believe the reporter is in fact correct here. Hosted configurations should allow for versioning, similar (in a sense) to AWS Lambda function versioning. Note in the attached screenshot the incremental "version" applied to a given hosted configuration resource.

[Screenshot of versioning: incremental version numbers shown for a hosted configuration]

I believe the fix may be as simple as removing ForceNew from the content attribute.

anGie44 commented 3 years ago

Hi @ryandeivert 👋 Thank you for your note! The problem I still see here is that if we allow the resource to create a new version with the same Terraform configuration (by removing ForceNew on arguments) and leave the previous version as-is, then the previous version would no longer be under Terraform management, so a future resource deletion would only remove the latest version and leave older versions behind. Though given the expected behavior noted in this issue, perhaps that is OK in practice. From my understanding of the provider's Lambda function versioning capabilities, I think it benefits from the fact that it's the Lambda function itself that's under Terraform management, not the versions themselves, so new versions can be published on each function update.

With that said, this still presents a tricky problem IMO, given how Terraform expects to manage resources. Unfortunately, I don't have an answer at this time, but I'll circle back here if there's an update 👍

ryandeivert commented 3 years ago

thanks for the background @anGie44 - after some additional thought, it occurred to me that these are more akin to AWS Lambda layer versions, which provide the same "versioning" functionality as hosted configurations.

In the case of Lambda layers, Terraform also publishes a new layer version while leaving the old one intact. It then leaves any prior versions "untracked", so they are not destroyed upon terraform destroy (only the current version is destroyed). IMO this behavior is preferable to the current destroy/recreate logic: destroy/recreate can break a deployment that is pinned to a version of the hosted configuration that no longer exists (the same could happen if Lambda layer versions were destroyed and recreated).

FWIW - I built the provider locally with the below changes, and it results in the desired behavior:

diff --git a/aws/resource_aws_appconfig_hosted_configuration_version.go b/aws/resource_aws_appconfig_hosted_configuration_version.go
index f9c0507c0..3c9809463 100644
--- a/aws/resource_aws_appconfig_hosted_configuration_version.go
+++ b/aws/resource_aws_appconfig_hosted_configuration_version.go
@@ -18,6 +18,7 @@ import (
 func resourceAwsAppconfigHostedConfigurationVersion() *schema.Resource {
    return &schema.Resource{
        Create: resourceAwsAppconfigHostedConfigurationVersionCreate,
+       Update: resourceAwsAppconfigHostedConfigurationVersionUpdate,
        Read:   resourceAwsAppconfigHostedConfigurationVersionRead,
        Delete: resourceAwsAppconfigHostedConfigurationVersionDelete,
        Importer: &schema.ResourceImporter{
@@ -44,7 +45,6 @@ func resourceAwsAppconfigHostedConfigurationVersion() *schema.Resource {
            "content": {
                Type:      schema.TypeString,
                Required:  true,
-               ForceNew:  true,
                Sensitive: true,
            },
            "content_type": {
@@ -95,6 +95,14 @@ func resourceAwsAppconfigHostedConfigurationVersionCreate(d *schema.ResourceData
    return resourceAwsAppconfigHostedConfigurationVersionRead(d, meta)
 }

+func resourceAwsAppconfigHostedConfigurationVersionUpdate(d *schema.ResourceData, meta interface{}) error {
+   if d.HasChange("content") {
+       return resourceAwsAppconfigHostedConfigurationVersionCreate(d, meta) // propagate errors; Create already refreshes state via Read
+   }
+
+   return resourceAwsAppconfigHostedConfigurationVersionRead(d, meta)
+}
+
 func resourceAwsAppconfigHostedConfigurationVersionRead(d *schema.ResourceData, meta interface{}) error {
    conn := meta.(*AWSClient).appconfigconn

anGie44 commented 3 years ago

Ohh nice call on Lambda layer versions @ryandeivert, and thanks for the code snippet! That actually raises a good point to keep in mind: the AWS services' behaviors differ during destroy/create, unfortunately. In the case of Lambda layer versions, behind the scenes we actually explicitly call DeleteLayerVersion with the current version number before creating the new resource (similar to the Delete behavior we have in AppConfig).

https://github.com/hashicorp/terraform-provider-aws/blob/e60926c7ba1e3bce6bece9a775fa4560933bf1d6/aws/resource_aws_lambda_layer_version.go#L241-L259

And then when the new resource is created, AWS Lambda automatically increments the version number. So if a layer version is created as version 1, a subsequent destroy/create would result in version 2. In this case Terraform technically does its part to keep track of the resource, but it's super interesting that the upstream Lambda API's handling is different from what we're seeing in AppConfig.

ejsolberg commented 2 years ago

I noticed this when testing AppConfig rollbacks. The rollback seems to work functionally: the previous (valid) contents are returned to clients via get-configuration. However, the AWS console does not surface this re-activated version of the config (which was destroyed by Terraform). In addition to the confusion this may cause, I worry that this may not be an officially supported state and might break in the future.

Could the Terraform lifecycle of the hosted configuration operate similarly to the lifecycle of S3 objects (aws_s3_bucket_object)? Content changes update in place for the Terraform resource but produce a new version in S3. The old versions are not deleted unless the object resource itself is destroyed; on destroy of the Terraform object resource, all previous versions are deleted from S3 as well.
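
For illustration, a minimal sketch of that S3 pattern (v3-era resource names; bucket and file names hypothetical):

resource "aws_s3_bucket" "config" {
  bucket = "example-config-bucket"

  versioning {
    enabled = true
  }
}

# Content changes are in-place updates for the Terraform resource, but each
# one produces a new object version in S3; old versions remain until the
# object resource itself is destroyed.
resource "aws_s3_bucket_object" "configuration" {
  bucket  = aws_s3_bucket.config.id
  key     = "configuration.json"
  content = file("${path.module}/configs/configuration.json")
}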

srhaber commented 2 years ago

I've had success using the create_before_destroy = true lifecycle hook. This seems to do the trick: it bumps the hosted configuration version and then deletes the prior version.
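
A minimal sketch of that lifecycle hook (resource and file names illustrative):

resource "aws_appconfig_hosted_configuration_version" "configuration" {
  application_id           = aws_appconfig_application.application.id
  configuration_profile_id = aws_appconfig_configuration_profile.profile.configuration_profile_id
  content_type             = "application/json"
  content                  = file("${path.module}/configs/configuration.json")

  lifecycle {
    # Publish the new (higher-numbered) version first, then delete the old one.
    create_before_destroy = true
  }
}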

gsiffert commented 1 year ago

Any update from the maintainers? For your information, the expected behavior described in this thread is the one adopted by the CDK and CloudFormation.

yipsmith commented 1 year ago

bump...any movement on this?

GaxZE commented 1 year ago

I think I got around this (maybe not), but thought I'd share in case it helps anybody.

resource "aws_appconfig_hosted_configuration_version" "logging_version_true" {
  application_id           = aws_appconfig_application.app_a.id
  configuration_profile_id = aws_appconfig_configuration_profile.config_logs.configuration_profile_id
  content_type             = "application/json"

  content = jsonencode({ "logs_enabled" : true })

}

resource "aws_appconfig_hosted_configuration_version" "logging_version_false" {
  application_id           = aws_appconfig_application.app_a.id
  configuration_profile_id = aws_appconfig_configuration_profile.config_logs.configuration_profile_id
  content_type             = "application/json"

  content = jsonencode({ "logs_enabled" : false })

  depends_on = [
    aws_appconfig_hosted_configuration_version.logging_version_true
  ]
}

Using depends_on allows it to delete/create in the order we need. Destroying and reapplying all work as I need.

mfabricanti commented 1 year ago

+1 any updates on this?

We're trying to implement AppConfig here, but this behavior (recreating the current version instead of creating a new one) is a blocker.

anilchalissery commented 1 year ago

I created a Terraform module that creates an AppConfig application, environment, profile, hosted version, and deployment.

When we create the deployment, we pass the hosted version number to it. On each edit of the hosted version, the provider destroys the current one and creates a new one with the same version number. Because of this, the deployment resource doesn't detect any change in value and stays pointed at the same old value.

For example, on the first run we create hosted version 1 and deploy it (via an AppConfig deployment). On the second run, the provider deletes and recreates hosted version 1, and the deployment step doesn't detect a change since version 1 is already deployed.

Isn't this the same issue? Am I missing something?

hmorgado commented 11 months ago

bump ➕

any updates on this? thank you

hmorgado commented 11 months ago

> I've had success using the create_before_destroy = true lifecycle hook. This seems to do the trick: it bumps the hosted configuration version and then deletes the prior version.

Hi! Could you kindly share your code for this? Also, if it deletes the older version, did you lose your versioning then? Thanks

vishalkc commented 10 months ago

+1 Any update on this one? We have a requirement to keep all configuration versions and I am blocked because of this limitation.

obondarenko1 commented 9 months ago

Bump. Any chance this will be fixed any time soon?

barneyparker commented 8 months ago

Any updates on this? @YakDriver added a link to a resolution used for aws_ecs_task_definition resources, where a skip_destroy flag was added to tell Terraform to create new ECS task definition revisions without deleting the old ones.

It feels like this would be a nice workaround that lets us keep working within Terraform, rather than the current (only) option: deploying an initial config from Terraform, pushing subsequent configs with the AWS CLI, and using a lifecycle { ignore_changes = [content] } block.
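
For reference, a sketch of how skip_destroy reads on ECS task definitions today (names illustrative); a hypothetical AppConfig equivalent would presumably look the same:

resource "aws_ecs_task_definition" "example" {
  family                = "example"
  container_definitions = file("${path.module}/task-definitions/example.json")

  # Retain old revisions when this resource is replaced or destroyed,
  # instead of deregistering them.
  skip_destroy = true
}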

mnylensc commented 7 months ago

Yeah, this would be great. I'm working around this limitation now by using a Lambda function to create the hosted configuration version, and calling it with the aws_lambda_invocation resource from Terraform:

module "create_hosted_configuration_version_lambda" {
  source        = "terraform-aws-modules/lambda/aws"
  version       = "~> 7.2.0"
  function_name = "appconfig-create-hosted-configuration-version"
  handler       = "index.handler"
  runtime       = "python3.8"
  publish       = true
  source_path   = "${path.module}/lambda/create_hosted_configuration_version"

  create_role              = true
  attach_policy_statements = true

  policy_statements = {
    allow-appconfig-create-hosted-configuration-version = {
      effect    = "Allow"
      actions   = ["appconfig:CreateHostedConfigurationVersion"]
      resources = ["*"]
    }
  }
}

The lambda function code in ${path.module}/lambda/create_hosted_configuration_version/index.py:

import boto3

def handler(event, context):
    # Publish a new hosted configuration version from the invocation payload.
    appconfig = boto3.client('appconfig')

    application_id = event['application_id']
    description = event['description']
    configuration_profile_id = event['configuration_profile_id']
    content_type = event['content_type']
    content = event['content']

    response = appconfig.create_hosted_configuration_version(
        ApplicationId=application_id,
        Description=description,
        ConfigurationProfileId=configuration_profile_id,
        ContentType=content_type,
        Content=content.encode('utf-8')
    )

    return {
        'VersionNumber': response['VersionNumber'],
    }

Then, to use it:

resource "aws_lambda_invocation" "create_hosted_configuration_version" {
  function_name = module.create_hosted_configuration_version_lambda.lambda_function_name

  input = jsonencode({
    application_id           = aws_appconfig_application.default.id
    description              = "My configuration"
    configuration_profile_id = aws_appconfig_configuration_profile.default.configuration_profile_id
    content_type             = "application/json"
    content                  = file("${path.module}/configs/some-configuration.json")
  })
}

locals {
  configuration_version = jsondecode(aws_lambda_invocation.create_hosted_configuration_version.result)["VersionNumber"]
}

resource "aws_appconfig_deployment" "default" {
  application_id           = aws_appconfig_application.default.id
  environment_id           = aws_appconfig_environment.default.environment_id
  configuration_profile_id = aws_appconfig_configuration_profile.default.configuration_profile_id
  configuration_version    = local.configuration_version
  deployment_strategy_id   = aws_appconfig_deployment_strategy.all_at_once_fast.id
}

As an added bonus, when using the aws_lambda_invocation resource, terraform plan can show exactly what changes in the input and thus in the configuration.

This could be improved further by using the new CRUD lifecycle scope of the aws_lambda_invocation resource: on delete it would destroy the old versions, and on create/update it would create a new version. As written above, it prevents terraform destroy from completing, because you can't delete a configuration profile while it still has hosted versions in it.
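
A sketch of that improvement, assuming the lifecycle_scope attribute of aws_lambda_invocation behaves as documented (the CRUD scope re-invokes the function on update and destroy, passing the action in an extra input key, "tf" by default):

resource "aws_lambda_invocation" "create_hosted_configuration_version" {
  function_name   = module.create_hosted_configuration_version_lambda.lambda_function_name
  lifecycle_scope = "CRUD"

  # With CRUD scope the provider injects a "tf" key into the payload with the
  # action ("create", "update", or "delete"); the function could branch on it
  # to publish a new version or clean up old ones.
  input = jsonencode({
    application_id           = aws_appconfig_application.default.id
    configuration_profile_id = aws_appconfig_configuration_profile.default.configuration_profile_id
    content_type             = "application/json"
    content                  = file("${path.module}/configs/some-configuration.json")
  })
}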

ezequiel-navarrete-uala commented 2 months ago

Hi, any update on this? I have a similar problem that would be solved by the same fix: we can't see the changes in the content because the resource is recreated, so creating a new version instead of erasing the current one would help. Thanks for any update.

Zordrak commented 1 month ago

For what it's worth, my use case is comfortable with the recreation of the same version, although I think the version is in fact being set by the JSON input - it's certainly part of the input schema. But I can't get the plan/apply not to recreate the resource on every run: the input, without changing, is being deemed sensitive and indeterminate, and so it forces a new resource with no changes.

Edit: I found the cause of this; it may even need to be a new issue, although it's related.

When you update FeatureFlag content, AWS updates the _createdAt and _updatedAt values accordingly. This means every single apply updates the content value, so it's not possible to give input that will match the subsequent output; therefore it doesn't seem possible to use Terraform to set a hosted configuration version without a permadiff.

As a result, I can only think to add ignore_changes and hope for the best - but that will mean forcing a taint for any updates. Bit of an impasse :'(

Edit 2: Workaround successful. For anyone hitting this comment from Google, this is what I did:

resource "aws_appconfig_hosted_configuration_version" "featureflags" {
  application_id           = aws_appconfig_application.featureflags.id
  configuration_profile_id = aws_appconfig_configuration_profile.featureflags.configuration_profile_id
  description              = "${local.csi} Feature Flags"
  content_type             = "application/json"
  content                  = jsonencode(local.featureflags)

  lifecycle {
    ignore_changes = [ 
      content,
    ]

    replace_triggered_by = [ 
      terraform_data.featureflags,
    ]
  }
}

resource "terraform_data" "featureflags" {
  input = local.featureflags
}

soisyourface commented 4 weeks ago

Is there a consensus on whether or not this work should be done? Perhaps to match CDK/CloudFormation as indicated in an earlier comment?