hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws

CloudFront distribution always shows to be updated when origin_shield is added #24323

Open sivanovhm opened 2 years ago

sivanovhm commented 2 years ago

Terraform CLI and Terraform AWS Provider Version

Terraform v1.1.7
on linux_amd64
+ provider registry.terraform.io/hashicorp/archive v2.2.0
+ provider registry.terraform.io/hashicorp/aws v4.10.0
+ provider registry.terraform.io/hashicorp/external v2.2.2
+ provider registry.terraform.io/hashicorp/null v3.1.1
+ provider registry.terraform.io/hashicorp/random v3.1.2
+ provider registry.terraform.io/hashicorp/template v2.2.0
+ provider registry.terraform.io/integrations/github v4.11.0
+ provider registry.terraform.io/pagerduty/pagerduty v2.4.0

Affected Resource(s)

aws_cloudfront_distribution

Terraform Configuration Files

Full aws_cloudfront_distribution configuration
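
(The full configuration was attached as a collapsed block and is not reproduced here. For context, the following is a minimal sketch consistent with the plan output shown under Actual Behavior: the origin values come from that diff, while the default_cache_behavior, restrictions, and viewer_certificate blocks are assumed filler needed to make the resource valid and are not the reporter's actual settings.)

# Sketch only: not the reporter's real configuration.
variable "enable_cloudfront_origin_shield" {
  type    = bool
  default = true
}

data "aws_region" "current" {}

resource "aws_cloudfront_distribution" "this" {
  enabled = true

  origin {
    domain_name         = "some-alb-domain.com"
    origin_id           = "alb-origin"
    connection_attempts = 1
    connection_timeout  = 10

    custom_origin_config {
      http_port                = 80
      https_port               = 443
      origin_keepalive_timeout = 5
      origin_protocol_policy   = "https-only"
      origin_read_timeout      = 60
      origin_ssl_protocols     = ["TLSv1.2"]
    }

    origin_shield {
      enabled              = var.enable_cloudfront_origin_shield
      origin_shield_region = data.aws_region.current.name
    }
  }

  # Assumed filler below this point.
  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "alb-origin"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}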

Expected Behavior

Performing a terraform plan after origin_shield has already been added via Terraform should not mark aws_cloudfront_distribution for an in-place update. The same should hold when enabled is set to false.

Actual Behavior

Running terraform plan after origin_shield has already been added via Terraform still shows aws_cloudfront_distribution as being updated in place. Likewise, when enabled is set to false, every plan still shows that it is going to set it to false.

      + origin {
          + connection_attempts = 1
          + connection_timeout  = 10
          + domain_name         = "some-alb-domain.com"
          + origin_id           = "alb-origin"

          + custom_origin_config {
              + http_port                = 80
              + https_port               = 443
              + origin_keepalive_timeout = 5
              + origin_protocol_policy   = "https-only"
              + origin_read_timeout      = 60
              + origin_ssl_protocols     = [
                  + "TLSv1.2",
                ]
            }

          + origin_shield {
              + enabled              = false
              + origin_shield_region = "eu-west-1"
            }
        }
      - origin {
          - connection_attempts = 1 -> null
          - connection_timeout  = 10 -> null
          - domain_name         = "some-alb-domain.com" -> null
          - origin_id           = "alb-origin" -> null

          - custom_origin_config {
              - http_port                = 80 -> null
              - https_port               = 443 -> null
              - origin_keepalive_timeout = 5 -> null
              - origin_protocol_policy   = "https-only" -> null
              - origin_read_timeout      = 60 -> null
              - origin_ssl_protocols     = [
                  - "TLSv1.2",
                ] -> null
            }
        }

Moreover, when we tried to work around this issue with a dynamic block:

 dynamic "origin_shield" {
      for_each = var.enable_cloudfront_origin_shield ? [1] : []
      content {
        enabled              = true
        origin_shield_region = data.aws_region.current.name
      }
    }

We observed the following:

      - origin {
          - connection_attempts = 1 -> null
          - connection_timeout  = 10 -> null
          - domain_name         = "some-alb-domain.com" -> null
          - origin_id           = "alb-origin" -> null

          - custom_origin_config {
              - http_port                = 80 -> null
              - https_port               = 443 -> null
              - origin_keepalive_timeout = 5 -> null
              - origin_protocol_policy   = "https-only" -> null
              - origin_read_timeout      = 60 -> null
              - origin_ssl_protocols     = [
                  - "TLSv1.2",
                ] -> null
            }

          - origin_shield {
              - enabled              = true -> null
              - origin_shield_region = "eu-central-1" -> null
            }
        }
      + origin {
          + connection_attempts = 1
          + connection_timeout  = 10
          + domain_name         = "some-alb-domain.com"
          + origin_id           = "alb-origin"

          + custom_origin_config {
              + http_port                = 80
              + https_port               = 443
              + origin_keepalive_timeout = 5
              + origin_protocol_policy   = "https-only"
              + origin_read_timeout      = 60
              + origin_ssl_protocols     = [
                  + "TLSv1.2",
                ]
            }
        }

Steps to Reproduce

  1. Add
    origin_shield {
      enabled              = var.enable_cloudfront_origin_shield
      origin_shield_region = data.aws_region.current.name
    }

    to your aws_cloudfront_distribution resource for your ALB origin where enable_cloudfront_origin_shield is a boolean variable.

  2. terraform plan
  3. terraform apply
  4. terraform plan

justinretzolk commented 2 years ago

Hey @sivanovhm 👋 Thank you for taking the time to raise this. I'm mostly acting in a triage capacity on this issue, but I noticed one callout in the aws_cloudfront_distribution documentation:

CloudFront distributions take about 15 minutes to reach a deployed state after creation or modification.

That note is more about deletion after creation/modification, but I'm wondering if you may be hitting some eventual-consistency issues here. If you wait 15 minutes or so after running an apply, does the same issue persist?

sivanovhm commented 2 years ago

Hey @justinretzolk, I can confirm that even after 15 minutes (I waited 60 minutes, just in case), origin shield still shows as enabled on the distribution.

Please note that this is only when a dynamic block is used.

If it is used "normally", with enabled = false, there is no problem with disabling origin shield for the distribution. The main goal of this issue is to stop the repeated in-place updates that Terraform shows on plan.

I believe these are most likely two separate problems that are caused by the same thing.

tom10271 commented 1 year ago

In our case, Terraform always determines that default_ttl and max_ttl need to be updated:

# aws_cloudfront_distribution.image-handler-cdn will be updated in-place
  ~ resource "aws_cloudfront_distribution" "image-handler-cdn" {
        id                             = "E3TE5L31UZX1IO"
        tags                           = {}
        # (18 unchanged attributes hidden)

      ~ ordered_cache_behavior {
          ~ default_ttl            = 0 -> 86400
          ~ max_ttl                = 0 -> 31536000
            # (11 unchanged attributes hidden)

            # (1 unchanged block hidden)
        }

        # (4 unchanged blocks hidden)
    }

Rorkal commented 1 year ago

I had the same issue as @tom10271.

In my case it was my ordered_cache_behavior which was misconfigured:

If using Managed-CachingDisabled, just set default_ttl and max_ttl to 0.
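
(For illustration, a minimal sketch of the kind of behavior block this describes; the path pattern, methods, and origin ID are placeholders, and the managed policy is looked up via the aws_cloudfront_cache_policy data source. The surrounding aws_cloudfront_distribution arguments are omitted.)

# Sketch only: look up the AWS-managed CachingDisabled policy by name.
data "aws_cloudfront_cache_policy" "caching_disabled" {
  name = "Managed-CachingDisabled"
}

# Inside the aws_cloudfront_distribution resource (other arguments omitted):
ordered_cache_behavior {
  path_pattern           = "/api/*"                     # placeholder
  allowed_methods        = ["GET", "HEAD", "OPTIONS"]   # placeholder
  cached_methods         = ["GET", "HEAD"]
  target_origin_id       = "alb-origin"
  viewer_protocol_policy = "redirect-to-https"
  cache_policy_id        = data.aws_cloudfront_cache_policy.caching_disabled.id

  # With CachingDisabled attached, keep the distribution-level TTLs at 0 so
  # the plan stops showing spurious default_ttl/max_ttl changes.
  min_ttl     = 0
  default_ttl = 0
  max_ttl     = 0
}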

tom10271 commented 1 year ago

My finding is that if you are using a cache policy, you don't need to specify the TTLs at all; just delete them.
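
(As a sketch of that variant, reusing the hypothetical policy lookup from the block after Rorkal's comment above, the same behavior simply drops the TTL arguments and lets the attached cache policy supply them.)

ordered_cache_behavior {
  path_pattern           = "/api/*"                     # placeholder
  allowed_methods        = ["GET", "HEAD", "OPTIONS"]   # placeholder
  cached_methods         = ["GET", "HEAD"]
  target_origin_id       = "alb-origin"
  viewer_protocol_policy = "redirect-to-https"

  # No min_ttl / default_ttl / max_ttl here; the attached cache policy
  # defines the effective TTLs.
  cache_policy_id = data.aws_cloudfront_cache_policy.caching_disabled.id
}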

jakubjakubeuvic commented 11 months ago

Any update on that?

kwn commented 1 month ago

My finding is that if you are using a cache policy, you don't need to specify the TTLs at all; just delete them.

Unless you prefer to control it... The default TTL for Managed-CachingOptimized is 1 day, which might be too short in some cases.

tom10271 commented 1 month ago

My finding is that if you are using a cache policy, you don't need to specify the TTLs at all; just delete them.

Unless you prefer to control it... The default TTL for Managed-CachingOptimized is 1 day, which might be too short in some cases.

No genius, the point is that if you want to set the TTL, you should set it in the cache policy, not on the CloudFront distribution.

kwn commented 1 month ago

No genius, the point is that if you want to set the TTL, you should set it in the cache policy, not on the CloudFront distribution.

Why would I create and maintain my own policy if I can just override the default values of the AWS-managed one? Fewer resources to maintain, fewer references to pass between modules, and less complexity is definitely worth it.

tom10271 commented 1 month ago

No genius, the point is that if you want to set the TTL, you should set it in the cache policy, not on the CloudFront distribution.

Why would I create and maintain my own policy if I can just override the default values of the AWS-managed one? Fewer resources to maintain, fewer references to pass between modules, and less complexity is definitely worth it.

The reason is extremely simple: when a CloudFront distribution behavior uses a cache policy, there is no input field to set the TTL at all. This is how AWS works. You might say Terraform allows it, but that is misleading; AWS simply does not let you set the TTL at the distribution level when a cache policy is attached, and takes the TTL from the cache policy only. And yes, if you are not happy with the default TTL, which is only 86400 seconds, you have to create your own cache policy.
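
(To make the distinction concrete, here is a minimal sketch of the approach described above; the policy name and TTL values are hypothetical and not taken from anyone's actual configuration in this thread.)

# Hypothetical custom cache policy: the TTLs are declared here, not on the
# distribution's behavior.
resource "aws_cloudfront_cache_policy" "custom" {
  name        = "custom-caching-optimized"
  min_ttl     = 0
  default_ttl = 604800    # e.g. 7 days instead of the managed policy's 86400
  max_ttl     = 31536000

  parameters_in_cache_key_and_forwarded_to_origin {
    enable_accept_encoding_brotli = true
    enable_accept_encoding_gzip   = true

    cookies_config {
      cookie_behavior = "none"
    }
    headers_config {
      header_behavior = "none"
    }
    query_strings_config {
      query_string_behavior = "none"
    }
  }
}

# The behavior then references the custom policy instead of carrying TTLs:
#   cache_policy_id = aws_cloudfront_cache_policy.custom.id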