hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

Dependency issue with ignition remote s3 configuration #14182

Open obourdon opened 4 years ago

obourdon commented 4 years ago

I have a `data "ignition_config"` that I render as JSON into a remote S3 bucket exposing an HTTP website interface; it is then used in the following blocks:

locals {
  machine_name    = "my-machine"
}

data "ignition_config" "machine_remote" {
  // ... whatever needs to go there
}

resource "aws_s3_bucket_object" "machine_remote_config" {
  bucket       = "${data.aws_s3_bucket.ignition_remote_cfgs.id}"
  key          = "${local.machine_name}-ignition.json"
  content_type = "application/json"
  content      = "${data.ignition_config.machine_remote.rendered}"
}

data "ignition_config" "machine" {
  //depends_on = ["aws_s3_bucket_object.machine_remote_config"]

  replace = [
    {
      source = "http://${data.aws_s3_bucket.ignition_remote_cfgs.website_endpoint}/${local.machine_name}-ignition.json"
    },
  ]
}
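As a possible rework (untested sketch, assuming Terraform 0.12+ syntax), the dependency could be expressed implicitly by referencing the bucket object's attributes instead of using depends_on, so the object is still created before the data source is read:

```hcl
data "ignition_config" "machine" {
  replace = [
    {
      # Implicit dependency: referencing aws_s3_bucket_object.machine_remote_config.key
      # makes Terraform create the S3 object before reading this data source,
      # without depends_on, which on a data source always defers the read to
      # apply time and so produces a perpetual diff.
      source = "http://${data.aws_s3_bucket.ignition_remote_cfgs.website_endpoint}/${aws_s3_bucket_object.machine_remote_config.key}"
    },
  ]
}
```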

If I comment out the depends_on, then, as one can expect, there are cases where the machine starts before the S3 bucket contents are created; the machine bootstrap mechanism then fails, with system-log messages saying that the remote ignition config cannot be found. This is perfectly normal.

If I uncomment the depends_on, everything goes well, but I lose the idempotency of Terraform when I run plan/apply after the first successful deployment:

 <= data.ignition_config.machine
      id:                                 <computed>
      rendered:                           <computed>
      replace.#:                          "1"
      replace.0.source:                   "http://s3-XXX-ignition-remote-configs.s3-website-MY-ZONE.amazonaws.com/my-machine-ignition.json"

-/+ aws_autoscaling_group.machine (new resource required)
      id:                                 "my machine - terraform-20200715133124855700000002" => <computed> (forces new resource)
      arn:                                "arn:aws:autoscaling:MY-ZONE:981467355511:autoScalingGroup:UUID:autoScalingGroupName/my machine- terraform-20200715133124855700000002" => <computed>
      default_cooldown:                   "300" => <computed>
      desired_capacity:                   "1" => "1"
      force_delete:                       "true" => "true"
      health_check_grace_period:          "30" => "30"
      health_check_type:                  "ELB" => "ELB"
      launch_configuration:               "terraform-20200715133124855700000002" => "${aws_launch_configuration.machine.id}"
      load_balancers.#:                   "0" => <computed>
      max_size:                           "1" => "1"
      metrics_granularity:                "1Minute" => "1Minute"
      min_size:                           "1" => "1"
      name:                               "my machine - terraform-20200715133124855700000002" => "my machine - ${aws_launch_configuration.machine.name}" (forces new resource)
      placement_group:                    " My Spread Placement Group" => "My Spread Placement Group"
      protect_from_scale_in:              "false" => "false"
      service_linked_role_arn:            "arn:aws:iam::UID:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling" => <computed>
      tag.#:                              "3" => "3"
...
      vpc_zone_identifier.#:              "3" => "3"
...
      wait_for_capacity_timeout:          "10m" => "10m"
      wait_for_elb_capacity:              "1" => "1"

-/+ aws_launch_configuration.machine (new resource required)
      id:                                 "terraform-20200715133124855700000002" => <computed> (forces new resource)
      arn:                                "arn:aws:autoscaling:MY-ZONE:UUID:launchConfiguration:UUID:launchConfigurationName/terraform-20200715133124855700000002" => <computed>
      associate_public_ip_address:        "false" => "false"
      ebs_block_device.#:                 "0" => <computed>
      ebs_optimized:                      "false" => <computed>
      enable_monitoring:                  "true" => "true"
      iam_instance_profile:               "nomad_client_profile" => "nomad_client_profile"
      image_id:                           "ami-XXX" => "ami-XXX"
      instance_type:                      "t3.medium" => "t3.medium"
      key_name:                           "admin_key" => "admin_key"
      name:                               "terraform-20200715133124855700000002" => <computed>
      root_block_device.#:                "0" => <computed>
      security_groups.#:                  "4" => "4"
...
      user_data:                          "22ebd747839361b3bef8858b156e74f857ea2fe6" => "f61342e7198cd31450f2ca1ca3bddec63c28513d" (forces new resource)

Plan: 2 to add, 0 to change, 2 to destroy.

I do not understand why the user_data in my launch configuration is changing, thereby re-spawning all my machine(s). Any idea, please?
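For context, the launch configuration presumably consumes the rendered config along these lines (hypothetical sketch, not my exact config), which would explain the hash change: with depends_on set, the ignition_config data source is read only at apply time, so `rendered` shows as `<computed>` in the plan above, and the derived user_data hash differs from state on every run:

```hcl
resource "aws_launch_configuration" "machine" {
  # ... image_id, instance_type, etc. elided

  # Because depends_on forces data.ignition_config.machine to be read at
  # apply time, rendered is <computed> during plan, so this user_data hash
  # never matches the stored one and forces a new launch configuration.
  user_data = data.ignition_config.machine.rendered
}
```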

obourdon commented 4 years ago

Forgot to mention: Terraform 0.11.14, AWS provider 2.65 or 2.69, ignition provider 1.2.1.

obourdon commented 3 years ago

@ewbankkit @bflad @aeschright @paultyng @ryndaniels @nywilken sorry to ping you directly, but it has been 2 months since I posted this and there has been no feedback yet, and I desperately need to find out what is wrong.

In the meantime, I have also reproduced this behaviour with TF 0.12.29 and AWS provider 3.6.0

If you could give me some hints on how to keep the dependency order without triggering the respawn, I would greatly appreciate it.

Many thanks in advance.

obourdon commented 2 years ago

Any insight on this please ?

obourdon commented 2 years ago

@breathingdust could you please put this back into the bug / needs-triage state?

obourdon commented 1 year ago

Any insights ?