hashicorp / terraform-provider-google

Terraform Provider for Google Cloud Platform
https://registry.terraform.io/providers/hashicorp/google/latest/docs
Mozilla Public License 2.0

Bucket object gets destroyed when a resource with create_before_destroy has dependency on it #9127

Open kishorviswanathan opened 3 years ago

kishorviswanathan commented 3 years ago


Terraform Version

Terraform v0.15.1
on linux_amd64

Affected Resource(s)

  * google_storage_bucket_object
  * google_compute_instance_template

Terraform Configuration Files

variable "project" {
  description = "GCP project"
}

variable "region" {
  description = "GCP Region"
  default     = "us-central1"
}

variable "machine_type" {
  description = "GCE Machine type"
  default     = "f1-micro"
}

provider "google" {
  project = var.project
  region  = var.region
}

resource "google_compute_instance_template" "default" {
  name_prefix  = "instance-template-"
  machine_type = var.machine_type

  disk {
    source_image = "debian-10"
    auto_delete  = true
    boot         = true
  }

  network_interface {
    subnetwork = "default"
  }

  metadata = {
    object_hash = google_storage_bucket_object.test_object.md5hash
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "google_compute_instance_group_manager" "instance_group_manager" {
  name               = "instance-group-manager"
  version {
    instance_template = google_compute_instance_template.default.self_link
  }
  base_instance_name = "instance-group-manager"
  zone               = "us-central1-f"
  target_size        = 1
}

resource "google_storage_bucket" "bucket" {
  name = "test-bucket-128319873"
}

resource "google_storage_bucket_object" "test_object" {
  name   = "test_object"
  bucket = google_storage_bucket.bucket.name
  source = "${path.module}/test_object"
}

Expected Behavior

When updating contents of test_object:

  1. google_storage_bucket_object should be replaced.
  2. A replacement google_compute_instance_template should be created.
  3. The old google_compute_instance_template should be deleted.

Actual Behavior

  1. google_storage_bucket_object is created.
  2. A replacement google_compute_instance_template is created.
  3. The old google_compute_instance_template is deleted.
  4. google_storage_bucket_object is deleted.
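
The failure mode listed above can be sketched abstractly: when create-before-destroy ordering applies to a resource whose name cannot change, the replacement is created under the same name, and the subsequent destroy of the "old" resource removes the replacement too. A minimal, hypothetical Python model of that ordering (not Terraform itself; the bucket is just a dict keyed by object name):

```python
# Model a GCS bucket as a dict: object names are unique per bucket,
# so "create" with an existing name overwrites, and "destroy" deletes
# by name regardless of which "copy" wrote it.
bucket = {}

def create(name, content):
    """Create (or silently overwrite) an object."""
    bucket[name] = content

def destroy(name):
    """Delete an object by name."""
    bucket.pop(name, None)

# First apply: the object exists with version 1 of the file.
create("test_object", "v1")

# Second apply, create-before-destroy ordering: the replacement is
# created first, but it reuses the same fixed name and overwrites v1.
create("test_object", "v2")

# Then the "old" object is destroyed -- by name -- which deletes the
# replacement that was just written.
destroy("test_object")

print(bucket)  # the object is gone, matching step 4 above
```

This mirrors the four steps in the Actual Behavior list: because the object name never changes, the create and destroy halves of the replacement collide on the same key.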

Steps to Reproduce

  1. Put some content in the test_object file.
  2. terraform apply
  3. Update the content of test_object.
  4. terraform apply. This should trigger a replacement for the instance-template.
  5. Check if the google_storage_bucket_object still exists.

References

venkykuberan commented 3 years ago

@kishorv06 In step 4, do you mean the new storage object created in step 1 is getting deleted?

Actual Behavior
1. google_storage_bucket_object is created.
2. Replacement for google_compute_instance_template is created
3. Old google_compute_instance_template is deleted.
4. google_storage_bucket_object is deleted.

I don't see item 4 happening; the old storage object is correctly replaced by the new one.

Can you please attach your debug log so we can understand what's going on?

You should see only one DELETE call to the storage API, like the one below:

DELETE /storage/v1/b/cloudfunction-xxxx/o/startInstancePubSub1.zip?alt=json&prettyPrint=false HTTP/1.1

kishorviswanathan commented 3 years ago

@venkykuberan By new object, I guess you mean the replacement resource created because of the create_before_destroy flag?

I am not sure google_storage_bucket_object even supports create_before_destroy, since the resource doesn't get a random suffix in its self_link and object names appear to be unique within a bucket.
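
One possible workaround along those lines (an untested sketch, not confirmed by the maintainers) is to fold the file's content hash into the object name with Terraform's filemd5() function, so a content change produces a distinct object name and create-before-destroy no longer collides on the same key:

```hcl
resource "google_storage_bucket_object" "test_object" {
  # Hypothetical workaround: embed the content hash in the name so each
  # change to the file yields a new object, letting the replacement be
  # created before the old object is deleted.
  name   = "test_object-${filemd5("${path.module}/test_object")}"
  bucket = google_storage_bucket.bucket.name
  source = "${path.module}/test_object"

  lifecycle {
    create_before_destroy = true
  }
}
```

The trade-off is that consumers must reference the object via the resource attribute (google_storage_bucket_object.test_object.name) rather than a hard-coded object name.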

The end result in my case is that the object goes missing from the bucket and the next terraform plan shows a diff.

I will post the debug log ASAP.

kishorviswanathan commented 3 years ago

Sorry, the example I originally posted didn't reproduce the issue, so I have updated it to one that does.

Debug log: https://pastebin.com/PQTQCx1M

As you can see in the debug log, the second apply (after updating the content of the file) wasn't applied properly: the storage object was deleted, and I had to run apply once again to recreate it.