vultr / terraform-provider-vultr

Terraform Vultr provider
https://www.terraform.io/docs/providers/vultr/
Mozilla Public License 2.0

[BUG] - vultr_kubernetes fails to register cluster deletion #313

Open johnjmartin opened 1 year ago

johnjmartin commented 1 year ago

Describe the bug

Sometimes, when a Vultr k8s cluster gets deleted, either manually or when applying a destructive Terraform update, the provider does not properly register the deletion. This causes Terraform to fail when updating state.

To Reproduce

Steps to reproduce the behavior:

  1. Manually create a vultr k8s cluster outside of Terraform
  2. Add a vultr_kubernetes resource to Terraform with the same name (a minimal config sketch follows this list)
  3. Apply this change and allow Vultr to replace the k8s cluster:
    $ terraform apply
    # vultr_kubernetes.sjc must be replaced
    -/+ resource "vultr_kubernetes" "sjc" {
      ~ cluster_subnet = "10.244.0.0/16" -> (
      + region         = "sjc" # forces replacement
    ... 
  4. See error
    vultr_kubernetes.sjc: Destroying... [id=c3b49514-745e-4fa5-b60e-8c3f19d128a8]
    vultr_kubernetes.sjc: Still destroying... [id=c3b49514-745e-4fa5-b60e-8c3f19d128a8, 10s elapsed]
    vultr_kubernetes.sjc: Still destroying... [id=c3b49514-745e-4fa5-b60e-8c3f19d128a8, 20s elapsed]
    vultr_kubernetes.sjc: Still destroying... [id=c3b49514-745e-4fa5-b60e-8c3f19d128a8, 30s elapsed]
    vultr_kubernetes.sjc: Still destroying... [id=c3b49514-745e-4fa5-b60e-8c3f19d128a8, 40s elapsed]
    ╷
    │ Error: error deleting VKE c3b49514-745e-4fa5-b60e-8c3f19d128a8 : gave up after 4 attempts, last error: "{\"error\":\"Internal server error.\",\"status\":500}"
  5. Continue to get errors on every subsequent terraform plan/apply:
    $ tf plan -target=vultr_kubernetes.sjc
    │ Error: error getting cluster (c3b49514-745e-4fa5-b60e-8c3f19d128a8): gave up after 4 attempts, last error: "{\"error\":\"Internal server error.\",\"status\":500}"
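
For reference, a minimal vultr_kubernetes configuration of the kind involved here might look like the sketch below; the resource name and region come from the plan output above, while the label, version, and node pool values are illustrative assumptions:

    resource "vultr_kubernetes" "sjc" {
      region  = "sjc"
      label   = "sjc-cluster" # illustrative label
      version = "v1.25.4+1"   # assumed VKE version string

      # Default node pool; plan and size are assumptions.
      node_pools {
        node_quantity = 2
        plan          = "vc2-2c-4gb"
        label         = "sjc-nodes"
      }
    }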

Expected behavior

I expect Vultr to properly tear down the old cluster and stand up the new one.

Versions

tf --version
Terraform v1.3.7
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v4.50.0
+ provider registry.terraform.io/vultr/vultr v2.12.0
optik-aper commented 1 year ago

@johnjmartin what do you mean in step 2? A cluster created in my.vultr and one created through Terraform will have different IDs; unless you import the cluster created outside of Terraform, they're two separate clusters. Are you using terraform import to bring it into the Terraform state?

johnjmartin commented 1 year ago

Yes, we used terraform import to try to bring the cluster into the Terraform state. However, the import did not work completely: in step 3, Vultr still wanted to replace the resource.
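
For context, the import would have been along these lines (the resource address is assumed to match the config; the cluster ID is the one from the logs above):

    $ terraform import vultr_kubernetes.sjc c3b49514-745e-4fa5-b60e-8c3f19d128a8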

optik-aper commented 1 year ago

Thanks for the clarification. I'll test this out today.

johnjmartin commented 1 year ago

FYI, I ended up resolving this by manually removing the invalid clusters from my Terraform state file.
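
For anyone hitting the same thing: rather than editing the state file by hand, the equivalent cleanup can be done with terraform state rm (resource address assumed to match the config above):

    $ terraform state rm vultr_kubernetes.sjc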