digitalocean / terraform-provider-digitalocean

Terraform DigitalOcean provider
https://registry.terraform.io/providers/digitalocean/digitalocean/latest/docs
Mozilla Public License 2.0

Changing k8s default node pool autoscaling parameters causes TF to lose all information about resources within the cluster #790

Open LaurisJakobsons opened 2 years ago

LaurisJakobsons commented 2 years ago

Bug Report

Describe the bug

Whenever changes are made to the (default) node pool autoscaling parameters within the digitalocean_kubernetes_cluster resource, terraform loses all the information about resources that were created within the cluster. This breaks the terraform state to the point where everything needs to be destroyed & re-applied to fix it.

Let's say I've created a digitalocean_kubernetes_cluster resource and, along with it, added several kubernetes resources to the same cluster using terraform. If I then change the autoscaling parameters of the default node pool within the digitalocean_kubernetes_cluster resource, terraform loses all information about the kubernetes resources that were created in the cluster and tries to create them again, resulting in numerous resource "already exists" errors: the resources are still present in the cluster, but terraform no longer knows about them.

(NOTE: Autoscaling changes are applied correctly)

Affected Resource(s)

digitalocean_kubernetes_cluster

Expected Behavior

Node pool autoscaling changes should be applied without causing terraform to lose information about other resources created within the cluster.

Actual Behavior

All the kubernetes resources created within the cluster are lost from terraform state.

Steps to Reproduce

  1. Create a digitalocean_kubernetes_cluster
  2. Add resources within the cluster using kubernetes provider
  3. Edit digitalocean_kubernetes_cluster node pool autoscaling parameters & apply changes

Terraform Configuration Files

resource "digitalocean_kubernetes_cluster" "primary" {
  name     = var.cluster_name
  region   = var.cluster_region
  version  = data.digitalocean_kubernetes_versions.current.latest_version
  vpc_uuid = digitalocean_vpc.cluster_vpc.id

  node_pool {
    name       = "${var.cluster_name}-node-pool"
    size       = var.worker_size
    auto_scale = true
    min_nodes  = 1
    max_nodes  = var.max_worker_count ## Issue occurs when this value is changed and re-applied
    tags       = [local.cluster_id_tag]
  }
}
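
The configuration above does not show how the kubernetes provider is wired to the cluster, so here is a minimal sketch of a typical setup covering step 2 of the reproduction; the provider block follows the pattern from the provider docs, and kubernetes_namespace.example is purely illustrative rather than part of the original report:

provider "kubernetes" {
  host  = digitalocean_kubernetes_cluster.primary.endpoint
  token = digitalocean_kubernetes_cluster.primary.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.primary.kube_config[0].cluster_ca_certificate
  )
}

# Illustrative in-cluster resource managed via the kubernetes provider
resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}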

Additional context

Using Terraform 1.1.5, digitalocean provider 2.17.1, and kubernetes provider 2.8.0

mkjmdski commented 2 years ago

This is related to https://github.com/digitalocean/terraform-provider-digitalocean/issues/424: terraform loses the provider connection data when you change the node pool size. The lifecycle wants to destroy and recreate the cluster, so at that point the connection data the kubernetes provider retrieves from the digitalocean_kubernetes_cluster resource is unknown (it is only known after apply), which results in the error during the terraform run.
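
Assuming the root cause is that the kubernetes provider's credentials become unknown once the cluster resource is planned for replacement, one possible mitigation (a sketch, not a confirmed fix for this issue) is to feed the provider from the digitalocean_kubernetes_cluster data source rather than from the managed resource, so the credentials stay known during the plan:

# Sketch: look the cluster up by name instead of referencing the managed resource,
# so the kubernetes provider's connection data is not tied to a resource that may
# be planned for replacement.
data "digitalocean_kubernetes_cluster" "primary" {
  name = var.cluster_name
}

provider "kubernetes" {
  host  = data.digitalocean_kubernetes_cluster.primary.endpoint
  token = data.digitalocean_kubernetes_cluster.primary.kube_config[0].token
  cluster_ca_certificate = base64decode(
    data.digitalocean_kubernetes_cluster.primary.kube_config[0].cluster_ca_certificate
  )
}

Note that this assumes the cluster already exists before the data source is read, so it does not help on the very first apply.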