
Adding an NKE node pool corrupts the state #672


olivierboudet commented 9 months ago

Nutanix Cluster Information

Terraform Version

Terraform v1.5.0
on linux_amd64
+ provider registry.terraform.io/nutanix/nutanix v1.9.1

Affected Resource(s)

resource "nutanix_karbon_cluster" "mycluster" {
  name       = "mycluster"
  version    = "1.25.6-0"
  storage_class_config {
    reclaim_policy = "Retain"
    volumes_config {
      file_system                = "ext4"
      flash_mode                 = true
      password                   = var.nutanix_password
      prism_element_cluster_uuid = "0005f997-7997-aa1a-5b4a-00620b377eb0"
      storage_container          = "NutanixKubernetesEngine"
      username                   = var.nutanix_user
    }
  }
  cni_config {
    node_cidr_mask_size = 24
    pod_ipv4_cidr       = "10.98.0.0/16"
    service_ipv4_cidr   = "10.99.0.0/16"
  }
  worker_node_pool {
    node_os_version = "ntnx-1.5"
    num_instances   = 1
    ahv_config {
      cpu                        = 10
      memory_mib                 = 16384
      network_uuid               = nutanix_subnet.kubernetes.id
      prism_element_cluster_uuid = "0005f997-7997-aa1a-5b4a-00620b377eb0"
    }
  }

  etcd_node_pool {
    node_os_version = "ntnx-1.5"
    num_instances   = 1
    ahv_config {
      cpu                        = 4
      memory_mib                 = 8192
      network_uuid               = nutanix_subnet.kubernetes.id
      prism_element_cluster_uuid = "0005f997-7997-aa1a-5b4a-00620b377eb0"
    }
  }
  master_node_pool {
    node_os_version = "ntnx-1.5"
    num_instances   = 1
    ahv_config {
      cpu                        = 4
      memory_mib                 = 4096
      network_uuid               = nutanix_subnet.kubernetes.id
      prism_element_cluster_uuid = "0005f997-7997-aa1a-5b4a-00620b377eb0"
    }
  }
  private_registry {
    registry_name = nutanix_karbon_private_registry.registry.name
  }

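  # Ignore drift on worker pools, which are also managed by the separate
  # nutanix_karbon_worker_nodepool resource below, and on the
  # storage class configuration.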
  lifecycle {
    ignore_changes = [
      worker_node_pool,
      storage_class_config,
    ]
  }
}

resource "nutanix_karbon_worker_nodepool" "mynodepool" {
  cluster_name    = nutanix_karbon_cluster.mycluster.name
  name            = "mynodepool"
  num_instances   = 1
  node_os_version = "ntnx-1.5"

  ahv_config {
    cpu                        = 2
    memory_mib                 = 8192
    network_uuid               = nutanix_subnet.kubernetes.id
    prism_element_cluster_uuid = "0005f997-7997-aa1a-5b4a-00620b377eb0"
  }

  labels = {
    partenaire = "mypartenaire"
  }
}

Debug Output

Expected Behavior

After adding one nutanix_karbon_worker_nodepool, it should be possible to add a second one.
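A minimal sketch of such a second pool (the resource label and name "mynodepool2" and the sizing values are assumptions, mirroring the first pool):

resource "nutanix_karbon_worker_nodepool" "mynodepool2" {
  cluster_name    = nutanix_karbon_cluster.mycluster.name
  name            = "mynodepool2"  # assumed name; any name distinct from the first pool
  num_instances   = 1
  node_os_version = "ntnx-1.5"

  ahv_config {
    cpu                        = 2
    memory_mib                 = 8192
    network_uuid               = nutanix_subnet.kubernetes.id
    prism_element_cluster_uuid = "0005f997-7997-aa1a-5b4a-00620b377eb0"
  }
}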

Actual Behavior

Instead, the terraform apply for the second node pool fails with this output:

$ terraform apply
nutanix_karbon_cluster.mycluster: Refreshing state... [id=14e1857b-46b2-4f49-410a-2e8ed0ce22e9]

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Warning: Disabled Providers: foundation, ndb. Please provide required fields in provider configuration to enable them. Refer docs.
│
│   with provider["registry.terraform.io/nutanix/nutanix"],
│   on main.tf line 19, in provider "nutanix":
│   19: provider "nutanix" {
│
╵
╷
│ Error: unable to expand node pool during flattening: nodepool name must be passed
│
│   with nutanix_karbon_cluster.mycluster,
│   on nke.tf line 1, in resource "nutanix_karbon_cluster" "mycluster":
│    1: resource "nutanix_karbon_cluster" "mycluster" {
│
╵

Steps to Reproduce

1. Run terraform apply with the nutanix_karbon_cluster configuration above.
2. Add the first nutanix_karbon_worker_nodepool resource and run terraform apply again; this succeeds.
3. Add a second nutanix_karbon_worker_nodepool resource and run terraform apply; planning fails with the error above.