rancher / terraform-provider-rke

Terraform provider plugin for deploying Kubernetes clusters with RKE (Rancher Kubernetes Engine)
Mozilla Public License 2.0

`terraform apply` always wants to make changes #336

Open waldner opened 2 years ago

waldner commented 2 years ago

Here's a sample of the plan output:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with
the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # local_file.kubeconfig must be replaced
-/+ resource "local_file" "kubeconfig" {
      ~ content              = (sensitive) -> (sensitive) # forces replacement
      ~ id                   = "a0d6ca2b35a8f4b9324e86a42c11a0bb1041227c" -> (known after apply)
        # (3 unchanged attributes hidden)
    }

  # module.rke_vsphere.rke_cluster.cluster will be updated in-place
  ~ resource "rke_cluster" "cluster" {
        id                        = "7d30063d-14ae-4191-a445-23b6920f63ea"
      ~ kube_config_yaml          = (sensitive value)
      ~ rke_cluster_yaml          = (sensitive value)
      ~ rke_state                 = (sensitive value)
        # (24 unchanged attributes hidden)

      ~ ingress {
          - http_port       = 80 -> null
          - https_port      = 443 -> null
          - network_mode    = "hostPort" -> null
            # (5 unchanged attributes hidden)
        }

        # (6 unchanged blocks hidden)
    }

Plan: 1 to add, 1 to change, 1 to destroy.

Here's my cluster code:

terraform {
  required_providers {
    vsphere = { 
      source  = "hashicorp/vsphere"
      version = "2.1.1"
    }

    template = {
      source  = "hashicorp/template"
      version = "2.2.0"
    }

    rke = {
      source  = "rancher/rke"
      version = "1.3.0"
    }
  }
}

...

# creates the nodes
module "nodes" {
  source = "./nodes"
  vsphere_config = local.vsphere_config
  cluster_nodes = local.cluster_config["cluster_nodes"]
}

resource "rke_cluster" "cluster" {

  cluster_name = local.cluster_config["cluster_name"]

  cloud_provider {
    name = "vsphere"
    vsphere_cloud_provider {
      global {
        insecure_flag = true
      }
      virtual_center {
        datacenters = local.vsphere_config["datacenter"]
        name        = local.vsphere_config["host"]
        user        = local.vsphere_config["username"]
        password    = local.vsphere_config["password"]
        port        = local.vsphere_config["port"]
      }
      workspace {
        datacenter        = local.vsphere_config["datacenter"]
        server            = local.vsphere_config["host"]
        default_datastore = local.vsphere_config["datastore"]
        folder            = "vm/${local.vsphere_config["folder"]}"
      }
    }
  }

  authentication {
    strategy = "x509"
  }

  authorization {
    mode = "rbac"
  }

  network {
    plugin = "flannel"
  }

  ingress {
    provider = "none"
  }

  kubernetes_version = local.cluster_config["kubernetes_version"]

  dynamic "nodes" {
    for_each = module.nodes.nodes_ips
    iterator = nodeip
    content {
      address           = nodeip.value["ip_address"]
      internal_address  = nodeip.value["ip_address"]
      hostname_override = nodeip.value["name"]
      user              = module.nodes.ssh_username
      role              = local.cluster_config["cluster_nodes"][nodeip.key] == "c" ? ["controlplane", "etcd"] : (local.cluster_config["cluster_nodes"][nodeip.key] == "w" ? ["worker"] : ["controlplane", "etcd", "worker" ])
      ssh_key           = module.nodes.ssh_private_key
    }
  }
  addons = templatefile("${path.module}/files/storageclass.yaml", {})
}
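
For context, the local_file.kubeconfig that the plan wants to replace lives in the root module (the cluster code above is inside module.rke_vsphere) and looks roughly like this; a minimal sketch where the output name and filename are assumptions:

resource "local_file" "kubeconfig" {
  # kube_config_yaml is a computed, sensitive attribute; whenever the cluster
  # resource reports an in-place change it becomes (known after apply),
  # which forces this file to be replaced
  content  = module.rke_vsphere.kube_config_yaml # output name is an assumption
  filename = "${path.module}/kubeconfig.yaml"    # filename is an assumption
}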
ntkach commented 2 years ago

Yep, we get the same problem, but with the EC2 flavor of the provider.

isaackuang commented 2 years ago

It seems to be caused by ingress provider = "none". When I use

  ingress {
    provider = "none"
    http_port = 80
    https_port = 443
    network_mode = "hostPort"
  }

then the plan shows no changes.
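
That lines up with the plan diff above: the provider fills in http_port = 80, https_port = 443 and network_mode = "hostPort" in state, so a config that omits them produces a perpetual in-place update on the ingress block. Spelling the defaults out, as shown, removes that diff; with the rke_cluster diff gone, kube_config_yaml should no longer be recomputed, so the local_file.kubeconfig replacement should disappear as well.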