hashicorp / terraform-provider-google

Terraform Provider for Google Cloud Platform
https://registry.terraform.io/providers/hashicorp/google/latest/docs
Mozilla Public License 2.0

GKE Node Pool doesn't get reassigned to new GKE cluster #5856

Closed: mattysweeps closed this issue 4 years ago

mattysweeps commented 4 years ago


Terraform Version

Terraform is run via the Concourse resource ljfranklin/terraform-resource.

Terraform v0.12.23
+ provider.google v2.10.0
+ provider.google-beta v3.11.0
+ provider.null v2.1.2

Affected Resource(s)

  * google_container_cluster
  * google_container_node_pool

Terraform Configuration Files

data "google_service_account" "nodepool-sa" {
  account_id = "${var.gke_cluster_sa}"
}
resource "google_container_cluster" "primary" {
    provider = "google-beta"
    name     = "${var.environment}-cluster"
    location = var.primary_region
    description = "The Kubernetes cluster for ${var.environment} environment"
    min_master_version = "${var.minimum_master_version}"
    ## Network
    network = var.vpc_link
    subnetwork = var.private_subnet_1_link
    ip_allocation_policy {
        # use_ip_aliases = true
    }
    ## Monitoring and Logging
    logging_service = "logging.googleapis.com/kubernetes"
    monitoring_service = "monitoring.googleapis.com/kubernetes"
    maintenance_policy {
        daily_maintenance_window {
            start_time = "22:00"
        }
    }
    ## to create and destroy immediately
    remove_default_node_pool = true
    initial_node_count = 1
    ## config
    addons_config {
        http_load_balancing {
            disabled = false
        }
        horizontal_pod_autoscaling {
            disabled = false
        }
        istio_config {
            disabled = false
            auth = "AUTH_NONE"
        }
        cloudrun_config {
            disabled = false
        }
    }
}

resource "google_container_node_pool" "primary" {
    name        = "${var.environment}-application-nodes"
    location    = var.primary_region
    cluster     = "${google_container_cluster.primary.name}"
    node_config {
        machine_type = "n1-standard-1"
        image_type = "COS"
        disk_type = "pd-standard"
        service_account = data.google_service_account.nodepool-sa.email
        metadata = {
            disable-legacy-endpoints = "true"
        }
        oauth_scopes = [
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring",
            "https://www.googleapis.com/auth/devstorage.read_write",
            "https://www.googleapis.com/auth/servicecontrol",
            "https://www.googleapis.com/auth/service.management.readonly",
            "https://www.googleapis.com/auth/trace.append",
        ]
    }
    management {
        auto_repair = true
        auto_upgrade = "${var.primary_node_auto_upgrade}"
    }
    initial_node_count = "${var.primary_node_initial_node_count}"
    autoscaling {
        min_node_count = "${var.primary_node_autoscaling_min_node_count}"
        max_node_count = "${var.primary_node_autoscaling_max_node_count}"
    }
}

Debug Output

Panic Output

Expected Behavior

We manually deleted the Kubernetes cluster and reran the Concourse pipeline (the Concourse Terraform resource performs a terraform apply). We expected the cluster to come back healthy, with its node pool attached.

Actual Behavior

The Kubernetes cluster was recreated, but the node pool was neither recreated nor re-linked to it, so the new cluster had no node pools.
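
One detail in the configuration above that may be relevant (not confirmed as the cause): the cluster sets provider = "google-beta" (v3.11.0), while the node pool falls back to the default google provider (v2.10.0), so the two resources are managed by different provider releases. A minimal sketch that pins both resources to the same provider, purely to rule that mismatch out when reproducing:

# Sketch only: aligns the node pool with the beta provider already used by the
# cluster, so both resources are handled by provider.google-beta v3.11.0.
resource "google_container_node_pool" "primary" {
    provider = "google-beta"
    name     = "${var.environment}-application-nodes"
    location = var.primary_region
    cluster  = "${google_container_cluster.primary.name}"

    # node_config, management, initial_node_count and autoscaling
    # stay exactly as in the configuration above.
}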

Steps to Reproduce

  1. terraform apply
  2. Manually delete k8s cluster
  3. terraform apply

Important Factoids

References

edwardmedia commented 4 years ago

@mattysweeps I can't repro the issue. I used your HCL to create a cluster, then manually deleted it. I waited a little while, since some resources were still being cleaned up, then ran terraform plan and saw the expected differences. After applying the changes, the cluster was successfully recreated. Per matt's suggestion, I am closing this issue.

ghost commented 4 years ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!