erictune opened this issue 4 months ago
Hi @erictune!
To reproduce this issue, please share the complete configuration of your google_container_cluster. For sensitive data, you can just redact the value with a brief comment such as `# provided`.
This resource was not created using Terraform; I am trying to import it. So I'll provide both the gcloud describe output and the Terraform config generated by the import.
Here is the gcloud output, from `gcloud --format=json container clusters describe cluster-1 --location us-central1`:
{
"addonsConfig": {
"dnsCacheConfig": {},
"gcePersistentDiskCsiDriverConfig": {
"enabled": true
},
"horizontalPodAutoscaling": {},
"httpLoadBalancing": {},
"kubernetesDashboard": {
"disabled": true
},
"networkPolicyConfig": {
"disabled": true
}
},
"authenticatorGroupsConfig": {},
"autoscaling": {
"autoscalingProfile": "BALANCED"
},
"clusterIpv4Cidr": "10.60.0.0/14",
"createTime": "2021-08-23T19:26:56+00:00",
"currentMasterVersion": "1.27.13-gke.1000000",
"currentNodeCount": 9,
"currentNodeVersion": "1.27.13-gke.1000000",
"databaseEncryption": {
"state": "DECRYPTED"
},
"defaultMaxPodsConstraint": {
"maxPodsPerNode": "110"
},
"endpoint": "35.232.165.174",
"enterpriseConfig": {
"clusterTier": "ENTERPRISE"
},
"etag": "1054dc57-c518-49a4-9a7b-9b19accf1366",
"fleet": {
"membership": "//gkehub.googleapis.com/projects/608501761235/locations/global/memberships/cluster-1",
"preRegistered": true,
"project": "608501761235"
},
"id": "6493a2b053284f339a8ef5a510eb7fd84b297ace3a924246a41dea587899ee03",
"initialClusterVersion": "1.20.8-gke.900",
"instanceGroupUrls": [
"https://www.googleapis.com/compute/v1/projects/eric-tune-7/zones/us-central1-c/instanceGroupManagers/gke-cluster-1-default-pool-8ab0460f-grp",
"https://www.googleapis.com/compute/v1/projects/eric-tune-7/zones/us-central1-b/instanceGroupManagers/gke-cluster-1-default-pool-fe8c61e0-grp",
"https://www.googleapis.com/compute/v1/projects/eric-tune-7/zones/us-central1-d/instanceGroupManagers/gke-cluster-1-default-pool-4764333b-grp"
],
"ipAllocationPolicy": {
"clusterIpv4Cidr": "10.60.0.0/14",
"clusterIpv4CidrBlock": "10.60.0.0/14",
"clusterSecondaryRangeName": "gke-cluster-1-pods-6493a2b0",
"defaultPodIpv4RangeUtilization": 0.0088,
"servicesIpv4Cidr": "10.64.0.0/20",
"servicesIpv4CidrBlock": "10.64.0.0/20",
"servicesSecondaryRangeName": "gke-cluster-1-services-6493a2b0",
"stackType": "IPV4",
"useIpAliases": true
},
"labelFingerprint": "a9dc16a7",
"legacyAbac": {},
"location": "us-central1",
"locations": [
"us-central1-c",
"us-central1-b",
"us-central1-d"
],
"loggingConfig": {
"componentConfig": {
"enableComponents": [
"SYSTEM_COMPONENTS",
"WORKLOADS"
]
}
},
"loggingService": "logging.googleapis.com/kubernetes",
"maintenancePolicy": {
"resourceVersion": "e3b0c442"
},
"masterAuth": {
"clientCertificateConfig": {},
"clusterCaCertificate": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLakNDQWhLZ0F3SUJBZ0lRVzV5V0JESUI3eDg0RTFDVEVLSzdLakFOQmdrcWhraUc5dzBCQVFzRkFEQXYKTVMwd0t3WURWUVFERXlSalpEZzRZelpoWWkxak9XUTNMVFJqT0dNdE9UUTFOQzA1TUdGa1lqbG1aR0ptTldVdwpIaGNOTWpFd09ESXpNVGd5TmpVMldoY05Nall3T0RJeU1Ua3lOalUyV2pBdk1TMHdLd1lEVlFRREV5UmpaRGc0Cll6WmhZaTFqT1dRM0xUUmpPR010T1RRMU5DMDVNR0ZrWWpsbVpHSm1OV1V3Z2dFaU1BMEdDU3FHU0liM0RRRUIKQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUNJczI3T0pQTC9VSmlERFZkUzZvNVJqZ2FNQUhSMjJqSjI0RG9MbnNqcApvS1BhRGdiN21rZTdON3dlVnlvdWFtdnJEcWJBcTVIdWdCZnFQSzBxanQ0OUp3WGhSOWwvM1JPVm5icHhseE1oCkl4Q1FQZTBXRS9PWHpXV0VDbXdXSXBMRUFjOXk2VmI1Vkh6MFhDQ1FxTGFteTZzVllxays1dndLRHlycjlBRWwKYjdVNUd4Zmh1dWNEeEJpTGEva0cwRVdxS0s4V1JrdVlHcFVETUdyVFowRnMyR2cwc0FjSkFDRWdmRW9NVkNYVgpDMWxEaVAvcHk5bUVWTVFnU0d0OWpWTGVBWUNlb245SnY5L0hCaHRycXRQM0gxNFVwYmgreW9paFdhamY4MG43Ck9VQTVvR1F0cUJac1YzSTFQRjdjSzNhQ3VkVXVXaEFjVHZaOTN0L2l0SDdyQWdNQkFBR2pRakJBTUE0R0ExVWQKRHdFQi93UUVBd0lDQkRBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTQTZVbUZtTkNTa2Z4Lwo3TzNvMHlrQzZVRWZjVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBSEdRSDBuc2NSUkZBOE93alpkVDZIQzNWCis1U3RxNk9GOWphb3cwYktCY0ZEMDVWM0gvL3Vkem9uUU52Vm9HZTd0TGNhL0EyZGlOdEt1SHFUY3FPSGdjZGsKMldKbklScXhIR0gzak9PclBYTEFUQmorWEE0M25TNXJEWmQ2N0lGdG01cExnU0drbkNpWnNEWFJTZWZFNy85Two5QnBXU3lBYnZmM3IzeUtBSmgwVFhxWDlVb2hiNnBHUkJhYlVoN09BUFEzYUR3T2ZZNjZELzcxRmRLTkVFTDNRCnZYM04zZkJ6OVo0L1FOaW9rMnJWVW9ldGdQKzkxNnMrbXNYT2JUOHdTRlhybTJGd3NjQU1Xb1p5MUhXVTJRdG4KQXdXZzBEakF3cjdEQUVTdEd2cnRIbXFDemNVY1QyNXJ4QzFHVkFJWTh1enBzMUxFMWJ5UjBKRTZkVlZZZlE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
},
"masterAuthorizedNetworksConfig": {
"gcpPublicCidrsAccessEnabled": true
},
"monitoringConfig": {
"componentConfig": {
"enableComponents": [
"SYSTEM_COMPONENTS"
]
},
"managedPrometheusConfig": {
"enabled": true
}
},
"monitoringService": "monitoring.googleapis.com/kubernetes",
"name": "cluster-1",
"network": "default",
"networkConfig": {
"datapathProvider": "LEGACY_DATAPATH",
"defaultSnatStatus": {},
"network": "projects/eric-tune-7/global/networks/default",
"serviceExternalIpsConfig": {
"enabled": true
},
"subnetwork": "projects/eric-tune-7/regions/us-central1/subnetworks/default"
},
"nodeConfig": {
"diskSizeGb": 100,
"diskType": "pd-standard",
"imageType": "COS_CONTAINERD",
"machineType": "e2-medium",
"metadata": {
"disable-legacy-endpoints": "true"
},
"oauthScopes": [
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/trace.append"
],
"serviceAccount": "default",
"shieldedInstanceConfig": {
"enableIntegrityMonitoring": true
}
},
"nodePoolDefaults": {
"nodeConfigDefaults": {}
},
"nodePools": [
{
"autoscaling": {},
"config": {
"diskSizeGb": 100,
"diskType": "pd-standard",
"imageType": "COS_CONTAINERD",
"machineType": "e2-medium",
"metadata": {
"disable-legacy-endpoints": "true"
},
"oauthScopes": [
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/trace.append"
],
"serviceAccount": "default",
"shieldedInstanceConfig": {
"enableIntegrityMonitoring": true
}
},
"etag": "6c450ad5-52af-4193-959c-d80500ba4ab4",
"initialNodeCount": 3,
"instanceGroupUrls": [
"https://www.googleapis.com/compute/v1/projects/eric-tune-7/zones/us-central1-c/instanceGroupManagers/gke-cluster-1-default-pool-8ab0460f-grp",
"https://www.googleapis.com/compute/v1/projects/eric-tune-7/zones/us-central1-b/instanceGroupManagers/gke-cluster-1-default-pool-fe8c61e0-grp",
"https://www.googleapis.com/compute/v1/projects/eric-tune-7/zones/us-central1-d/instanceGroupManagers/gke-cluster-1-default-pool-4764333b-grp"
],
"locations": [
"us-central1-c",
"us-central1-b",
"us-central1-d"
],
"management": {
"autoRepair": true,
"autoUpgrade": true
},
"maxPodsConstraint": {
"maxPodsPerNode": "110"
},
"name": "default-pool",
"networkConfig": {
"podIpv4CidrBlock": "10.60.0.0/14",
"podIpv4RangeUtilization": 0.0088,
"podRange": "gke-cluster-1-pods-6493a2b0"
},
"podIpv4CidrSize": 24,
"selfLink": "https://container.googleapis.com/v1/projects/eric-tune-7/locations/us-central1/clusters/cluster-1/nodePools/default-pool",
"status": "RUNNING",
"upgradeSettings": {
"maxSurge": 1,
"strategy": "SURGE"
},
"version": "1.27.13-gke.1000000"
}
],
"notificationConfig": {
"pubsub": {}
},
"privateClusterConfig": {
"privateEndpoint": "10.128.15.229",
"publicEndpoint": "#redacted"
},
"releaseChannel": {
"channel": "REGULAR"
},
"selfLink": "https://container.googleapis.com/v1/projects/eric-tune-7/locations/us-central1/clusters/cluster-1",
"servicesIpv4Cidr": "10.64.0.0/20",
"shieldedNodes": {
"enabled": true
},
"status": "RUNNING",
"subnetwork": "default",
"verticalPodAutoscaling": {
"enabled": true
},
"zone": "us-central1"
}
And here is the generated Terraform:
# __generated__ by Terraform
# Please review these resources and move them into your main configuration files.
# __generated__ by Terraform
resource "google_container_cluster" "cluster_1" {
allow_net_admin = null
cluster_ipv4_cidr = "10.60.0.0/14"
datapath_provider = "LEGACY_DATAPATH"
default_max_pods_per_node = 110
deletion_protection = true
description = null
enable_autopilot = null
enable_cilium_clusterwide_network_policy = false
enable_intranode_visibility = false
enable_kubernetes_alpha = false
enable_l4_ilb_subsetting = false
enable_legacy_abac = false
enable_shielded_nodes = true
enable_tpu = false
initial_node_count = 0
location = "us-central1"
logging_service = "logging.googleapis.com/kubernetes"
min_master_version = null
monitoring_service = "monitoring.googleapis.com/kubernetes"
name = "cluster-1"
network = "projects/eric-tune-7/global/networks/default"
networking_mode = "VPC_NATIVE"
node_locations = ["us-central1-b", "us-central1-c", "us-central1-d"]
node_version = "1.27.13-gke.1000000"
private_ipv6_google_access = null
project = "eric-tune-7"
remove_default_node_pool = null
resource_labels = {}
subnetwork = "projects/eric-tune-7/regions/us-central1/subnetworks/default"
addons_config {
dns_cache_config {
enabled = false
}
gce_persistent_disk_csi_driver_config {
enabled = true
}
horizontal_pod_autoscaling {
disabled = false
}
http_load_balancing {
disabled = false
}
network_policy_config {
disabled = true
}
}
authenticator_groups_config {
security_group = ""
}
cluster_autoscaling {
autoscaling_profile = "BALANCED"
enabled = false
}
database_encryption {
key_name = null
state = "DECRYPTED"
}
default_snat_status {
disabled = false
}
fleet {
project = "608501761235"
}
ip_allocation_policy {
cluster_ipv4_cidr_block = "10.60.0.0/14"
cluster_secondary_range_name = "gke-cluster-1-pods-6493a2b0"
services_ipv4_cidr_block = "10.64.0.0/20"
services_secondary_range_name = "gke-cluster-1-services-6493a2b0"
stack_type = "IPV4"
}
logging_config {
enable_components = ["SYSTEM_COMPONENTS", "WORKLOADS"]
}
master_auth {
client_certificate_config {
issue_client_certificate = false
}
}
monitoring_config {
enable_components = ["SYSTEM_COMPONENTS"]
managed_prometheus {
enabled = true
}
}
network_policy {
enabled = false
provider = "PROVIDER_UNSPECIFIED"
}
node_config {
boot_disk_kms_key = null
disk_size_gb = 100
disk_type = "pd-standard"
enable_confidential_storage = false
guest_accelerator = []
image_type = "COS_CONTAINERD"
labels = {}
local_ssd_count = 0
logging_variant = "DEFAULT"
machine_type = "e2-medium"
metadata = {
disable-legacy-endpoints = "true"
}
min_cpu_platform = null
node_group = null
oauth_scopes = ["https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring", "https://www.googleapis.com/auth/service.management.readonly", "https://www.googleapis.com/auth/servicecontrol", "https://www.googleapis.com/auth/trace.append"]
preemptible = false
resource_labels = {}
resource_manager_tags = {}
service_account = "default"
spot = false
tags = []
shielded_instance_config {
enable_integrity_monitoring = true
enable_secure_boot = false
}
}
node_pool {
initial_node_count = 3
max_pods_per_node = 110
name = "default-pool"
name_prefix = null
node_count = 3
node_locations = ["us-central1-b", "us-central1-c", "us-central1-d"]
version = "1.27.13-gke.1000000"
management {
auto_repair = true
auto_upgrade = true
}
network_config {
create_pod_range = false
enable_private_nodes = false
pod_ipv4_cidr_block = "10.60.0.0/14"
pod_range = "gke-cluster-1-pods-6493a2b0"
}
node_config {
boot_disk_kms_key = null
disk_size_gb = 100
disk_type = "pd-standard"
enable_confidential_storage = false
guest_accelerator = []
image_type = "COS_CONTAINERD"
labels = {}
local_ssd_count = 0
logging_variant = "DEFAULT"
machine_type = "e2-medium"
metadata = {
disable-legacy-endpoints = "true"
}
min_cpu_platform = null
node_group = null
oauth_scopes = ["https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring", "https://www.googleapis.com/auth/service.management.readonly", "https://www.googleapis.com/auth/servicecontrol", "https://www.googleapis.com/auth/trace.append"]
preemptible = false
resource_labels = {}
resource_manager_tags = {}
service_account = "default"
spot = false
tags = []
shielded_instance_config {
enable_integrity_monitoring = true
enable_secure_boot = false
}
}
upgrade_settings {
max_surge = 1
max_unavailable = 0
strategy = "SURGE"
}
}
node_pool_defaults {
node_config_defaults {
logging_variant = "DEFAULT"
}
}
notification_config {
pubsub {
enabled = false
topic = null
}
}
}
Confirmed issue!
With a basic configuration or with the user's code, the following error is returned by terraform apply, or just by importing the resource:
│ Error: Conflicting configuration arguments
│
│ with google_container_cluster.cluster_1,
│ on generated.tf line 1:
│ (source code not available)
│
│ "ip_allocation_policy": conflicts with cluster_ipv4_cidr
╵
╷
│ Error: Conflicting configuration arguments
│
│ with google_container_cluster.cluster_1,
│ on generated.tf line 2:
│ (source code not available)
│
│ "cluster_ipv4_cidr": conflicts with ip_allocation_policy
╵
A member of the GKE network team suggested going through this issue: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/issues/349#issuecomment-1746753427
Terraform Version & Provider Version(s)
Terraform v1.5.7 on linux_amd64
Affected Resource(s)
google_container_cluster
Terraform Configuration
Debug Output
https://gist.github.com/erictune/cdb5e96ee0828fc6cb01b2556abd008c
Expected Behavior
Import should not have produced these error messages about conflicting fields. If a resource exists, then in principle it should be possible to generate terraform that creates the same resource, right?
I think that in this case, the ip_allocation_policy block could have been left unspecified. Then the generated config would match my cluster.
Or maybe cluster_ipv4_cidr should be unset.
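The second option can be sketched as a hand-edit to the generated file: delete the conflicting cluster_ipv4_cidr argument and keep the ip_allocation_policy block, which already expresses the same pod CIDR. This is an untested sketch trimmed to the relevant arguments, not the full generated resource:

```hcl
resource "google_container_cluster" "cluster_1" {
  name     = "cluster-1"
  location = "us-central1"

  # cluster_ipv4_cidr = "10.60.0.0/14"  <- removed: it conflicts with
  # ip_allocation_policy, and the pod CIDR is already captured by
  # cluster_ipv4_cidr_block below.

  ip_allocation_policy {
    cluster_ipv4_cidr_block       = "10.60.0.0/14"
    cluster_secondary_range_name  = "gke-cluster-1-pods-6493a2b0"
    services_ipv4_cidr_block      = "10.64.0.0/20"
    services_secondary_range_name = "gke-cluster-1-services-6493a2b0"
    stack_type                    = "IPV4"
  }

  # ... remaining generated arguments unchanged ...
}
```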
Actual Behavior
generated.tf was generated. ip_allocation_block was safe in this case.
Steps to reproduce
Get read access to my project by contacting me via corporate gmail or chat. Try to import the cluster resource by running
terraform plan -generate-config-out=generated.tf
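For reference, `-generate-config-out` requires an import block (supported since Terraform 1.5, matching the v1.5.7 above) to tell Terraform which resource to generate config for. A minimal sketch, assuming the project/location/name shown in the gcloud output:

```hcl
# Hypothetical import block used to drive
# `terraform plan -generate-config-out=generated.tf`.
import {
  to = google_container_cluster.cluster_1
  id = "projects/eric-tune-7/locations/us-central1/clusters/cluster-1"
}
```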
Important Factoids
My cluster's createTime is 2021; therefore, it may predate the ip_allocation_policy block.
References
No response
b/344948541