Open awx-fuyuanchu opened 4 weeks ago
TL;DR

The version we are using is beta-private-cluster v31.0.0. Recently, we created several GKE clusters with `monitoring_enable_managed_prometheus` set to `false`. However, managed Prometheus was enabled, ignoring the `monitoring_enable_managed_prometheus` flag.

This is the `monitoring_config` from the Terraform state:
```hcl
monitoring_config {
  enable_components = [
    "SYSTEM_COMPONENTS",
    "HPA",
    "POD",
    "DAEMONSET",
    "DEPLOYMENT",
    "STATEFULSET",
    "STORAGE",
    "CADVISOR",
    "KUBELET",
  ]

  advanced_datapath_observability_config {
    enable_metrics = false
    enable_relay   = false
    relay_mode     = "DISABLED"
  }

  managed_prometheus {
    enabled = true
  }
}
```
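To confirm what was actually applied, the setting can also be read back from the cluster and from state. These commands are a sketch: `CLUSTER_NAME` and `REGION` are placeholders, and the state address assumes the module's default cluster resource name.

```shell
# Read the applied value straight from the GKE API
# (placeholders: CLUSTER_NAME, REGION).
gcloud container clusters describe CLUSTER_NAME \
  --region REGION \
  --format='value(monitoringConfig.managedPrometheusConfig.enabled)'

# Or inspect the cluster resource recorded in Terraform state; the address
# below assumes the module's default resource name.
terraform state show 'module.gke.google_container_cluster.primary'
```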
Expected behavior

Managed Prometheus should not be enabled when we set `monitoring_enable_managed_prometheus` to `false`.
Observed behavior

Managed Prometheus is enabled when creating new clusters with `monitoring_enable_managed_prometheus` set to `false`.
Terraform Configuration

```hcl
module "gke" {
  source = "github.com/terraform-google-modules/terraform-google-kubernetes-engine.git//modules/beta-private-cluster?ref=v31.0.0"

  regional                   = true
  enable_private_nodes       = true
  network_policy             = var.network_policy
  create_service_account     = false
  remove_default_node_pool   = true
  node_metadata              = "GKE_METADATA_SERVER" # This is to support workload identity
  enable_pod_security_policy = var.enable_pod_security_policy
  project_id                 = var.project_id
  name                       = local.cluster_name
  service_account            = local.gke_service_account
  identity_namespace         = local.identity_namespace
  region                     = var.region
  network                    = var.network
  subnetwork                 = var.subnetwork
  network_project_id         = var.network_project_id
  ip_range_pods              = var.ip_range_pods
  ip_range_services          = var.ip_range_services
  enable_private_endpoint    = var.enable_private_endpoint
  node_pools                 = var.node_pools
  node_pools_labels          = var.node_pools_labels
  node_pools_oauth_scopes    = var.node_pools_oauth_scopes
  node_pools_metadata        = var.node_pools_metadata
  node_pools_taints          = var.node_pools_taints
  node_pools_tags            = local.node_pools_tags
  kubernetes_version         = var.kubernetes_version
  master_ipv4_cidr_block     = var.master_cidr_block
  default_max_pods_per_node  = var.default_max_pods_per_node

  # Whether L4ILB Subsetting is enabled for this cluster.
  enable_l4_ilb_subsetting = var.enable_l4_ilb_subsetting

  # disable external access if we use the master's internal IP as the endpoint of the cluster
  master_authorized_networks = local.master_authorized_networks

  istio      = var.enable_istio
  istio_auth = "AUTH_MUTUAL_TLS"
  cloudrun   = var.enable_cloudrun

  release_channel        = local.release_channel
  maintenance_start_time = var.maintenance_start_time
  maintenance_end_time   = var.maintenance_end_time
  maintenance_recurrence = var.maintenance_recurrence
  maintenance_exclusions = var.maintenance_exclusions

  authenticator_security_group = var.authenticator_security_group
  logging_service              = var.logging_service
  logging_enabled_components   = var.logging_enabled_components
  cluster_autoscaling          = local.cluster_autoscaling
  notification_config_topic    = var.notification_config_topic
  workload_config_audit_mode   = var.workload_config_audit_mode
  network_tags                 = var.auto_provisioning_network_tags

  security_posture_vulnerability_mode = var.security_posture_vulnerability_mode
  security_posture_mode               = var.security_posture_mode
  firewall_priority                   = var.firewall_priority
  firewall_inbound_ports              = var.firewall_inbound_ports
  cluster_resource_labels             = local.cluster_resource_labels
  gce_pd_csi_driver                   = var.gce_pd_csi_driver
  datapath_provider                   = var.datapath_provider
  dns_cache                           = var.dns_cache

  monitoring_enable_managed_prometheus = false

  enable_resource_consumption_export = var.enable_resource_consumption_export
  resource_usage_export_dataset_id   = var.resource_usage_export_dataset_id

  # set the output endpoint to the master's internal IP
  deploy_using_private_endpoint = var.deploy_using_private_endpoint

  # for workload cost insights
  enable_cost_allocation = var.enable_cost_allocation

  gateway_api_channel = var.gateway_api_channel
  deletion_protection = var.deletion_protection
}
```
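For context on where the flag might get lost: this is only a guess at the module's internals (not the actual v31.0.0 source), but one pattern that produces exactly this symptom is emitting the `managed_prometheus` block conditionally. When the block is omitted from the request, the GKE API applies its own default, which is to enable managed Prometheus for new clusters on recent versions, so the flag appears ignored:

```hcl
# Hypothetical sketch of the failure mode, not the module's actual code.
# var.monitoring_enabled_components is an illustrative variable name.
monitoring_config {
  enable_components = var.monitoring_enabled_components

  # If the block is only emitted when the flag is true...
  dynamic "managed_prometheus" {
    for_each = var.monitoring_enable_managed_prometheus ? [1] : []
    content {
      enabled = true
    }
  }
  # ...then with the flag set to false no block is sent at all, and the GKE
  # API falls back to its own default (enabled for new clusters), leaving
  # managed_prometheus { enabled = true } in state.
}
```

Always sending the block with an explicit value, e.g. `managed_prometheus { enabled = var.monitoring_enable_managed_prometheus }`, would avoid the server-side default.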
Terraform Version

```
> terraform version
Terraform v1.4.0
on darwin_amd64
```
Additional information

No response
Related to https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/1894
I'll try to create a fresh PR for this + test.