Closed szihai closed 1 year ago
Looks like you're not creating the cluster as private. Make sure you're setting private_cluster_config
correctly in the cluster module (not the nodepool).
Thank you @juliocc. I looked at my cluster configuration. It does set `enable_private_nodes` to true. Does it also have to disable the public endpoint?

```json
"private_cluster_config": [
  {
    "enable_private_endpoint": true,
    "enable_private_nodes": true,
    "master_global_access_config": [
      {
        "enabled": true
      }
    ],
    "master_ipv4_cidr_block": "10.126.6.128/28",
    "peering_name": "gke-ndxxx-peer",
    "private_endpoint": "10.126.6.130",
    "private_endpoint_subnetwork": "",
    "public_endpoint": "public ip"
  }
],
```
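For reference, the state output above corresponds to a `private_cluster_config` block like the following on the underlying `google_container_cluster` resource. This is only a sketch; the cluster name, location, network, and subnetwork values are placeholders, not taken from the thread:

```hcl
resource "google_container_cluster" "cluster" {
  name     = "example-private-cluster" # placeholder
  location = "us-central1"             # placeholder

  # Shared VPC network/subnetwork references (placeholders)
  network    = "projects/host-project/global/networks/shared-vpc"
  subnetwork = "projects/host-project/regions/us-central1/subnetworks/gke"

  private_cluster_config {
    enable_private_endpoint = true
    enable_private_nodes    = true
    master_ipv4_cidr_block  = "10.126.6.128/28"

    master_global_access_config {
      enabled = true
    }
  }
}
```

Note that `enable_private_nodes` lives on the cluster's `private_cluster_config`, which is why a standalone nodepool module has no field for it.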
Hi folks, are there any suggestions?
I'll try to take a look later today
I tried a few things on my end and couldn't reproduce this. Can you share the code you're using for both the cluster and nodepool?
Here is the GKE module. It was created 6 months ago.
```hcl
module "gke" {
  source  = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
  version = "~> 23.0.0"
  ...
}
```
The added nodepool code was pasted in an earlier comment.
You're mixing CFT (for the cluster) and Fabric (for the nodepool).
I recommend you either switch the cluster to use the Fabric GKE module or create the nodepool in whatever way the CFT GKE module recommends. Mixing the two is not a good idea.
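If you stay on the CFT module, additional node pools are normally declared through the cluster module's own `node_pools` variable rather than through a separate Fabric nodepool module, so they inherit the cluster's private-node settings. A rough sketch (pool names, machine types, and counts are placeholders; check the module docs for the exact supported keys in your pinned version):

```hcl
module "gke" {
  source  = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
  version = "~> 23.0.0"
  # ... existing cluster settings ...

  node_pools = [
    {
      name         = "existing-pool" # placeholder
      machine_type = "e2-standard-4" # placeholder
      min_count    = 1
      max_count    = 3
    },
    {
      # The additional pool goes here instead of in a separate Fabric module.
      name         = "extra-pool"    # placeholder
      machine_type = "e2-standard-4" # placeholder
      min_count    = 1
      max_count    = 3
    },
  ]
}
```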
I need to add a nodepool to a private GKE cluster with a shared VPC and private network. The existing nodepool works fine, since in the GKE module I have specified `enable_private_nodes = true`. However, when I add the additional nodepool, it keeps throwing this error:
Looking at the plan, it does seem to set `enable_private_nodes`. But the nodepool module has no place for it. I'm wondering how to work around this issue. Here is my nodepool configuration: