Hello,
I think this issue is related to the FlexibleEngine API backend rather than the Terraform provider, since Terraform just sends the request to the API.
@qukuijin1989 can you please check this issue and confirm whether it is related to the already identified node pool (certificate) issue on the Flexible Engine backend side?
Hello @qukuijin1989, our prospect Gerard Bonneto has been waiting for an answer for 2 weeks now.
Hi @EmmanuelB28, the problem is that a random node pool does not spread the nodes across different AZs, right?
Hello, yes, this is the issue. When I add a node to a "random_AZ" cluster from the Flexible Engine portal, it works as expected: the node is created in a random AZ. Whereas with Terraform, all the nodes end up in the same AZ (AZ1).
@qukuijin1989 It is important that our client Gérard gets a precise and fast answer to his request, because his project is currently blocked. Thanks a lot for your help.
Hi @EmmanuelB28, for a node pool with random AZ, when the user creates several nodes at one time, all the nodes created in the same batch will be placed in the same AZ (a random one of the 3 AZs). For example: if the user creates 5 nodes at the same time, all 5 nodes will be placed in AZ1; the next time they create 5 nodes at the same time, they may all be placed in AZ2.
@qukuijin1989, this is exactly my concern. So what are your recommendations for building an HA CCE cluster with Terraform only?
For an HA cluster: for the masters you can have 3 nodes in different AZs, so the control plane is HA. For the node pools, you can manually set which AZ the nodes are created in (see the sketch below). I will also check with R&D whether they can deliver a feature to do this automatically.
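For illustration, here is a minimal sketch of what manually pinning AZs could look like with the flexibleengine provider, based on the resources already shown in the plan output further down in this issue. The AZ names (eu-west-0a/0b/0c), node counts and volume sizes are placeholders, not values taken from this issue.

```hcl
# Sketch only: spread the masters and the worker nodes across AZs explicitly.
# AZ names, counts and volume sizes below are placeholders.
resource "flexibleengine_cce_cluster_v3" "cluster_k8s" {
  name                   = "staging"
  cluster_type           = "VirtualMachine"
  flavor_id              = "cce.s2.large"
  cluster_version        = "v1.19.8-r0"
  container_network_type = "vpc-router"
  vpc_id                 = var.vpc_id
  subnet_id              = var.subnet_id

  # One masters block per AZ gives an HA control plane.
  masters {
    availability_zone = "eu-west-0a"
  }
  masters {
    availability_zone = "eu-west-0b"
  }
  masters {
    availability_zone = "eu-west-0c"
  }
}

# One node pool per AZ instead of a single pool with availability_zone = "random".
resource "flexibleengine_cce_node_pool_v3" "nodepool" {
  for_each = toset(["eu-west-0a", "eu-west-0b", "eu-west-0c"])

  cluster_id         = flexibleengine_cce_cluster_v3.cluster_k8s.id
  name               = "general-purpose-${each.key}"
  availability_zone  = each.key
  flavor_id          = "s3.2xlarge.2"
  os                 = "EulerOS 2.5"
  key_pair           = "KeyPair-staging"
  initial_node_count = 2
  scall_enable       = true
  min_node_count     = 2
  max_node_count     = 4

  root_volume {
    size       = 40
    volumetype = "SATA"
  }
  data_volumes {
    size       = 100
    volumetype = "SATA"
  }
}
```

Splitting the workers into one pool per AZ keeps the spread deterministic instead of relying on the backend's random placement, and each pool then scales within its own AZ.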
Hello @gerardbonneto,
As a workaround you can also use the CCE Terraform module to create a multi-AZ cluster and attach a node_pool_list (or node_list) with a different AZ for each element of the list. There is an example in the README; a rough sketch follows below.
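Purely to illustrate that idea, here is a hedged sketch of such a module call. The module source path and the exact shape of the node_pool_list elements are assumptions on my part; the module's README remains the reference for the real interface.

```hcl
# Hypothetical module call: the source and most variable names below are
# placeholders; only node_pool_list / node_list come from the comment above.
module "cce" {
  source = "./modules/cce" # placeholder: use the actual CCE module source

  cluster_name = "staging"
  vpc_id       = var.vpc_id
  subnet_id    = var.subnet_id

  # One element per AZ so the nodes end up evenly spread.
  node_pool_list = [
    { name = "np-az1", availability_zone = "eu-west-0a", initial_node_count = 2 },
    { name = "np-az2", availability_zone = "eu-west-0b", initial_node_count = 2 },
    { name = "np-az3", availability_zone = "eu-west-0c", initial_node_count = 2 },
  ]
}
```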
I'm going to close this issue because there have been no updates for 20 days. If you have found a problem that seems similar to this one, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Hi there,
Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.
Terraform Version
Terraform v1.0.8 on linux_amd64
Affected Resource(s)
- flexibleengine_cce_node_pool_v3
Expected Behavior
I have created a node pool within a multi-AZ cluster. The node pool has been specified with the availability_zone param set to "random".
What should have happened? The nodes should be randomly and evenly distributed among the cluster AZs, as specified in the documentation.
Actual Behavior
All the nodes are located in the same AZ.
Steps to Reproduce
```
# module.cluster.module.cce_cluster.flexibleengine_cce_cluster_v3.cluster_k8s will be created
resource "flexibleengine_cce_cluster_v3" "cluster_k8s" {
    authentication_mode    = "rbac"
    billing_mode           = (known after apply)
    certificate_clusters   = (known after apply)
    certificate_users      = (known after apply)
    cluster_type           = "VirtualMachine"
    cluster_version        = "v1.19.8-r0"
    container_network_cidr = (known after apply)
    container_network_type = "vpc-router"
    description            = "Trustlane Infra"
    eip                    = (known after apply)
    extend_param           = { ... }
    external_apig_endpoint = (known after apply)
    external_endpoint      = (known after apply)
    flavor_id              = "cce.s2.large"
    highway_subnet_id      = (known after apply)
    id                     = (known after apply)
    internal_endpoint      = (known after apply)
    kube_proxy_mode        = "ipvs"
    name                   = "staging"
    region                 = "eu-west-0"
    security_group_id      = (known after apply)
    status                 = (known after apply)
    subnet_id              = (known after apply)
    vpc_id                 = (known after apply)

    masters { ... }
    masters { ... }
    masters { ... }
}

# module.cluster.module.cce_nodepool["nodepool0"].flexibleengine_cce_node_pool_v3.nodepool will be created
resource "flexibleengine_cce_node_pool_v3" "nodepool" {
    availability_zone        = "random"
    billing_mode             = (known after apply)
    cluster_id               = (known after apply)
    flavor_id                = "s3.2xlarge.2"
    id                       = (known after apply)
    initial_node_count       = 6
    key_pair                 = "KeyPair-staging"
    labels                   = { ... }
    max_node_count           = 12
    min_node_count           = 6
    name                     = "cluster-general-purpose"
    os                       = "EulerOS 2.5"
    priority                 = 1
    region                   = (known after apply)
    scale_down_cooldown_time = 60
    scall_enable             = true
    status                   = (known after apply)
    subnet_id                = (known after apply)
    type                     = "vm"

    data_volumes { ... }
    root_volume { ... }
}
```