vtstanescu closed this issue 2 years ago
hi
Did you try to create some resources (volumes) on the CVO just after creation? Did you use the correct connector ID for each CVO?
The CVOs are in place and both have volumes; I used Terraform's `-target` option as a workaround. One more thing, though I haven't had the chance to test it: I think this happens when you actually have two or more connectors (e.g. one for each environment).
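For context, the `-target` workaround mentioned above amounts to applying each connector/CVO pair in its own Terraform run, so the provider only ever deals with one working environment at a time. The resource addresses below are illustrative, not taken from the reporter's actual configuration:

```shell
# Hypothetical addresses -- substitute your own resource names.
# Apply connector/CVO pair 1 on its own:
terraform apply -target=netapp-cloudmanager_connector_aws.connector1 \
                -target=netapp-cloudmanager_cvo_aws.cvo1
# Then apply connector/CVO pair 2 in a separate run:
terraform apply -target=netapp-cloudmanager_connector_aws.connector2 \
                -target=netapp-cloudmanager_cvo_aws.cvo2
```

Note that `-target` is intended by HashiCorp for exceptional circumstances, not routine use, so this only papers over the provider bug.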
You need to specify the correct client ID of each connector in the `client_id` parameter.
Do you agree with @edarzi's answer? Or is there anything we need to do?
Hello. The `client_id` of each connector was specified for each (correct) CVO environment. I haven't had time to recreate the scenario; we're now using the same connector for both environments. The scenario to reproduce is having two or more CVO environments managed by two or more connectors (e.g. one connector per CVO).
Hello. I have the same issue. When creating two CVO environments with two connectors in GCP (`client_id = netapp-cloudmanager_connector_gcp.connector1.client_id` for the first CVO and `client_id = netapp-cloudmanager_connector_gcp.connector2.client_id` for the second), it fails with "Cannot find working environment". Meanwhile, when we go to Cloud Manager we can see that both CVOs are connected to one random connector.
When we create the 2 CVO environments in separate Terraform runs, we get the correct setup: 1 CVO per connector. But if we then trigger `terraform destroy`, we get the same error: "Cannot find working environment in the list".
Do you specify the client ID for the CVO creation, in order to indicate which connector to create the CVO on?
Yes, for sure: `client_id = netapp-cloudmanager_connector_gcp.connector1.client_id` for the first CVO and `client_id = netapp-cloudmanager_connector_gcp.connector2.client_id` for the second CVO.
And in the Terraform state I can see that each CVO is registered to its own connector ID. But in Cloud Manager I see that both CVOs are connected to one connector.
Can you please share your main file?
//Connector1
resource "netapp-cloudmanager_connector_gcp" "connector1" {
  provider              = netapp-cloudmanager
  name                  = var.connector_name
  project_id            = var.project_id
  zone                  = var.zone
  subnet_id             = var.subnet_id
  network_project_id    = var.network_project_id
  company               = "NetApp"
  service_account_email = var.service_account_email
  service_account_path  = "/tmp/secret/netapp-cloudmgr-sa.json"
  account_id            = var.account_id
  associate_public_ip   = false
}
//Connector2
resource "netapp-cloudmanager_connector_gcp" "connector2" {
  provider              = netapp-cloudmanager
  name                  = var.connector_name
  project_id            = var.project_id
  zone                  = var.zone
  subnet_id             = var.subnet_id
  network_project_id    = var.network_project_id
  company               = "NetApp"
  service_account_email = var.service_account_email
  service_account_path  = "/tmp/secret/netapp-cloudmgr-sa.json"
  account_id            = var.account_id
  associate_public_ip   = false
}
//CVO HA pair 1 pointed to connector1 client_id
resource "netapp-cloudmanager_cvo_gcp" "cvoha1" {
  count                = var.cvoha_count
  provider             = netapp-cloudmanager
  name                 = var.cvoha_name
  project_id           = var.project_id
  zone                 = var.zone
  subnet_id            = var.subnet_id
  gcp_service_account  = var.ha_service_account_email
  svm_password         = data.vault_generic_secret.secretpath.data[var.svm_password]
  client_id            = netapp-cloudmanager_connector_gcp.connector1.client_id
  workspace_id         = var.workspace_id
  gcp_volume_size      = var.gcp_volume_size
  gcp_volume_size_unit = var.gcp_volume_size_unit
  gcp_volume_type      = var.gcp_volume_type //['pd-balanced', 'pd-standard', 'pd-ssd']
  instance_type        = var.instance_type
  license_type         = var.license_type_ha
  is_ha                = true
  node1_zone           = var.node1_zone
  node2_zone           = var.node2_zone
  mediator_zone        = var.mediator_zone

  vpc0_node_and_data_connectivity    = var.vpc0
  subnet0_node_and_data_connectivity = var.subnet0
  vpc0_firewall_rule_name            = var.fw_rule0
  vpc1_cluster_connectivity          = var.vpc1
  subnet1_cluster_connectivity       = var.subnet1
  vpc2_ha_connectivity               = var.vpc2
  subnet2_ha_connectivity            = var.subnet2
  vpc3_data_replication              = var.vpc3
  subnet3_data_replication           = var.subnet3
}
//CVO HA pair 2 pointed to connector2 client_id
resource "netapp-cloudmanager_cvo_gcp" "cvoha2" {
  count                = var.cvoha_count
  provider             = netapp-cloudmanager
  name                 = var.cvoha_name
  project_id           = var.project_id
  zone                 = var.zone
  subnet_id            = var.subnet_id
  gcp_service_account  = var.ha_service_account_email
  svm_password         = data.vault_generic_secret.secretpath.data[var.svm_password]
  client_id            = netapp-cloudmanager_connector_gcp.connector2.client_id
  workspace_id         = var.workspace_id
  gcp_volume_size      = var.gcp_volume_size
  gcp_volume_size_unit = var.gcp_volume_size_unit
  gcp_volume_type      = var.gcp_volume_type //['pd-balanced', 'pd-standard', 'pd-ssd']
  instance_type        = var.instance_type
  license_type         = var.license_type_ha
  is_ha                = true
  node1_zone           = var.node1_zone
  node2_zone           = var.node2_zone
  mediator_zone        = var.mediator_zone

  vpc0_node_and_data_connectivity    = var.vpc0
  subnet0_node_and_data_connectivity = var.subnet0
  vpc0_firewall_rule_name            = var.fw_rule0
  vpc1_cluster_connectivity          = var.vpc1
  subnet1_cluster_connectivity       = var.subnet1
  vpc2_ha_connectivity               = var.vpc2
  subnet2_ha_connectivity            = var.subnet2
  vpc3_data_replication              = var.vpc3
  subnet3_data_replication           = var.subnet3
}
This is a slightly simplified `main.tf` file.
From the backend logs I can see that each CVO is created on a different connector, as expected; see the personal mail for more details.
Thank you. Resolved in 22.2.0.
fixed by 755575c9af012ead22d2f6a06dd8822093dccd07
When managing multiple CVO working environments (tested for AWS) within a single Terraform run (statefile), the provider fails to find the working environments. Setup: 2 AWS regions, with 1x NetApp AWS Connector and 1x NetApp CVO AWS per region.
I'm not adding a debug log, as this is running in a production workload, but it can be easily reproduced. It feels like an issue caused by global variables/objects within the provider. I haven't tried creating multiple environments using the same NetApp Connector.
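The global-state hypothesis above can be sketched in Go (the language the provider is written in). This is a hypothetical illustration of the failure mode, not the actual provider code: if the provider keeps a single package-level client and each connector's configuration overwrites it, then every subsequent lookup runs against whichever connector was configured last, and environments created on the other connector are not found:

```go
package main

import "fmt"

// Hypothetical sketch of shared provider state -- NOT the real
// netapp-cloudmanager provider code. A single package-level variable
// holds the "active" connector client ID.
var activeClientID string

// configureConnector simulates configuring the provider client for one
// connector. With global state, the last writer wins.
func configureConnector(clientID string) {
	activeClientID = clientID
}

// findWorkingEnvironment simulates looking up a CVO working environment.
// It uses the shared global, ignoring which connector the CVO resource
// was actually declared against.
func findWorkingEnvironment(name string) string {
	return fmt.Sprintf("looking up %q via connector %s", name, activeClientID)
}

func main() {
	configureConnector("connector1-client-id")
	configureConnector("connector2-client-id") // silently overwrites connector1

	// Both lookups now go through connector2; cvoha1 lives on connector1,
	// so a real provider would report "Cannot find working environment".
	fmt.Println(findWorkingEnvironment("cvoha1"))
	fmt.Println(findWorkingEnvironment("cvoha2"))
}
```

The usual fix for this pattern is to carry a per-resource client (keyed by `client_id`) through the request context instead of a package-level singleton, which matches the symptom being resolved by a provider-side commit rather than any configuration change.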