@hkantare @kavya498 Can you help get this merged please?
We are reviewing the PR and it will be part of next week's Terraform provider release.
@ocofaigh we reviewed the PR and it creates a new folder for each action. If a user runs terraform plan or apply multiple times, each run creates its own directory, so the folder structure grows on every run and never gets cleaned up.
Instead of generating a new folder, I think the better approach is to add a mutex (a Terraform-level mutex) on the data source, similar to how we lock on the cluster ID so that the provider executes the data sources serially: https://github.com/IBM-Cloud/terraform-provider-ibm/blob/21d604abd854c46e3c61626fbe72f1c22962f07d/ibm/resource_ibm_is_instance_group_manager_policy.go#L132
@hkantare This is intended behaviour; otherwise concurrent data lookups on the same cluster will fail. With the current behaviour, when this process happens concurrently you get the error:
Error downloading the cluster config [c3en2rss08p3fuce3dk0]: open /Users/conall/3b823a3d3a844d109ac91ffba785f0b3eef39cb03dd513926c4e396a81f755e7_c3en2rss08p3fuce3dk0_k8sconfig/config.zip: no such file or directory
I guess another approach could be to add a lock on that process so it cannot be executed concurrently?
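For illustration, a minimal sketch of that idea in Go: a package-level mutex that serialises the download step so it cannot run concurrently. The package name, the downloadClusterConfig helper, and the mutex name are hypothetical stand-ins, not the provider's actual code:

package clusterconfig

import "sync"

// configDownloadMu serialises cluster config downloads so that two data
// source reads cannot race on the same config.zip path.
var configDownloadMu sync.Mutex

// downloadClusterConfig is a hypothetical helper standing in for the
// provider's actual download/unzip logic.
func downloadClusterConfig(clusterID string) error {
	configDownloadMu.Lock()
	defer configDownloadMu.Unlock()

	// fetch the kubeconfig zip for clusterID and unpack it here
	return nil
}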
I think we will go with another approach: adding the mutex at the Terraform level, using https://github.com/IBM-Cloud/terraform-provider-ibm/blob/b6b22f98829ac15c46b6c96166d52a5af4313067/ibm/internal/mutexkv/mutexkv.go
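For context, that helper follows the keyed-mutex pattern common to Terraform providers: a map of mutexes indexed by an arbitrary key, so callers that agree on a key are serialised while unrelated keys proceed in parallel. A rough sketch of the pattern (not necessarily the exact contents of the linked file):

package mutexkv

import (
	"log"
	"sync"
)

// MutexKV is a simple key/value store of mutexes, used to serialise
// operations that share knowledge of the key they must lock on.
type MutexKV struct {
	lock  sync.Mutex
	store map[string]*sync.Mutex
}

// Lock acquires the mutex for the given key, blocking until it is available.
func (m *MutexKV) Lock(key string) {
	log.Printf("[DEBUG] Locking %q", key)
	m.get(key).Lock()
}

// Unlock releases the mutex for the given key.
func (m *MutexKV) Unlock(key string) {
	log.Printf("[DEBUG] Unlocking %q", key)
	m.get(key).Unlock()
}

// get returns the mutex for the given key, creating it if necessary.
func (m *MutexKV) get(key string) *sync.Mutex {
	m.lock.Lock()
	defer m.lock.Unlock()
	mutex, ok := m.store[key]
	if !ok {
		mutex = &sync.Mutex{}
		m.store[key] = mutex
	}
	return mutex
}

// NewMutexKV returns a ready-to-use MutexKV.
func NewMutexKV() *MutexKV {
	return &MutexKV{store: make(map[string]*sync.Mutex)}
}

With a helper like this, locking on a per-cluster key serialises config reads for the same cluster while leaving lookups on different clusters concurrent, as the data source change below shows.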
func dataSourceIBMContainerClusterConfigRead(d *schema.ResourceData, meta interface{}) error {
	csClient, err := meta.(ClientSession).VpcContainerAPI()
	if err != nil {
		return err
	}
	csAPI := csClient.Clusters()
	name := d.Get("cluster_name_id").(string)
	download := d.Get("download").(bool)
	admin := d.Get("admin").(bool)
	configDir := d.Get("config_dir").(string)
	network := d.Get("network").(bool)
	// Serialise reads per cluster so concurrent lookups cannot race on the
	// shared config download directory.
	clusterId := "Cluster_Config_" + name
	ibmMutexKV.Lock(clusterId)
	defer ibmMutexKV.Unlock(clusterId)
	// ...
-------
We are looking into the above approach.
No need for this PR anymore - it can be closed, since the fix is in https://github.com/IBM-Cloud/terraform-provider-ibm/pull/3264, which is in v1.35.0.
closing the PR
Related to https://github.com/IBM-Cloud/terraform-provider-ibm/issues/2806