qnix-databricks opened 1 month ago
Adding an autoscale block doesn't appear to change the num_workers = 0 behavior:
```hcl
resource "databricks_cluster" "dev_shared_clusters" {
  for_each                    = module.shared_cluster_policies_dev_developer.policies
  cluster_name                = "Dev Shared ${each.value.cluster_policy.name}"
  spark_version               = data.databricks_spark_version.latest_lts.id
  policy_id                   = each.value.cluster_policy.id
  apply_policy_default_values = false

  autoscale {
    min_workers = 1
    max_workers = 50
  }
}
```
Output of `terraform plan`:

```
# databricks_cluster.dev_shared_clusters["small"] will be created
+ resource "databricks_cluster" "dev_shared_clusters" {
    + apply_policy_default_values  = false
    + autotermination_minutes      = 60
    + cluster_id                   = (known after apply)
    + cluster_name                 = "Dev Shared shared - qta_dev - small"
    + default_tags                 = (known after apply)
    + driver_instance_pool_id      = (known after apply)
    + driver_node_type_id          = (known after apply)
    + enable_elastic_disk          = (known after apply)
    + enable_local_disk_encryption = (known after apply)
    + id                           = (known after apply)
    + node_type_id                 = (known after apply)
    + num_workers                  = 0
    + policy_id                    = "00002C211A3A8831"
    + spark_version                = "15.4.x-scala2.12"
    + state                        = (known after apply)
    + url                          = (known after apply)

    + autoscale {
        + max_workers = 50
        + min_workers = 1
      }
  }
```
Configuration
Please find the module definitions in the attached archive: modules.tgz
Expected Behavior
It should be possible to create a cluster from the Shared cluster policy.
Actual Behavior
Note that the configuration does not specify num_workers, yet the plan adds num_workers = 0. I tried setting apply_policy_default_values to true, to false, and omitting it entirely; the result is the same (see the `terraform plan` output above). When I create the cluster manually in the UI using the same Shared policy, num_workers = 0 is not added automatically, and the cluster is created without issue.

Steps to Reproduce
Run `terraform apply` (the module code is attached).
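For reference, the variants described above all produce the same num_workers = 0 in the plan. A sketch of the tried configurations (other arguments as in the resource block above):

```hcl
# Variant 1: let the policy apply its defaults
resource "databricks_cluster" "dev_shared_clusters" {
  for_each                    = module.shared_cluster_policies_dev_developer.policies
  cluster_name                = "Dev Shared ${each.value.cluster_policy.name}"
  spark_version               = data.databricks_spark_version.latest_lts.id
  policy_id                   = each.value.cluster_policy.id
  apply_policy_default_values = true

  autoscale {
    min_workers = 1
    max_workers = 50
  }
}

# Variant 2: apply_policy_default_values = false (as shown above)
# Variant 3: apply_policy_default_values omitted entirely
# All three plans include "+ num_workers = 0".
```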
Terraform and provider versions
```
% tf version
Terraform v1.9.7
on darwin_arm64
```