Closed: harrisonschueler closed this issue 1 year ago
This happens because, by default, we choose nodes with no local_disk. p2.xlarge has no local disk, but g4dn.xlarge has a 125 GB local SSD:
https://github.com/databricks/terraform-provider-databricks/blob/master/clusters/data_node_type.go#L43
@harrisonschueler the block below should work:
data "databricks_node_type" "g4dn_xlarge" {
category = "GPU Accelerated"
min_memory_gb = "16"
min_cores = "4"
min_gpus = "1"
local_disk = true
}
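The "Changes to Outputs" shown below presumably comes from an output block along these lines (a sketch; only the output name is taken from the plan output):

output "g4dn_xlarge_id" {
  # resolved instance type picked by the data source above
  value = data.databricks_node_type.g4dn_xlarge.id
}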
Changes to Outputs:
+ g4dn_xlarge_id = "g4dn.xlarge"
Yes, I already identified this, but it still makes sense to fix it: why should the number of local disks take precedence over memory when selecting nodes?
Because you can specify the local_disk parameter 😄
it should be fixed in #1856 - can you validate it using the current master?
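One way to validate against the current master, as a sketch rather than official guidance: build the provider locally and point Terraform at the build directory with a dev_overrides block in ~/.terraformrc (the path below is a placeholder):

provider_installation {
  dev_overrides {
    # placeholder path: directory containing the locally built terraform-provider-databricks binary
    "databricks/databricks" = "/path/to/terraform-provider-databricks"
  }
  # fall back to normal installation for all other providers
  direct {}
}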
Configuration
data "databricks_node_type" "g4dn_xlarge"{ category = "GPU Accelerated" min_memory_gb = "16" min_cores="4" min_gpus = "1" }
data "databricks_spark_version" "runtime_11_0_ml" { ml=true gpu=true scala = "2.12" spark_version = "3.3.0" }
resource "databricks_cluster" "research" { cluster_name = "Research Cluster" spark_version = data.databricks_spark_version.runtime_11_0_ml.id node_type_id = "g4dn_xlarge" autotermination_minutes = 20 autoscale { min_workers = 1 max_workers = 50 }
Expected Behavior
A cluster using the g4dn.xlarge node type (selected via data.databricks_node_type.g4dn_xlarge) should be created.
Actual Behavior
A p2.xlarge node is created instead.
Steps to Reproduce
Use the above configuration to create a cluster with the node type specified.
Terraform and provider versions
1.3.0
Debug Output
Important Factoids
changing "node_type_id = data.databricks_node_type.g4dn_xlarge.id" to "node_type_id = "g4dn_xlarge" it creates the correct node type