leilifremont opened this issue 12 months ago (status: Open)
Thanks for raising this issue. It seems this is supported by the API. For further usage questions, I'd suggest filing the issue on the TF community forum: https://discuss.hashicorp.com/c/terraform-providers/tf-azure/34. Below is the config I tested with:
resource "azurerm_cosmosdb_postgresql_cluster" "test" {
name = "acctestclustertest02"
resource_group_name = azurerm_resource_group.test.name
location = azurerm_resource_group.test.location
administrator_login_password = "G@Sh1DpR4!"
coordinator_storage_quota_in_mb = 262144
coordinator_vcore_count = 4
node_count = 2
citus_version = "12.1"
coordinator_public_ip_access_enabled = false
ha_enabled = true
coordinator_server_edition = "MemoryOptimized"
maintenance_window {
day_of_week = 1
start_hour = 9
start_minute = 1
}
node_public_ip_access_enabled = true
node_server_edition = "GeneralPurpose"
sql_version = "16"
preferred_primary_zone = 2
node_storage_quota_in_mb = 262144
node_vcores = 4
shards_on_coordinator_enabled = false
tags = {
Env = "Test2"
}
}
Hi @neil-yechenwei,
I double-checked on my end. Yes, this setting (262144) worked for me as recently as this Monday; I verified it from the terraform plan. But it started to fail two days ago:
+ node_server_edition      = "MemoryOptimized"
+ node_storage_quota_in_mb = 262144
+ node_vcores              = 2
I checked the Azure portal: in the Scale section, the storage dropdown for the cluster I provisioned with 262144 is now blank (meaning it's no longer a valid option), and the minimum option is 512 GiB (refer to the screen capture below).
You can also refer to Microsoft's documentation. The minimum storage for a multi-node cluster is 512 GiB (it's different for a single-node cluster, but node_count can't be set to 1 according to your documentation, and there is no "1 node" option in the Azure portal), and it seems this check is now being enforced:
https://learn.microsoft.com/en-us/azure/cosmos-db/postgresql/resources-compute
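In case it helps others, here is a minimal sketch of a config I'd expect to pass, assuming the new 512 GiB (524288 MB) minimum for multi-node clusters is the only change; the resource name and password are just placeholders:

resource "azurerm_cosmosdb_postgresql_cluster" "sketch" {
  name                            = "example-cluster"                    # hypothetical name
  resource_group_name             = azurerm_resource_group.test.name
  location                        = azurerm_resource_group.test.location
  administrator_login_password    = "REPLACE_ME"                         # placeholder
  coordinator_storage_quota_in_mb = 262144
  coordinator_vcore_count         = 4
  node_count                      = 2
  node_server_edition             = "MemoryOptimized"
  node_vcores                     = 2

  # 524288 MB (512 GiB) appears to be the smallest worker storage the service
  # currently accepts for multi-node clusters, even though the docs list smaller values.
  node_storage_quota_in_mb        = 524288
}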
Can you try to destroy your cluster and recreate it or create a new cluster?
Having the same issue here with a size of 131072. Seems like the allowed values in Azure changed. It works from 524288 onwards for me.
Issue is still valid; having the same issue with a size of 131072.
Issue is still valid; having the same issue with a size of 32768, and it works from 524288 onwards for me, as mentioned above by @JoshuaSimon.
Terraform Version
1.5.5
AzureRM Provider Version
3.76.0
Affected Resource(s)/Data Source(s)
azurerm_cosmosdb_postgresql_cluster
Terraform Configuration Files
Debug Output/Panic Output
Expected Behaviour
Expected the resource to be created; this parameter value actually worked last week.
From the documentation: node_storage_quota_in_mb - (Optional) The storage quota in MB on each worker node. Possible values are 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608 and 16777216.
It seems 32768, 65536, 131072 and 262144 don't work in my case? Is this related to the coordinator disk size?
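Until this is clarified, here is a rough sketch of catching the problem at plan time with a variable validation; the variable name is made up, and the allowed list assumes only 524288 MB and above currently succeed for multi-node clusters:

variable "node_storage_quota_in_mb" {
  type    = number
  default = 524288

  validation {
    # Assumption: only 524288 MB (512 GiB) and larger are currently accepted for
    # multi-node clusters, even though the provider docs also list smaller values.
    condition     = contains([524288, 1048576, 2097152, 4194304, 8388608, 16777216], var.node_storage_quota_in_mb)
    error_message = "node_storage_quota_in_mb must be 524288 (512 GiB) or larger for a multi-node cluster."
  }
}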
Actual Behaviour
This failed:

╷
│ Error: creating Server Groupsv 2 (Subscription: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
│ Resource Group Name: "PeterTest10252023"
│ Server Groupsv 2 Name: "peter1025primary"): performing Create: unexpected status 400 with error: bad_request: Worker disk size of 262144 is not allowed.
│
│   with azurerm_cosmosdb_postgresql_cluster.primary_replica,
│   on main.tf line 47, in resource "azurerm_cosmosdb_postgresql_cluster" "primary_replica":
│   47: resource "azurerm_cosmosdb_postgresql_cluster" "primary_replica" {
│
│ creating Server Groupsv 2 (Subscription: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
│ Resource Group Name: "PeterTest10252023"
│ Server Groupsv 2 Name: "peter1025primary"): performing Create: unexpected status 400 with error: bad_request: Worker disk size of 262144 is not allowed.
╵
Steps to Reproduce
No response
Important Factoids
No response
References
No response