JCMTrackman opened this issue 2 years ago
I had the same issue. From the outset it looks like whatever process converts the GB value to bytes is running out of space for the figure. To get this running while someone fixes it, try 5 GB and then manually change it in both Azure and the terraform config; not ideal, but it allowed me to continue with my work.
I did some testing on this issue, and I was able to create/update the database as long as I used one of the following values with sku_name = "S0": 1, 2, 5, 10, 20, 30, 40, 50, 100, 150, 200, 250. Different values fail even when using PowerShell.
@JCMTrackman, @wc-whiteheadd, can you re-test this, please?
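Based on the list in the comment above, here is a minimal sketch of what a client-side pre-flight check could look like. This is a hypothetical helper, not the provider's actual validation logic, and the size list is taken only from the S0 test results reported above:

```go
package main

import "fmt"

// validS0SizesGB are the max_size_gb values that succeeded for sku_name = "S0"
// in the test above. Hypothetical list; other SKUs will accept different sizes.
var validS0SizesGB = map[int]bool{
	1: true, 2: true, 5: true, 10: true, 20: true, 30: true,
	40: true, 50: true, 100: true, 150: true, 200: true, 250: true,
}

// validateS0MaxSize is a hypothetical check that would fail fast locally
// instead of letting the Azure API reject the size.
func validateS0MaxSize(gb int) error {
	if !validS0SizesGB[gb] {
		return fmt.Errorf("max_size_gb %d is not supported for sku_name \"S0\"", gb)
	}
	return nil
}

func main() {
	fmt.Println(validateS0MaxSize(250)) // <nil>
	fmt.Println(validateS0MaxSize(3))   // error
}
```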
We're having the same issue with sku_name = "S3" and max_size_gb = 250.
Search over the repo showed me this line of code: https://github.com/hashicorp/terraform-provider-azurerm/blob/2f9fc6749f5d50a733f36237cdfafd877b49bf92/internal/services/mssql/mssql_database_resource.go#L270
Unfortunately, I am not (yet) familiar with Go syntax. Could it be that v.(int) forces the entire expression v.(int) * 1073741824 to be int, so it overflows?
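For what it's worth, here is a minimal sketch of that suspicion. v.(int) is a Go type assertion that yields an int, and the width of int depends on the build target: on a 32-bit build the product wraps around, while widening to int64 before multiplying does not. Simulated below with an explicit int32:

```go
package main

import "fmt"

func main() {
	// Simulate a 32-bit build, where int is 32 bits wide.
	var maxSizeGb int32 = 2

	// 2 * 2^30 = 2^31, one past math.MaxInt32, so the product wraps around.
	fmt.Println(maxSizeGb * 1073741824) // -2147483648

	// Widening before multiplying keeps the arithmetic in 64 bits.
	fmt.Println(int64(maxSizeGb) * 1073741824) // 2147483648
}
```

If that reading is right, a cast like int64(v.(int)) * 1073741824 at the linked line would avoid the wraparound; this is a guess from the reported error values, not a confirmed diagnosis.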
Additional question: could it be Windows-related? It seems that we have this problem only on our local devices, while it works fine on our Linux DevOps agents.
@egorshulga I just retested your scenario (S3, 250 GB) and I had no issue :( And for your additional question, I am using Windows to test this, so it doesn't look like a direct OS problem.
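One note on the OS question: in Go the width of int is fixed by the target architecture (GOARCH), not by the operating system, so what would matter is whether the binaries involved are 32-bit or 64-bit builds. A quick way to see what a given build uses:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// strconv.IntSize is 64 on amd64/arm64 builds and 32 on 386/arm builds,
	// regardless of whether the OS is Windows or Linux.
	fmt.Printf("int is %d bits on this build\n", strconv.IntSize)
}
```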
Got the error with S0 and a 2 GB max size:

```hcl
max_size_gb = 2
read_scale  = false
sku_name    = "S0"
```

```
╷ StatusCode=400 -- Original Error: Code="InvalidMaxSizeTierCombination" Message="The tier 'Standard' does not support the database max size '-2147483648'."
```
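The reported values line up with a 32-bit wraparound: both 2 GB here and the 250 GB case above reduce to exactly -2147483648 when the 64-bit byte count is truncated to 32 bits. A small sketch of the arithmetic:

```go
package main

import "fmt"

func main() {
	for _, gb := range []int64{2, 250} {
		bytes := gb * 1073741824 // exact product in 64-bit arithmetic
		wrapped := int32(bytes)  // what a 32-bit int would end up holding
		fmt.Printf("%3d GB -> %12d bytes -> int32 %d\n", gb, bytes, wrapped)
	}
	// Output:
	//   2 GB ->   2147483648 bytes -> int32 -2147483648
	// 250 GB -> 268435456000 bytes -> int32 -2147483648
}
```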
Hey, I got the same error when using sku_name = "DW100c".

```
Error: Code="InvalidMaxSizeTierCombination" Message="The tier 'DataWarehouse' does not support the database max size '2147483648000'
```
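Worth noting: 2147483648000 bytes is exactly 2000 GB, a positive value, so this one does not look like the same wraparound; it may be a separate validation issue with the sizes the DataWarehouse tier accepts. The arithmetic, for reference:

```go
package main

import "fmt"

func main() {
	const bytes int64 = 2147483648000
	fmt.Println(bytes / (1 << 30)) // 2000, i.e. the value sent was 2000 GB
}
```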
I am having the same issue with max_size_gb (tried different values: 5, 10, 20, 30, 50), read_scale = false, and sku_name = "ElasticPool", but the result is the same: terraform reports that the tier 'Standard' does not support the database max size '-2147483648'.
I did a test and applied the DB without max_size_gb and sku_name (with null values), and the database was created with the default value of 250 GB. Then I adjusted the value in the portal from 250 GB to 30 GB, added both max_size_gb and sku_name in terraform, ran apply, and there were no infrastructure changes.

So it seems that during creation the given value (regardless of whether it is 5, 10, 20 or 50) is not converted to bytes properly.

And one more test: with 30 GB already set in state and in the portal, I tried to change the value in terraform from 30 GB to 50 GB. Apply was successful, but no changes appeared in the portal; it remains at 30 GB.

AzureRM Provider Version: 3.7.0
Affected Resource(s)/Data Source(s): azurerm_mssql_database

Please help me understand what is going wrong.
Is there an existing issue for this?
Terraform Version
1.2.4
AzureRM Provider Version
3.13.0
Affected Resource(s)/Data Source(s)
azurerm_mssql_database
Terraform Configuration Files
Debug Output/Panic Output
Expected Behaviour
The database gets created normally
Actual Behaviour
The max size (possibly) overflows and ends up negative
Steps to Reproduce
terraform apply
Important Factoids
No response
References
No response