Thanks for opening this issue. May I ask what the values of var.envtype and var.level are? Could you provide the Terraform configuration with real values rather than variable names? Otherwise we cannot reproduce it.
var.level is one of the numbers 1-10 or the value "POC", and var.envtype is "DEV" or "PRD". The level determines the values of all of these variables, which map to the elastic pool and database settings (see the lookup sketch after the variable definitions). As I mentioned previously, this has worked for months and we have not changed our Terraform code. Let me know if you need any other information. Thanks!
variable "ep_name" { type = map default = { "POC" = "BC_Gen5" "1" = "PremiumPool" "2" = "BC_Gen5" "3" = "BC_Gen5" "4" = "BC_Gen5" "5" = "BC_Gen5" "6" = "BC_Gen5" "7" = "BC_Gen5" "8" = "BC_Gen5" "9" = "BC_Gen5" "10" = "BC_Gen5" }
}
variable "ep_edition" { default = { "POC" = "BusinessCritical" "1" = "Premium" "2" = "BusinessCritical" "3" = "BusinessCritical" "4" = "BusinessCritical" "5" = "BusinessCritical" "6" = "BusinessCritical" "7" = "BusinessCritical" "8" = "BusinessCritical" "9" = "BusinessCritical" "10" = "BusinessCritical" }
}
variable "ep_compute_gen" { default = { "POC" = "Gen5" "1" = null "2" = "Gen5" "3" = "Gen5" "4" = "Gen5" "5" = "Gen5" "6" = "Gen5" "7" = "Gen5" "8" = "Gen5" "9" = "Gen5" "10" = "Gen5" }
}
variable "ep_size" { type = map default = { "POC" = "1073741824000" #1000GG "1" = "536870912000" #512GB "2" = "805306368000" #768GB "3" = "805306368000" #768GB "4" = "805306368000" #768GB "5" = "1073741824000" #1000GB "6" = "1073741824000" #1000GB "7" = "1073741824000" #1000GB "8" = "1073741824000" #1000GB "9" = "1073741824000" #1000GB "10" = "1073741824000" #1000GB
}
}
variable "ep_size_gb" { type = map default = { "POC" = "1000 " #1000GG "1" = "512" #512GB "2" = "768" #768GB "3" = "768" #768GB "4" = "768" #768GB "5" = "1000" #1000GB "6" = "1000" #1000GB "7" = "1000" #1000GB "8" = "1000" #1000GB "9" = "1000" #1000GB "10" = "1000" #1000GB
}
}
variable "vcores" { type = map default = { "POC" = "16" "1" = "250" "2" = "4" "3" = "6" "4" = "8" "5" = "10" "6" = "12" "7" = "14" "8" = "16" "9" = "18" "10" = "20" } }
So I simplified this considerably and it still generates the same error. Here is the code; you just need a bacpac to test with and a resource group in Azure. Here is the error being generated again:
Error: sql.DatabasesClient#CreateImportOperation: Failure sending request: StatusCode=400 -- Original Error: Code="Failed" Message="The async operation failed." InnerError={"unmarshalError":"json: cannot unmarshal array into Go struct field serviceError2.details of type map[string]interface {}"} AdditionalInfo=[{"code":"0","details":[{"code":"0","message":"The specified service level objective 'ElasticPool' does not match the service level objective 'SQLDB_BC_Gen5_6' of database 'GolfStream'.","severity":"16","target":null}],"innererror":[],"message":"The specified service level objective 'ElasticPool' does not match the service level objective 'SQLDB_BC_Gen5_6' of database 'GolfStream'.","target":null}]
provider "azurerm" { version = "=2.25.0"
features { virtual_machine {
delete_os_disk_on_deletion = true
}
} }
resource "azurerm_resource_group" "rg" { name = "rgname" location = "East US" tags = {
} }
resource "azurerm_sql_database" "golfstream" { name = "yourdb" resource_group_name = azurerm_resource_group.rg.name location = "East US" server_name = azurerm_sql_server.sql.name elastic_pool_name = azurerm_mssql_elasticpool.primary.name edition = "BusinessCritical" max_size_bytes = 805306368000 create_mode = "default"
timeouts { create = "120m" }
import { storage_uri = "https://mystorageaccount.blob.core.windows.net/sqlexports/yourdatabase.bacpac" storage_key = "redacted" storage_key_type = "StorageAccessKey" administrator_login = "redacted" administrator_login_password = "redacted" authentication_type = "SQL" }
}
resource "azurerm_sql_server" "sql" {
name = "sqlserver" resource_group_name = azurerm_resource_group.rg.name location = "East US" version = "12.0" administrator_login = "redacted" administrator_login_password = "redacted" connection_policy = "Redirect"
}
resource "azurerm_mssql_elasticpool" "primary" { depends_on = [azurerm_sql_server.sql] name = "epBojo" resource_group_name = azurerm_resource_group.rg.name location = "East US" server_name = azurerm_sql_server.sql.name max_size_gb = 768
sku { name = "BC_Gen5" #PremiumPool for DTU else ie BC_Gen5 tier = "BusinessCritical" #Premium for DTU Business Critical family = "Gen5" #Gen5 else null capacity = 6 #250T DTU for Lite }
per_database_settings { min_capacity = 0 max_capacity = 6 }
}
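As the inline comments suggest, the DTU-based variant of this pool (our level "1" mapping: PremiumPool / Premium / 512GB / 250) would look roughly like the sketch below; the resource label and pool name are illustrative only:

resource "azurerm_mssql_elasticpool" "dtu_example" {
  depends_on          = [azurerm_sql_server.sql]
  name                = "epBojoDtu" # illustrative name
  resource_group_name = azurerm_resource_group.rg.name
  location            = "East US"
  server_name         = azurerm_sql_server.sql.name
  max_size_gb         = 512

  sku {
    name     = "PremiumPool" # DTU-based pool
    tier     = "Premium"
    capacity = 250           # eDTUs; no hardware family for DTU-based pools
  }

  per_database_settings {
    min_capacity = 0
    max_capacity = 250
  }
}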
This seems to be an API issue. I did the same thing using the Azure REST API and got the same failure.
Thanks for your investigation. I'd suggest filing an issue against the Azure REST API and pasting the issue link here for tracking.
Microsoft issued a fix for this and all is working as expected now!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
Terraform (and AzureRM Provider) Version
Terraform Version: 0.12.28
AzureRM Provider Version: 2.25.0
Affected Resource(s)
azurerm_mssql_elasticpool
azurerm_sql_database
Terraform Configuration Files
Debug Output
Panic Output
sql.DatabasesClient#CreateImportOperation: Failure sending request: StatusCode=400 -- Original Error: Code="Failed" Message="The async operation failed." InnerError={"unmarshalError":"json: cannot unmarshal array into Go struct field serviceError2.details of type map[string]interface {}"} AdditionalInfo=[{"code":"0","details":[{"code":"0","message":"The specified service level objective 'ElasticPool' does not match the service level objective 'P1M250' of database 'PaymentMgmtSystem'.","severity":"16","target":null}],"innererror":[],"message":"The specified service level objective 'ElasticPool' does not match the service level objective 'P1M250' of database 'PaymentMgmtSystem'.","target":null}]
Expected Behavior
Databases to be imported
Actual Behavior
The error listed above. Additionally, the databases show up in Azure, but the imports do not appear to complete; if you rerun the job, Terraform reports that the databases already exist and need to be imported into the state file (an example import command is sketched below).
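A hedged sketch of that cleanup step, assuming the resource address from the repro configuration above; the subscription, resource group, server, and database segments are placeholders for your own values:

terraform import azurerm_sql_database.golfstream /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Sql/servers/<server-name>/databases/<database-name>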
TO NOTE: This was working fine until yesterday afternoon. Coincidentally, our normal production builds failed on DB imports and we had to migrate our sizing from MB to bytes in our PowerShell commands. It would appear Microsoft changed something on the backend and the provider may need updating.
Steps to Reproduce
terraform apply
Important Factoids
References