Closed robinfrankhuizen closed 1 year ago
I would say that this is a bug in the platform or something like that. Maybe the azurerm provider reports success too early. But it's better to escalate it there, as it depends less on the Databricks provider and more on the readiness of the Azure workspace.
You might be right that this is more of an Azure-related issue than a problem with the Databricks provider calling the API.
I have the feeling that the storage account is ready, but the containers are not. If the global init script is created directly after the workspace becomes available, the storage exception appears (sometimes). The global init script is saved in one of the containers on the storage account. Other things like workspace settings or groups/users are always created without any issue.
Following up - is this issue still relevant?
We have solved it by adding a time_sleep resource that depends_on the workspace ID, with a 30s create timeout. The global init script then waits 30 seconds via a depends_on on this time_sleep.
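A minimal sketch of that workaround, assuming illustrative resource names (`azurerm_databricks_workspace.this`, the init script name, and the file path are placeholders, not from the original config):

```hcl
# Wait 30 seconds after the workspace reports ready before using its API.
# Requires the hashicorp/time provider.
resource "time_sleep" "wait_for_workspace" {
  depends_on      = [azurerm_databricks_workspace.this] # hypothetical name
  create_duration = "30s"
}

resource "databricks_global_init_script" "this" {
  # Only create the init script once the delay has elapsed.
  depends_on = [time_sleep.wait_for_workspace]

  name           = "example-init-script"
  content_base64 = base64encode(file("${path.module}/init.sh"))
}
```

The `depends_on` on the workspace ensures the timer only starts after `terraform apply` considers the workspace created, which papers over the window where the workspace API is up but its backing storage containers are not yet accessible.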
Closing as it was solved by adding the delay after workspace creation
For future searchers: also encountered this issue with the following error message in the logs:
HTTP/2.0 503 Service Unavailable
{
"error_code": "TEMPORARILY_UNAVAILABLE",
"message": "Missing credentials to access Azure container"
}
Seems like a similar issue; adding a wait likewise seems to have solved it.
We ran into the following issue when using Terraform to deploy Databricks on Azure.
Configuration
Our configuration is subdivided into modules but the relevant parts are below.
Expected Behavior
A Databricks workspace is deployed and a global init script is added.
Actual Behavior
An error occurred:
Steps to Reproduce
terragrunt apply
Terraform and provider versions
hashicorp/azurerm v3.34.0
databricks/databricks v1.3.0
Debug Output
Important Factoids