@alias-santi the metastore_assignment resource is unintuitive right now (since the metastore_summary API does not give all the necessary information)
Platform team is building an account-level API that should resolve this, but we are still waiting on an ETA for that
@nkvuong thanks for getting back to me. I assumed it would be something like that. For now we can live with the issue and add a retry in the pipeline for the apply, as it usually comes back fine on the second apply.
This should be fixed in 1.25.0
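For anyone landing here before upgrading: a minimal provider version constraint to pick up that release might look like the sketch below (assuming the official databricks/databricks registry provider).

```hcl
terraform {
  required_providers {
    databricks = {
      # 1.25.0 is the release referenced above as containing the fix
      source  = "databricks/databricks"
      version = ">= 1.25.0"
    }
  }
}
```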
Configuration
providers.tf
data.tf
locals.tf
main.tf
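The original file contents were attached as collapsed snippets and are not reproduced here. Based on the description under Important Factoids (two workspace IDs supplied as a variable and assigned via for_each, with a workspace-level provider), the relevant pieces presumably look roughly like the sketch below; the host, variable names, and metastore reference are illustrative assumptions, not the real configuration.

```hcl
# providers.tf (sketch) - workspace-level provider pointing at one of the workspace URLs
provider "databricks" {
  host = var.workspace_url # assumption: e.g. https://adb-1111111111111111.1.azuredatabricks.net
}

# main.tf (sketch) - assign the same metastore to each workspace ID passed in
variable "workspace_ids" {
  description = "Workspace IDs to attach to the metastore (two in this POC)"
  type        = set(string)
}

variable "metastore_id" {
  description = "ID of the Unity Catalog metastore to assign"
  type        = string
}

resource "databricks_metastore_assignment" "default_metastore" {
  for_each     = var.workspace_ids
  metastore_id = var.metastore_id
  workspace_id = each.value
}
```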
Expected Behavior
Metastore assignments to have been applied successfully
Actual Behavior
╷
│ Error: cannot read metastore assignment: No metastore assigned for the current workspace.
│
│   with databricks_metastore_assignment.default_metastore[""],
│   on main.tf line 185, in resource "databricks_metastore_assignment" "default_metastore":
│  185: resource "databricks_metastore_assignment" "default_metastore" {
│
╵
Steps to Reproduce
Terragrunt apply (we're using Terragrunt as a wrapper, although the issue persists with a plain terraform apply when supplying tfvars)
Terraform and provider versions
Terraform v1.0.9 on darwin_arm64
Debug Output
Important Factoids
This seems to be an intermittent issue from testing when destroying and re-applying resources. In this POC, we're supplying two workspace IDs as a variable to do the metastore assignment. It would appear as though the PUT request to do the assignment works fine, but the read operation the provider does afterwards to validate successful creation periodically comes back with a 404, as if the metastore_summary API hasn't quite updated in time to show the newly assigned metastore.
We're using a workspace provider that points to one of the workspace URLs, per the comments we found while digging online - https://discuss.hashicorp.com/t/databricks-unity-catalog-account-vs-workspace-level-understanding/42570#:~:text=Unity%20Catalog%20API,as%20the%20host. The issue persists even when using dedicated providers per workspace and not provisioning the resource with a for_each block (see the sketch below).
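For reference, the dedicated-provider-per-workspace variant mentioned above would look roughly like this; the alias, host, and workspace ID are placeholders rather than our real values.

```hcl
# Variant without for_each: one provider alias and one assignment per workspace
provider "databricks" {
  alias = "ws1"
  host  = "https://adb-1111111111111111.1.azuredatabricks.net" # placeholder workspace URL
}

resource "databricks_metastore_assignment" "ws1" {
  provider     = databricks.ws1
  metastore_id = var.metastore_id
  workspace_id = 1111111111111111 # placeholder workspace ID
}
```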
Is this expected behaviour? It feels like there needs to be a slight delay between the create and read operations to give the metastore_summary endpoint enough time to reflect the new assignment, or perhaps the metastore_summary API itself is the issue?
For now, we just run a re-apply and all is well, but we're looking for any advice/comments on this suspected bug!
Thanks