hashicorp / terraform-provider-azurerm

Terraform provider for Azure Resource Manager
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
Mozilla Public License 2.0

Creating multiple azurerm_kusto_cluster_managed_private_endpoint at once causes [Conflict] errors on cluster #18501

Open ncbrown1 opened 2 years ago

ncbrown1 commented 2 years ago

Terraform Version

1.2.9

AzureRM Provider Version

3.23.0

Affected Resource(s)/Data Source(s)

azurerm_kusto_cluster_managed_private_endpoint

Terraform Configuration Files

locals {
  kusto_storage_pairs = [
    for pair in setproduct(keys(var.kusto_clusters), var.storage_account_names) : {
      name                 = "${pair[0]}-${pair[1]}"
      kusto_cluster_name   = pair[0]
      storage_account_name = pair[1]
    }
  ]
}

module "kusto_managed_pe" {
  source   = "../../unmanaged-modules/kusto-managed-pe"
  for_each = { for pair in local.kusto_storage_pairs : pair.name => pair }

  ingestion_storage_account_name = each.value.storage_account_name
  kusto_cluster_name             = each.value.kusto_cluster_name
  resource_group                 = var.resource_group
}

### MODULE CONTENTS
data "azurerm_storage_account" "ingestion_storage_account" {
  name                = var.ingestion_storage_account_name
  resource_group_name = var.resource_group
}

resource "azurerm_kusto_cluster_managed_private_endpoint" "managed_pe" {
  name = "${var.kusto_cluster_name}-${var.ingestion_storage_account_name}"

  resource_group_name          = var.resource_group
  cluster_name                 = var.kusto_cluster_name
  private_link_resource_id     = data.azurerm_storage_account.ingestion_storage_account.id
  private_link_resource_region = data.azurerm_storage_account.ingestion_storage_account.location
  group_id                     = "blob"
  request_message              = "Please approve for ingestion."
}
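
For context (names and values illustrative, not from the original report), with two clusters and two storage accounts the `setproduct` local above fans the module out into four endpoint instances, all of which Terraform attempts to create in parallel against the same clusters:

```hcl
# Illustrative expansion, assuming:
#   var.kusto_clusters        = { "kc1" = {}, "kc2" = {} }
#   var.storage_account_names = ["sa1", "sa2"]
#
# local.kusto_storage_pairs then evaluates to:
# [
#   { name = "kc1-sa1", kusto_cluster_name = "kc1", storage_account_name = "sa1" },
#   { name = "kc1-sa2", kusto_cluster_name = "kc1", storage_account_name = "sa2" },
#   { name = "kc2-sa1", kusto_cluster_name = "kc2", storage_account_name = "sa1" },
#   { name = "kc2-sa2", kusto_cluster_name = "kc2", storage_account_name = "sa2" },
# ]
```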

Debug Output/Panic Output

│ Error: waiting for creation/update of Managed Private Endpoints: (Managed Private Endpoint Name "lilogsusprod00-inlogsplatformeusp01" / Cluster Name "lilogsusprod00" / Resource Group "rg-eus-inlogs-prod-01"): Code="ServiceIsInMaintenance" Message="[Conflict] Cluster 'lilogsusprod00' is in process of maintenance for a short period. You may retry to invoke the operation in a few minutes."
│
│   with module.kusto_managed_pe["lilogsusprod00-inlogsplatformeusp01"].azurerm_kusto_cluster_managed_private_endpoint.managed_pe,
│   on ../../unmanaged-modules/kusto-managed-pe/main.tf line 10, in resource "azurerm_kusto_cluster_managed_private_endpoint" "managed_pe":
│   10: resource "azurerm_kusto_cluster_managed_private_endpoint" "managed_pe" {

Expected Behaviour

If this is a matter of locking the Kusto cluster resource, each azurerm_kusto_cluster_managed_private_endpoint resource should wait to claim the lock before executing, so that the endpoints are created sequentially and the entire terraform apply operation does not fail.

Actual Behaviour

Only one azurerm_kusto_cluster_managed_private_endpoint resource can be provisioned successfully at a time; any additional resources fail because the provider reports the cluster as under maintenance (likely because the first provisioning operation claimed some sort of lock on the cluster).
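
One way to serialize creation inside the configuration itself (a sketch, not provider behavior; resource names are illustrative) is to chain discrete endpoint resources with `depends_on` so each waits for the previous one. Note this only works for resources written out individually: instances of a single `for_each` resource cannot depend on each other, which is why `-parallelism=1` is the fallback for the module-based setup above.

```hcl
resource "azurerm_kusto_cluster_managed_private_endpoint" "pe_sa1" {
  name                     = "kc1-sa1"
  resource_group_name      = var.resource_group
  cluster_name             = "kc1"
  private_link_resource_id = data.azurerm_storage_account.sa1.id
  group_id                 = "blob"
}

resource "azurerm_kusto_cluster_managed_private_endpoint" "pe_sa2" {
  # Waits until the first endpoint finishes, avoiding the
  # ServiceIsInMaintenance conflict on the shared cluster.
  depends_on = [azurerm_kusto_cluster_managed_private_endpoint.pe_sa1]

  name                     = "kc1-sa2"
  resource_group_name      = var.resource_group
  cluster_name             = "kc1"
  private_link_resource_id = data.azurerm_storage_account.sa2.id
  group_id                 = "blob"
}
```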

Steps to Reproduce

  1. Write Terraform code that references a single Kusto cluster and multiple private link resources
  2. Add a for_each looped azurerm_kusto_cluster_managed_private_endpoint resource block that creates one managed private endpoint for each (kusto cluster, private link resource) pair
  3. terraform apply

Important Factoids

No response

References

Very similar to issue #16471

ncbrown1 commented 2 years ago

For future readers, this is my current workaround:

  1. Add a depends_on meta-argument to the azurerm_kusto_cluster_managed_private_endpoint resource to ensure it is only provisioned after all of your Kusto clusters and private link resources are created.
  2. Run terraform apply and wait until the first provisioning/locking error occurs
  3. Run terraform plan -out tfplan.out -target module.kusto_managed_pe (or whatever module contains your azurerm_kusto_cluster_managed_private_endpoint resources), so that the next apply operation is scoped to only the managed private endpoint resources
  4. Run terraform apply -parallelism=1 tfplan.out so each managed private endpoint is created sequentially without parallelism.
  5. Finish the rest of your provisioning steps by running terraform apply again with default parallelism.
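
The steps above as a command sequence (the module address is an example; substitute your own):

```
terraform apply                                          # fails with the [Conflict] error
terraform plan -out tfplan.out -target module.kusto_managed_pe
terraform apply -parallelism=1 tfplan.out                # endpoints created one at a time
terraform apply                                          # remaining resources, default parallelism
```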