IBM-Cloud / terraform-provider-ibm

https://registry.terraform.io/providers/IBM-Cloud/ibm/latest/docs
Mozilla Public License 2.0

Unable to create multiple `ibm_logs_outgoing_webhook` resources in a for_each loop #5734

Open ocofaigh opened 1 month ago

ocofaigh commented 1 month ago

Community Note

Terraform CLI and Terraform IBM Provider Version

Terraform CLI v1.9.2, IBM provider v1.70.1

Affected Resource(s)

ibm_logs_outgoing_webhook

Terraform Configuration Files

Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.

resource "ibm_logs_outgoing_webhook" "en_integration" {
  for_each    = { for idx, en in var.existing_en_instances : idx => en }
  instance_id = ibm_resource_instance.cloud_logs.guid
  region      = var.region
  name        = each.value.en_integration_name
  type        = "ibm_event_notifications"

  ibm_event_notifications {
    event_notifications_instance_id = each.value.en_instance_id
    region_id                       = each.value.en_region
  }
}

Debug Output

│ Error: ---
│ id: terraform-693866e6
│ summary: 'CreateOutgoingWebhookWithContext failed: No authentication information in
│   RequestContext'
│ severity: error
│ resource: ibm_logs_outgoing_webhook
│ operation: create
│ component:
│   name: github.com/IBM-Cloud/terraform-provider-ibm
│   version: 1.70.1
│ ---
│ 
│ 
│   with module.observability_instances.module.cloud_logs[0].ibm_logs_outgoing_webhook.en_integration["1"],
│   on ../../modules/cloud_logs/main.tf line 104, in resource "ibm_logs_outgoing_webhook" "en_integration":
│  104: resource "ibm_logs_outgoing_webhook" "en_integration" {
│ 

Panic Output

Expected Behavior

No error; all ibm_logs_outgoing_webhook resources should be created successfully.

Actual Behavior

If you try to create more than one ibm_logs_outgoing_webhook resource, the error above is returned. It appears the backend cannot handle the create requests arriving in such quick succession.

Steps to Reproduce

  1. terraform apply

Important Factoids

If I wait some time and re-apply, it passes, so this is definitely a timing issue. I cannot find any way to add a sleep inside a for_each loop in Terraform; it does not seem to be possible. Perhaps a workaround could be added to the provider to retry with a backoff wait.
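Until such a retry exists, one possible caller-side mitigation is to delay webhook creation with the hashicorp/time provider. This is only a sketch: the time_sleep resource, its name, the 120s duration, and the assumption that the failure is tied to how soon after the Cloud Logs instance the webhooks are created are all assumptions on my part, not confirmed behaviour.

terraform {
  required_providers {
    time = {
      source = "hashicorp/time"
    }
  }
}

# Fixed delay after the Cloud Logs instance is created (duration is a guess).
resource "time_sleep" "wait_for_cloud_logs" {
  depends_on      = [ibm_resource_instance.cloud_logs]
  create_duration = "120s"
}

resource "ibm_logs_outgoing_webhook" "en_integration" {
  for_each    = { for idx, en in var.existing_en_instances : idx => en }
  # Hold every webhook back until the delay above has elapsed.
  depends_on  = [time_sleep.wait_for_cloud_logs]
  instance_id = ibm_resource_instance.cloud_logs.guid
  region      = var.region
  name        = each.value.en_integration_name
  type        = "ibm_event_notifications"

  ibm_event_notifications {
    event_notifications_instance_id = each.value.en_instance_id
    region_id                       = each.value.en_region
  }
}

If the trigger is the concurrency of the create calls rather than the delay after instance creation, running terraform apply with -parallelism=1 serializes the requests and might also serve as a temporary workaround.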

References