PaloAltoNetworks / terraform-azurerm-swfw-modules

Terraform Reusable Modules for Software Firewalls on Azure
https://registry.terraform.io/modules/PaloAltoNetworks/swfw-modules/azurerm
MIT License

[Bug Report] For `loadbalancer` submodule, updating health probes causes `unexpected status 400 (400 Bad Request)` with error `InvalidResourceReference...` #75

Closed jinkang23 closed 2 months ago

jinkang23 commented 2 months ago

Describe the bug

Updating an existing health probe resource that is referenced by the inbound load balancer rules of a frontend IP configuration causes the following error when running `terraform apply`:

Error: updating Load Balancer "lb-private" (Resource Group "rg-mygroup-001") for deletion of Probe "default_health_probe":
performing CreateOrUpdate: unexpected status 400 (400 Bad Request) with error: InvalidResourceReference:
Resource /subscriptions/<REDACTED>/resourceGroups/rg-mygroup-001/providers/Microsoft.Network/loadBalancers/lb-private/probes/default_health_probe
referenced by resource /subscriptions/<REDACTED>/resourceGroups/rg-mygroup-001/providers/Microsoft.Network/loadBalancers/lb-private/loadBalancingRules/HA-ports
was not found. Please make sure that the referenced resource exists, and that both resources are in the same region.

Here's an example of the change made (the health probe key and name are renamed, which forces the probe to be replaced):

Before...

    "private" = {
      name     = "private"
      zones    = null #? westus does not support zones
      vnet_key = "transit"
      health_probes = {
        default_health_probe = {
          name                = "default_health_probe"
          protocol            = "Tcp"
          port                = 22
          interval_in_seconds = 5
        }
      }
      frontend_ips = {
        "ha-ports" = {
          name               = "private-vmseries"
          subnet_key         = "private"
          private_ip_address = "10.0.0.1"
          in_rules = {
            HA_PORTS = {
              name             = "HA-ports"
              port             = 0
              protocol         = "All"
              health_probe_key = "default_health_probe"
            }
          }
        }
      }
    }

After...

    "private" = {
      name     = "private"
      zones    = null #? westus does not support zones
      vnet_key = "transit"
      health_probes = {
        default_health_probe2 = {  #<-- updated this!
          name                = "default_health_probe2" #<-- updated this!
          protocol            = "Tcp"
          port                = 22
          interval_in_seconds = 5
        }
      }
      frontend_ips = {
        "ha-ports" = {
          name               = "private-vmseries"
          subnet_key         = "private"
          private_ip_address = "10.0.0.1"
          in_rules = {
            HA_PORTS = {
              name             = "HA-ports"
              port             = 0
              protocol         = "All"
              health_probe_key = "default_health_probe2"  #<-- updated this!
            }
          }
        }
      }
    }

A workaround is to add the following `lifecycle` block to the module's `azurerm_lb_probe` resource:

resource "azurerm_lb_probe" "this" {
  for_each = merge(coalesce(var.health_probes, {}), local.default_probe)

  loadbalancer_id = azurerm_lb.this.id

  name     = each.value.name
  protocol = each.value.protocol
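  # For Http/Https probes with no explicit port, fall back to the protocol's default port.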
  port = contains(["Http", "Https"], each.value.protocol) && each.value.port == null ? (
    local.default_http_probe_port[each.value.protocol]
  ) : each.value.port
  probe_threshold     = each.value.probe_threshold
  interval_in_seconds = each.value.interval_in_seconds
  request_path        = each.value.protocol != "Tcp" ? each.value.request_path : null

  # this is to overcome the discrepancy between the provider and Azure defaults
  # for more details see here -> https://learn.microsoft.com/en-gb/azure/load-balancer/whats-new#known-issues:~:text=SNAT%20port%20exhaustion-,numberOfProbes,-%2C%20%22Unhealthy%20threshold%22
  number_of_probes = 1

  lifecycle {
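    # Create the replacement probe before destroying the old one, so the load
    # balancing rules are never left referencing a probe that has already been deleted.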
    create_before_destroy = true
  }

}

Module Version

3.0.1

Terraform version

1.9.2

Expected behavior

No response

Current behavior

No response

Anything else to add?

No response

acelebanski commented 2 months ago

Hello @jinkang23, thanks for raising this. This issue will be fixed with PR #78.