PaloAltoNetworks / terraform-provider-panos

Terraform Panos provider
https://www.terraform.io/docs/providers/panos/
Mozilla Public License 2.0

panos_panorama_address_object being recreated after upgrading to v1.7.0 #256

Closed seanyoungberg closed 3 years ago

seanyoungberg commented 3 years ago

Describe the bug

After upgrading to 1.7.0, I'm seeing issues with panos_panorama_address_object. The objects are being forced to recreate, which then causes an error with the NAT policies that reference those objects.

Expected behavior

Address objects should not be recreated when moving to the 1.7.0 provider.

Current behavior

When running apply with 1.7.0 against state that was previously deployed with 1.6.3, Terraform attempts to replace the address objects. A NAT policy references the object, so the apply fails because the address object cannot be deleted while it is still referenced.

Terraform will perform the following actions:

  # panos_panorama_address_object.this must be replaced
-/+ resource "panos_panorama_address_object" "this" {
        description  = "The 192.168.80 network"
        device_group = "shared"
      ~ id           = "shared:localnet" -> (known after apply)
        name         = "localnet"
      - tags         = [] -> null
        type         = "ip-netmask"
        value        = "192.168.80.0/24"
      + vsys         = "vsys1" # forces replacement
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

panos_panorama_address_object.this: Destroying... [id=shared:localnet]

Error:  device-group -> tn-aws -> pre-rulebase -> nat -> rules -> foob -> dynamic-destination-translation -> translated-address

Possible solution

I tried a create_before_destroy lifecycle on the address object, which seemed to succeed in creating a new object, but a subsequent apply tried to destroy the deposed object and hit the same error from the NAT policy reference.
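
For reference, the lifecycle attempt looked roughly like this (a minimal sketch based on the address object from the reproduction steps below; create_before_destroy is the standard Terraform lifecycle argument):

resource "panos_panorama_address_object" "this" {
    name        = "localnet"
    value       = "192.168.80.0/24"
    description = "The 192.168.80 network"

    # Create the replacement object before the old one is destroyed.
    # This still leaves a deposed object that cannot be deleted while
    # the NAT rule references the name.
    lifecycle {
        create_before_destroy = true
    }
}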

Steps to reproduce

First run this with the 1.6.3 provider, then run it again with the 1.7.0 provider.
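
A sketch of how the provider version can be pinned for each run (the registry source address is an assumption here; adjust to however the provider is installed in your setup):

terraform {
    required_providers {
        panos = {
            # Assumed registry address for the provider.
            source  = "PaloAltoNetworks/panos"
            # Pin to 1.6.3 for the first apply, then change this to
            # 1.7.0 and run terraform init / apply again.
            version = "1.6.3"
        }
    }
}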

resource "panos_panorama_address_object" "this" {
    name = "localnet"
    value = "192.168.80.0/24"
    description = "The 192.168.80 network"
}

resource "panos_panorama_nat_rule_group" "this" {
    device_group = "tn-aws"
    rule {
        name = "foob"
        description = "foo - Managed by Terraform"
        original_packet {
            source_zones = ["outbound"]
            destination_zone = "outbound"
            destination_interface = "any"
            source_addresses = ["any"]
            destination_addresses = ["any"]
            service = "service-https"
        }
        translated_packet {
            source {
                dynamic_ip_and_port {
                    interface_address {
                        interface = "ethernet1/1"
                    }
                }
            }
            destination {
                dynamic_translation {
                    address = panos_panorama_address_object.this.name
                    port    = "443"
                }
            }
        }
    }
    depends_on = [panos_panorama_address_object.this]
}

Context

This caused confusion and time spent troubleshooting. For now, I have reverted to 1.6.3.

Your Environment

Tested with Terraform 0.13.5 and 0.14.x.

shinmog commented 3 years ago

The problem is that I added a new ForceNew: true parameter in both directions (device_group from the NGFW's perspective, vsys from Panorama's perspective), and Terraform interprets this as a change that requires redeploying the config.

Working on finding a solution.

shinmog commented 3 years ago

Ok, I think I've fixed this. Please check out v1.7.1 of the provider. There are now state migration functions that should handle this.
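
To pick up the fixed release, bumping the version constraint and re-initializing should be enough (a sketch, again assuming the registry source address):

terraform {
    required_providers {
        panos = {
            source  = "PaloAltoNetworks/panos"  # assumed registry address
            version = ">= 1.7.1"                # includes the state migration fix
        }
    }
}

Then run terraform init -upgrade so the newer provider is selected before the next plan/apply.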

shinmog commented 3 years ago

@seanyoungberg

Does 1.7.1 / 1.8 fix the issue for you?

seanyoungberg commented 3 years ago

@shinmog Yes, indeed. It works fine on 1.8. I also recently got confirmation from the person who initially discovered this that it works against a more complex codebase and state. Thanks for the super-quick turnaround!