hashicorp / terraform-provider-azurerm

Terraform provider for Azure Resource Manager
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
Mozilla Public License 2.0

Update managed disk with old Create option "Copy" and source_resource_id which does not exist: Fail #9154

Open AndiB42 opened 4 years ago

AndiB42 commented 4 years ago


Terraform (and AzureRM Provider) Version

Affected Resource(s)

  * azurerm_managed_disk

Terraform Configuration Files

resource "azurerm_managed_disk" "our_service_disk" {
  name                 = "our-service-name"
  location             = var.location
  resource_group_name  = azurerm_resource_group.disks_rg.name
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = "5"

  tags = {
    environment = var.env
    module      = "disks"
    product     = "our-product"
  }

  lifecycle {
    ignore_changes = [create_option, source_resource_id]
  }
}

Expected Behavior

We want to update the tags on our existing disks in the resource group. The Terraform plan shows us the following:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.disks.azurerm_managed_disk.our_service_disk will be updated in-place
  ~ resource "azurerm_managed_disk" "our_service_disk" {
        create_option        = "Copy"
        disk_iops_read_write = 500
        disk_mbps_read_write = 60
        disk_size_gb         = 5
        id                   = "/subscriptions/NEW-SUBSCRIPTION/resourceGroups/our-disks-rg/providers/microsoft.compute/disks/our-service-name"
        location             = "westeurope"
        name                 = "our-service-name"
        resource_group_name  = "our-disks-rg"
        source_resource_id   = "/subscriptions/OLD-SUBSCRIPTION/resourceGroups/NOT-EXISTING-DISKS-SNAPSHOTS-RG/providers/Microsoft.Compute/snapshots/our-service-name-snapshot"
        storage_account_type = "Standard_LRS"
      ~ tags                 = {
          + "environment" = "stable"
          + "module"      = "disks"
            "product"     = "our-product"
        }
        zones                = []
    }

Actual Behavior

Terraform shows us this error:

Error: Error creating/updating Managed Disk "our-service-name" (Resource Group "our-disks-rg"): compute.DisksClient#CreateOrUpdate: Failure sending request: StatusCode=403 -- Original Error: Code="LinkedAuthorizationFailed" Message="The client '12345(service-principal-id)' with object id '12345(service-principal-object-id)' has permission to perform action 'Microsoft.Compute/disks/write' on scope '/subscriptions/NEW-SUBSCRIPTION/resourceGroups/our-disks-rg/providers/Microsoft.Compute/disks/our-service-name'; however, it does not have permission to perform action 'Microsoft.Compute/disks/beginGetAccess/action' on the linked scope(s) '/subscriptions/OLD-SUBSCRIPTION/resourceGroups/NOT-EXISTING-DISKS-SNAPSHOTS-RG/providers/Microsoft.Compute/snapshots/our-service-name-snapshot' or the linked scope(s) are invalid."

Important Factoids

To understand the background of this issue, I have to explain the history of this disk. We created it months ago in another subscription (here called OLD-SUBSCRIPTION). It was created manually (not via Terraform) as a copy of a specific snapshot.
We then created two new cluster setups in the NEW-SUBSCRIPTION, and this disk should be attached to one of the clusters. Because the disk contains important live data, we simply moved it from the old subscription to the new one. The other cluster should get a fresh disk, so we configured create_option = "Empty" for it. We also wanted to manage the old disk with Terraform, so we added the ignore_changes entries for create_option and source_resource_id and imported the existing disk into our Terraform state with a terraform import command (roughly as sketched below). In short: both disks have essentially the same configuration in two cluster setups, both should be managed by Terraform, and the create option should be ignored by Terraform. The plan looks as expected, but during apply the source_resource_id stored in state apparently still gets sent to the Azure API as part of the update. This fails, of course, because the service principal has no permission in the old subscription AND the referenced resource group no longer exists.
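
For reference, the import step looked roughly like this (a sketch using the module address and disk ID from the plan output above; the subscription names are placeholders):

terraform import module.disks.azurerm_managed_disk.our_service_disk \
  /subscriptions/NEW-SUBSCRIPTION/resourceGroups/our-disks-rg/providers/Microsoft.Compute/disks/our-service-name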

If we remove the ignore_changes entries for create_option and source_resource_id, Terraform wants to recreate this "old" disk, which we want to avoid.

Steps to Reproduce

(I can only assume these steps to reproduce)

  1. Create a disk from a snapshot, manually in the Azure Portal
  2. Move it to another subscription
  3. Delete the resource group containing the source snapshot
  4. Add this disk to the Terraform project (configured with create_option = "Empty" and ignore_changes = [create_option, source_resource_id])
  5. Import the existing disk (in the new subscription) into the Terraform state
  6. Update some other attribute, like tags
  7. Run terraform apply
mpjtaylor commented 2 years ago

Was any resolution ever found for this?

I face the same issue: I create disks from snapshots and then attach the disks, but the snapshots are irrelevant beyond the disk creation, so I want Terraform to ignore them afterwards, even though they are referenced as a data source.

resource "azurerm_managed_disk" "os_disk" { count = var.migration ? 1 : 0

 name                                            = lower("${local.virtual_machine_name}_OSDisk")
  create_option                              = "Copy"
  public_network_access_enabled = false
  zone                                            = local.primary_zone
  resource_group_name                = data.azurerm_resource_group.virtualmachine.name
  location                                       = data.azurerm_resource_group.virtualmachine.location
  storage_account_type                 = var.vm_storage_account_type
  source_resource_id                     = data.azurerm_snapshot.os_disk_snapshot[0].id
  disk_size_gb                                = data.azurerm_snapshot.os_disk_snapshot[0].disk_size_gb
  hyper_v_generation                    = var.hyper_v_generation
}