hashicorp / terraform-provider-azurerm

Terraform provider for Azure Resource Manager
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
Mozilla Public License 2.0

azurerm_site_recovery_replicated_vm - On Import with Managed Disks Encrypted with PMK are "target_disk_encryption" and "target_disk_encryption_set_id" required? #23159

Open JD-Phx opened 1 year ago

JD-Phx commented 1 year ago

Terraform Version

1.5.5

AzureRM Provider Version

3.65

Affected Resource(s)/Data Source(s)

azurerm_site_recovery_replicated_vm

Terraform Configuration Files

resource "azurerm_site_recovery_replicated_vm" "vm-replication" {
  name                                      = "vm-replication"
  resource_group_name                       = azurerm_resource_group.secondary.name
  recovery_vault_name                       = azurerm_recovery_services_vault.vault.name
  source_recovery_fabric_name               = azurerm_site_recovery_fabric.primary.name
  source_vm_id                              = azurerm_virtual_machine.vm.id
  recovery_replication_policy_id            = azurerm_site_recovery_replication_policy.policy.id
  source_recovery_protection_container_name = azurerm_site_recovery_protection_container.primary.name

  target_resource_group_id                = azurerm_resource_group.secondary.id
  target_recovery_fabric_id               = azurerm_site_recovery_fabric.secondary.id
  target_recovery_protection_container_id = azurerm_site_recovery_protection_container.secondary.id

  managed_disk {
    disk_id                    = azurerm_virtual_machine.vm.storage_os_disk[0].managed_disk_id
    staging_storage_account_id = azurerm_storage_account.primary.id
    target_resource_group_id   = azurerm_resource_group.secondary.id
    target_disk_type           = "Premium_LRS"
    target_replica_disk_type   = "Premium_LRS"
  }

  network_interface {
    source_network_interface_id   = azurerm_network_interface.vm.id
    target_subnet_name            = azurerm_subnet.secondary.name
    recovery_public_ip_address_id = azurerm_public_ip.secondary.id
  }

  depends_on = [
    azurerm_site_recovery_protection_container_mapping.container-mapping,
    azurerm_site_recovery_network_mapping.network-mapping,
  ]
}

Debug Output/Panic Output

Error: Incorrect attribute value type

  on main.tf line XX, in resource "azurerm_site_recovery_replicated_vm" "vm-replication":
    managed_disk = [{
      disk_id                    = azurerm_virtual_machine.vm.storage_os_disk[0].managed_disk_id
      staging_storage_account_id = azurerm_storage_account.primary.id
      target_disk_type           = "Premium_LRS"
      target_replica_disk_type   = "Premium_LRS"
      target_resource_group_id   = azurerm_resource_group.secondary.id
    }]

Inappropriate value for attribute "managed_disk": element 0: attributes
"target_disk_encryption" and "target_disk_encryption_set_id" are required.

Expected Behaviour

The provider documentation shows that these elements should be optional: https://registry.terraform.io/providers/hashicorp/azurerm/3.65.0/docs/resources/site_recovery_replicated_vm#target_disk_encryption_set_id

So on a plan run, this should not error.

Actual Behaviour

This is an import that I'm attempting. I'm unsure whether the disks being encrypted with platform-managed keys (whose details cannot be accessed) is what causes these attributes to be flagged as required. Either way, this is only a plan, so they shouldn't be required; they should instead show as a change at the end.

Steps to Reproduce

No response

Important Factoids

No response

References

No response

ziyeqf commented 1 year ago

Hi @JD-Phx , thanks for opening this issue.

Could you please update your Terraform and azurerm provider versions, then give it another try with the environment variable TF_LOG=DEBUG? If it fails with the same error again, please provide the log. (The log can be redirected to a file with the TF_LOG_PATH variable. Here is the documentation about logging.)
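For example, the debug trace of the failing plan can be captured like this (the log file path is just an illustration; any writable path works):

```shell
# Write a debug-level trace of the plan to terraform-debug.log
TF_LOG=DEBUG TF_LOG_PATH=./terraform-debug.log terraform plan
```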

Thanks.

JD-Phx commented 1 year ago

I've updated the Terraform version and provider version to the latest, and there is no change in the output.

This is running in a live environment, so I will have to sanitise the debug file before uploading it. There are also a lot of resources within the environment, so I'll try to cut out the unnecessary ones. I may have to set up a dev environment with just the minimum resources to redeploy this resource and replicate the issue.

ziyeqf commented 1 year ago

Hey @JD-Phx

I tried to reproduce the error, but I couldn't. Could you please share some more details about how it happened? Here is my assumption; please correct me if it's wrong.

  1. create the replicated vm by other tools (Portal/AzureCli/...)
  2. add the config in the description to terraform config.
  3. run terraform import azurerm_site_recovery_replicated_vm.vm-replication ......
  4. run terraform plan <- where it errored out.
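In shell terms, the last two steps would look like this (the resource ID is a placeholder, since the real one was redacted above):

```shell
# 3. import the existing replicated VM into state (ID is a placeholder)
terraform import azurerm_site_recovery_replicated_vm.vm-replication <replication-protected-item-id>
# 4. plan against the freshly imported state <- where it errored out
terraform plan
```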

And, just to confirm: the log shows the error on example1 but the config uses vm-replication; was that caused by manual editing?

JD-Phx commented 1 year ago

Hi @ziyeqf

I appreciate you trying to replicate the issue for me. I'll see if I can put something together today to replicate it, as the client's main subscription is throwing a lot of these errors. Your assumption is pretty close:

  1. Resources were created by a 3rd party with unknown means, but assuming the portal.
  2. aztfexport tooling was used to extract config for resources, including the import block
  3. Run plan via CLI on a Terraform Cloud workspace. This is where it errors out.

It wouldn't be so bad if the plan ran and just told me what config it's going to change. Per the documentation, those are not required properties, so I should be able to run the plan without them, then extract the values and add them in. It seems the provider has these flagged as required, and I'm not sure they should be. If this resource were deployed with MS platform-managed keys, you wouldn't know these values, and there's nothing within the resource block to specify whether it's CMK or PMK.

I've also tried adding the disks via a data block and referencing the values that way, as they are unknown to me and cannot be accessed via the portal. Unfortunately, when the plan runs, it reports the values as blank/empty/null, which I suspect is a permissions issue, as the resource is actually MS managed.
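A minimal sketch of that data-block approach, assuming the `azurerm_managed_disk` data source was used (the disk name and resource group reference here are hypothetical):

```hcl
# Hypothetical: look up the source VM's OS disk to read its encryption details
data "azurerm_managed_disk" "os_disk" {
  name                = "vm-osdisk"                          # assumed disk name
  resource_group_name = azurerm_resource_group.primary.name  # assumed resource group
}

# The exported disk_encryption_set_id comes back empty/null here,
# which is the behaviour described above for platform-managed keys
output "os_disk_encryption_set_id" {
  value = data.azurerm_managed_disk.os_disk.disk_encryption_set_id
}
```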

Yes, apologies, that's my poor manual editing. The config is taken from the documentation page but matches what is being imported, the output is from the live environment, and I didn't match the names up properly. I've rectified that for future viewers.

ziyeqf commented 1 year ago

Hey @JD-Phx I have reproduced it successfully by using aztfexport. Please modify the managed_disk block from

managed_disk = [{
  disk_id = ...
  .....
}]

to

managed_disk {
  disk_id = ...
}

Then it should work.
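Spelled out against the original configuration at the top of this issue, the corrected block would read (values taken from that config):

```hcl
# managed_disk is a repeatable nested block, not a list-typed attribute,
# so there is no "=" and no surrounding [ ... ]
managed_disk {
  disk_id                    = azurerm_virtual_machine.vm.storage_os_disk[0].managed_disk_id
  staging_storage_account_id = azurerm_storage_account.primary.id
  target_resource_group_id   = azurerm_resource_group.secondary.id
  target_disk_type           = "Premium_LRS"
  target_replica_disk_type   = "Premium_LRS"
}
```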

For any further questions please leave comments. Thanks!

JD-Phx commented 1 year ago

Hi @ziyeqf

I tried updating the code with the [ ] removed, but I now get a different error:

│ Error: Incorrect attribute value type
│   5048: managed_disk = {
│   ## middle section removed due to sensitive data ##
│   5054: }
│ Inappropriate value for attribute "managed_disk": set of object required.

There appears to be an underscore squiggle (like you get with a spelling mistake) before the opening { in my VS Code terminal that hasn't copied across.

IGNORE THIS: I didn't remove the = sign!

JD-Phx commented 1 year ago

Thank you @ziyeqf!! That's amazing work, I'd never have worked that out! It's odd that Terraform returned an error that seems to make sense, when it was actually the format of the managed_disk block that was incorrect! I'll definitely keep an eye out for that one.