hashicorp / terraform-provider-azurerm

Terraform provider for Azure Resource Manager
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
Mozilla Public License 2.0

Can't migrate azurerm_virtual_machine with custom_data to azurerm_linux_virtual_machine #7234

Open · brownoxford opened this issue 4 years ago

brownoxford commented 4 years ago


Terraform (and AzureRM Provider) Version

Terraform v0.12.17

Affected Resource(s)

azurerm_virtual_machine
azurerm_linux_virtual_machine

Terraform Configuration Files

Old Config

resource "azurerm_virtual_machine" "this" {

  name                          = "${local.prefix}-vm"
  location                      = azurerm_resource_group.this.location
  resource_group_name           = azurerm_resource_group.this.name
  network_interface_ids         = [azurerm_network_interface.this.id]
  vm_size                       = "Standard_B2ms"
  delete_os_disk_on_termination = true

  boot_diagnostics {
    enabled     = true
    storage_uri = azurerm_storage_account.this.primary_blob_endpoint
  }

  storage_os_disk {
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
    name              = "${local.prefix}-vm-boot"
    os_type           = "Linux"
  }

  storage_image_reference {
    offer     = "UbuntuServer"
    publisher = "Canonical"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  os_profile {
    admin_username = "**REDACTED**"
    computer_name  = "${local.prefix}-vm"
    custom_data    = file("${path.module}/cloud-init.yaml")
  }

  os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
      path     = "/home/**REDACTED**/.ssh/authorized_keys"
      key_data = "**REDACTED**"
    }
  }
}

New Config

resource "azurerm_linux_virtual_machine" "this" {
  admin_username        = "**REDACTED**"
  custom_data           = filebase64("${path.module}/cloud-init.yaml")
  location              = azurerm_resource_group.this.location
  name                  = "${local.prefix}-vm"
  network_interface_ids = [azurerm_network_interface.this.id]
  resource_group_name   = azurerm_resource_group.this.name
  size                  = "Standard_B2ms"

  admin_ssh_key {
    public_key = "**REDACTED**"
    username   = "**REDACTED**"
  }

  boot_diagnostics {
    storage_account_uri = azurerm_storage_account.this.primary_blob_endpoint
  }

  os_disk {
    caching              = "ReadWrite"
    name                 = "${local.prefix}-vm-boot"
    storage_account_type = "Premium_LRS"
  }

  source_image_reference {
    offer     = "UbuntuServer"
    publisher = "Canonical"
    sku       = "18.04-LTS"
    version   = "latest"
  }
}

Expected Behavior

I expected terraform import to fully populate the state from the existing virtual machine, so that I could migrate from the legacy azurerm_virtual_machine to azurerm_linux_virtual_machine without having to destroy and re-create VM instances.

Actual Behavior

terraform import does not recover the existing os_profile.custom_data, so a subsequent plan or apply sees custom_data as new and triggers a destroy/recreate.

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.mgo-beta.azurerm_linux_virtual_machine.this must be replaced
-/+ resource "azurerm_linux_virtual_machine" "this" {
        admin_username                  = "**REDACTED**"
        allow_extension_operations      = true
      ~ computer_name                   = "mgo-beta-vm" -> (known after apply)
      + custom_data                     = (sensitive value)
      ...
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Steps to Reproduce

  1. Have an existing azurerm_virtual_machine with custom data specified in os_profile.custom_data.
  2. Create a new azurerm_linux_virtual_machine configuration using values translated from the existing azurerm_virtual_machine.
  3. Remove the old item from state with terraform state rm <your old azurerm_virtual_machine>.
  4. Import the existing virtual machine as an azurerm_linux_virtual_machine with terraform import ...
  5. Run terraform plan.
  6. Observe that the plan wants to destroy/recreate the VM (see the sketch after this list).
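
For reference, the full sequence looks roughly like the sketch below. The module path, subscription ID, resource group, and VM name are hypothetical placeholders, not values from this report:

# Remove the legacy resource from state (hypothetical address)
terraform state rm module.example.azurerm_virtual_machine.this

# Import the same Azure VM under the new resource type
terraform import module.example.azurerm_linux_virtual_machine.this \
  /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.Compute/virtualMachines/example-vm

# Review what Terraform now wants to change
terraform plan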

Important Factoids

N/A

References

N/A

ArcturusZhang commented 4 years ago

Hi @brownoxford

Thanks for opening this issue!

In Azure, the custom_data of a VM only takes effect during the VM's creation; it is never re-invoked afterwards, and as a consequence Azure does not return the custom_data of a VM once provisioning has succeeded. Another consideration is that custom_data may contain sensitive data, which is a further reason Azure does not return it.

Since Azure does not return the custom_data, Terraform cannot recover it when importing a VM into state and has to leave it empty. custom_data is also a ForceNew attribute, which is why you end up in the situation described in this issue.
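
One way to confirm the empty import is to inspect the resource right after importing it; custom_data will not be listed. A minimal check (the resource address is a placeholder):

# List the attributes terraform import recorded; custom_data should be
# absent because Azure never returned it
terraform state show azurerm_linux_virtual_machine.this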

Based on the nature of custom_data and Azure's behaviour here, there is really not much we can do from the provider side. To work around it, you could either add a

lifecycle {
  ignore_changes = [
    custom_data
  ]
}

inside the azurerm_linux_virtual_machine resource block, so that Terraform ignores changes to custom_data, or you could manually modify the state file to add the custom_data back in. A similar situation also arises with some password attributes: when Azure does not return those attributes, you will run into the same problem.
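
For the manual state edit, a sketch of one way to do it with the state subcommands (the filename is arbitrary, and the exact JSON layout may vary by provider version):

# Download the current state as JSON
terraform state pull > current.tfstate

# Edit current.tfstate: in the azurerm_linux_virtual_machine instance, set
# "custom_data" to the base64-encoded file contents (the new resource stores
# it base64-encoded, matching filebase64()), then increment the top-level
# "serial" field so the push is accepted.

# Upload the modified state
terraform state push current.tfstate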

kev-in-shu commented 4 years ago

Hi,

If custom_data is a ForceNew attribute, I would have expected it to be marked in the plan with "# forces replacement".

Especially because in the previous azurerm_virtual_machine resource, changes to custom_data did not have the same effect. It took me quite a while to understand that custom_data was the reason Terraform wanted to recreate my virtual machine.

ArcturusZhang commented 4 years ago

Hi @kev-in-shu, the reason Terraform does not directly tell you that custom_data is what forces the VM to be recreated is that custom_data is not only a ForceNew attribute but also a sensitive one; the "forces replacement" annotation is overwritten by the (sensitive value) annotation. That is a Terraform core issue rather than a provider issue.

lovelinuxalot commented 3 years ago

Hi,

I still face the same issue. I already have an azurerm_linux_virtual_machine resource, so the steps I followed are:

When I run terraform plan, the only change shown is for custom_data.