brownoxford opened this issue 4 years ago
Hi @brownoxford
Thanks for opening this issue!
In Azure, the `custom_data` of a VM only takes effect during the VM's creation; it is never re-invoked afterwards, and as a consequence Azure does not return the `custom_data` of a VM once provisioning has succeeded. Another consideration is that `custom_data` may contain sensitive data, which is a further reason Azure will not return it.

Since Azure does not return the `custom_data`, Terraform can do nothing when you import a VM into state except leave it empty. `custom_data` is also a `ForceNew` attribute, which is why you get the situation you described in this issue.

Given the nature of `custom_data` and Azure's behaviour here, there is really not much we can do from the provider side. To work around it, you could either add a
```hcl
lifecycle {
  ignore_changes = [
    custom_data,
  ]
}
```
to let Terraform ignore changes to `custom_data`, or you could manually modify the state file to add the `custom_data` back in. A similar situation also occurs with some password attributes: if Azure does not return those attributes, you will run into the same problem.
Hi,
If `custom_data` is a `ForceNew` attribute, I would have expected this to be marked in the plan with "# forces replacement", especially because in the previous `azurerm_virtual_machine` resource, changes to `custom_data` did not have the same effect. It took me quite a while to understand that `custom_data` was the reason Terraform wanted to recreate my virtual machine.
Hi @kev-in-shu, the reason Terraform is not directly telling you that it is `custom_data` that makes it want to recreate the VM is that `custom_data` is not only a `ForceNew` attribute but also a sensitive one, and the force-replacement notification is overwritten by the `(sensitive value)` notification. That is a Terraform core issue rather than a provider issue.
Hi,
I still face the same issue. I already have an `azurerm_linux_virtual_machine` resource, so the steps I followed are:

- `linux_vm`
- `linux_vm`

When I run `terraform plan`, I see changes only for `custom_data`.
### Terraform (and AzureRM Provider) Version

Terraform v0.12.17

### Affected Resource(s)

- `azurerm_virtual_machine`
- `azurerm_linux_virtual_machine`

### Terraform Configuration Files

Old Config

New Config
### Expected Behavior

I expected `terraform import` to import the existing configuration fully, so that I would be able to migrate from the legacy `azurerm_virtual_machine` to `azurerm_linux_virtual_machine` without having to destroy and re-create VM instances.

### Actual Behavior

`terraform import` does not seem to recognize the existing `os_profile.custom_data`, and a subsequent plan or apply views `custom_data` as new and triggers a destroy/recreate.

### Steps to Reproduce
1. Create an `azurerm_virtual_machine` with custom data specified in `os_profile.custom_data`.
2. Create an `azurerm_linux_virtual_machine` configuration using values translated from the existing `azurerm_virtual_machine`.
3. Run `terraform state rm <your old azurerm_virtual_machine>`.
4. Import the `azurerm_linux_virtual_machine` with `terraform import ...`.
5. Run `terraform plan`.
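The migration steps above can be sketched as the following command sequence; the resource addresses and the Azure resource ID are hypothetical placeholders, not values from this issue:

```shell
# Step 3: remove the legacy resource from state without destroying the VM
terraform state rm azurerm_virtual_machine.old_vm

# Step 4: import the same VM under the new resource type (ID is illustrative)
terraform import azurerm_linux_virtual_machine.new_vm \
  "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.Compute/virtualMachines/example-vm"

# Step 5: review the diff; custom_data shows as a change because Azure does not return it
terraform plan
```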
### Important Factoids

N/A

### References

N/A