Open JelleBroekhuijsen opened 1 year ago
Consider also that a hostname change could cause a recreation. It seems this module is designed only for creating F5 instances, not for managing their lifecycle. Perhaps it would be better to accept a map where you specify the hostname and machine type, because sometimes you only want to upgrade the size of a VM. Consider also managing extensions ("custom data") with `lifecycle`, because those are handled by DO or cloud-init.
For customisation, you may also need a `lifecycle` block on the image type, because with Terraform you have to import the whole configuration. With the normal procedure you could import only the .bin file, but Terraform sees a change and forces a replacement.
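One possible input shape for the per-instance map suggested above might look like this (a sketch only; the variable name `bigip_instances` and its fields are illustrative and not taken from the module's actual source):

```hcl
# Hypothetical variable: one entry per F5 instance, so that hostname
# and machine size can be changed per instance without the module
# recomputing unrelated attributes.
variable "bigip_instances" {
  type = map(object({
    hostname     = string
    machine_type = string
  }))
  default = {
    "bigip-01" = {
      hostname     = "bigip-01"
      machine_type = "Standard_DS3_v2"
    }
  }
}
```

With a shape like this, resizing a VM is a change to a single map value rather than a change that ripples through computed attributes.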
I am suggesting these changes to the module to stop redeployment of the VM caused by 'known after apply' behavior in Terraform.
We observed frequent destructive changes to the infrastructure when rerunning identical code. We suspect this is due to certain values not being predictable by Terraform, forcing recreation of the VM.
Values that were marked as 'known after apply' include `resource_group_name` and `location`, both of which were derived from the data source `azurerm_resource_group.bigiprg`. This data source seemed to serve no purpose other than to provide access to these mostly static values. Furthermore, `admin_password` and `custom_data` show similar behavior because they are sensitive values. To prevent these properties from causing a redeployment of the VM, I added them as `lifecycle` properties with `ignore_changes`.
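The fix is along these lines (a sketch; the resource type and name here are illustrative, and the actual resource address in the module may differ):

```hcl
resource "azurerm_linux_virtual_machine" "f5vm" {
  # ... existing arguments unchanged ...

  lifecycle {
    # Both attributes are sensitive, so Terraform reports them as
    # 'known after apply' on every plan and would otherwise force
    # a destroy-and-recreate of the VM.
    ignore_changes = [
      admin_password,
      custom_data,
    ]
  }
}
```

For `resource_group_name` and `location`, the complementary change is to pass them in as plain input variables instead of deriving them from the `azurerm_resource_group` data source, so they are known at plan time.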
I do see some merit in having `custom_data` changes trigger a redeploy, as the data in this block is not as static. However, it might be better to force the user to consciously remove the resource via `terraform destroy` or by deleting the VM; an update in place is not supported either way, and the downsides of the unneeded redeployments that these fixes remediate outweigh the upside of having a modification to `custom_data` trigger an automated redeploy of the entire VM.