hashicorp / terraform-provider-azurerm

Terraform provider for Azure Resource Manager
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
Mozilla Public License 2.0

Please allow data disks to be added to existing machines in inventory in-line without destroy/recreate #582

Closed. jstewart612 closed this issue 6 years ago.

jstewart612 commented 7 years ago

The Azure Resource Manager control panel lets you attach a data disk without destroying the machine, so why can't Terraform?

nbering commented 7 years ago

I was able to do this smoothly in past releases, though I admit I probably haven't tried since Terraform 0.9.11. What version are you using? Can you give an example of the configuration, the specifics of how you changed it, and the terraform plan output (with secrets removed)? This not only helps with diagnostics, but also helps other users looking at this later to determine if the issue they're discussing is the same one you're facing.

jstewart612 commented 7 years ago

main.tf

# Subscription-wide values
variable "client_id"                    {}
variable "client_secret"                {}
variable "subscription_id"              {}
variable "tenant_id"                    {}

# Terraform Remote State values
variable "storage_account_name"         {}
variable "container_name"               {}
variable "key"                          {}

# Data Center and Environment Options
variable "location"                     {}
variable "resource_group_name"          {}

# Availability Set Options
variable "platform_update_domain_count" {}
variable "platform_fault_domain_count"  {}
variable "managed"                      {}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
}

module "rentpath-appgw" {
  source = "modules/rentpath-appgw"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

#module "haproxy-int" {
#  source = "modules/haproxy-int"

  # Subscription-wide values
#  subscription_id     = "${var.subscription_id}"
#  client_id           = "${var.client_id}"
#  client_secret       = "${var.client_secret}"
#  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
#  location            = "${var.location}"
#  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
#  platform_update_domain_count = "${var.platform_update_domain_count}"
#  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
#  managed                      = "${var.managed}"
#}

module "nsmaster" {
  source = "modules/nsmaster"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "nsquery" {
  source = "modules/nsquery"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "puppet-master" {
  source = "modules/puppet-master"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "nats-int" {
  source = "modules/nats-int"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "postfix" {
  source = "modules/postfix"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "ipa" {
  source = "modules/ipa"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "spacewalk" {
  source = "modules/spacewalk"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "mgmt" {
  source = "modules/mgmt"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "foreman" {
  source = "modules/foreman"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "puppet-db" {
  source = "modules/puppet-db"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "pgsql-infra" {
  source = "modules/pgsql-infra"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "puppet-ca" {
  source = "modules/puppet-ca"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "ag-webjs" {
  source = "modules/ag-webjs"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "nagios" {
  source = "modules/nagios"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "splunk-deploy" {
  source = "modules/splunk-deploy"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "splunk-forward" {
  source = "modules/splunk-forward"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "splunk-index" {
  source = "modules/splunk-index"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "splunk-master" {
  source = "modules/splunk-master"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "splunk-search" {
  source = "modules/splunk-search"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "repo" {
  source = "modules/repo"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "consul" {
  source = "modules/consul"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "statsd" {
  source = "modules/statsd"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

module "manageiq" {
  source = "modules/manageiq"

  # Subscription-wide values
  subscription_id     = "${var.subscription_id}"
  client_id           = "${var.client_id}"
  client_secret       = "${var.client_secret}"
  tenant_id           = "${var.tenant_id}"

  # Data Center and Environment Options
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Availability Set options
  platform_update_domain_count = "${var.platform_update_domain_count}"
  platform_fault_domain_count  = "${var.platform_fault_domain_count}"
  managed                      = "${var.managed}"
}

Now let's say I want to add a new disk to modules/manageiq/main.tf

It currently reads:

# Subscription wide variables - set in main.tf of parent environment branch
variable "client_id" {}
variable "client_secret" {}
variable "location" {}
variable "resource_group_name" {}
variable "subscription_id" {}
variable "tenant_id" {}

# Availability set variables - set in main.tf of parent environment branch
variable "platform_update_domain_count" {}
variable "platform_fault_domain_count" {}
variable "managed" {}

# Create Availability Set
resource "azurerm_availability_set" "ine2-as-manageiq" {
    name                         = "ine2-as-manageiq"
    location                     = "${var.location}"
    resource_group_name          = "${var.resource_group_name}"
    platform_update_domain_count = "${var.platform_update_domain_count}"
    platform_fault_domain_count  = "${var.platform_fault_domain_count}"
    managed                      = "${var.managed}"
}

# Create Azure Load Balancer
resource "azurerm_lb" "ine2-lb-manageiq" {
    name                = "ine2-lb-manageiq"
    location            = "${var.location}"
    resource_group_name = "${var.resource_group_name}"

    frontend_ip_configuration {
        name                          = "ine2-lb-manageiq-frontend"
        subnet_id                     = "/subscriptions/${var.subscription_id}/resourceGroups/${var.resource_group_name}/providers/Microsoft.Network/virtualNetworks/US_East2_${var.resource_group_name}_172.28.96.0-19/subnets/US_East_2_${var.resource_group_name}_Production_Management_VIP"
        private_ip_address_allocation = "static"
        private_ip_address            = "172.28.106.7"
    }

}

# Terraform does not yet let you make a backend pointing to an availability set
# These are placeholder blocks, commented out until it does
# https://github.com/terraform-providers/terraform-provider-azurerm/issues/63
#resource "azurerm_lb_backend_address_pool" "ine2-be-manageiq" {
#    resource_group_name = "${var.resource_group_name}"
#    loadbalancer_id     = "${azurerm_lb.ine2-lb-manageiq.id}"
#    name                = "ine2-be-manageiq"
#}

resource "azurerm_lb_probe" "ine2-pr-manageiq-443" {
    resource_group_name = "${var.resource_group_name}"
    loadbalancer_id     = "${azurerm_lb.ine2-lb-manageiq.id}"
    name                = "ine2-pr-manageiq-443"
    port                = 443
    protocol            = "Tcp"
    interval_in_seconds = 5
    number_of_probes    = 2
}

resource "azurerm_lb_rule" "ine2-ru-manageiq-443" {
    resource_group_name            = "${var.resource_group_name}"
    loadbalancer_id                = "${azurerm_lb.ine2-lb-manageiq.id}"
    name                           = "ine2-ru-manageiq-443"
    protocol                       = "Tcp"
    frontend_port                  = 443
    backend_port                   = 443
    frontend_ip_configuration_name = "${azurerm_lb.ine2-lb-manageiq.frontend_ip_configuration.0.name}"
    probe_id                       = "${azurerm_lb_probe.ine2-pr-manageiq-443.id}"
}

# Create network interface
resource "azurerm_network_interface" "ine2-ni-manageiq-eth0" {
    count               = 2
    name                = "ine2-ni-manageiq${format("%03d", count.index + 1)}-eth0"
    location            = "${var.location}"
    resource_group_name = "${var.resource_group_name}"

    ip_configuration {
        name                          = "ine2-ni-manageiq${format("%03d", count.index + 1)}-eth0-config"
        subnet_id                     = "/subscriptions/${var.subscription_id}/resourceGroups/${var.resource_group_name}/providers/Microsoft.Network/virtualNetworks/US_East2_${var.resource_group_name}_172.28.96.0-19/subnets/US_East_2_${var.resource_group_name}_Production_Management"
        private_ip_address_allocation = "static"
        private_ip_address            = "172.28.107.${150 + count.index}"
    }
}

# Create virtual machine
resource "azurerm_virtual_machine" "ine2-vm-manageiq" {
    count                 = 2
    name                  = "ine2-vm-manageiq${format("%03d", count.index + 1)}"
    location              = "${var.location}"
    resource_group_name   = "${var.resource_group_name}"
    network_interface_ids = ["/subscriptions/${var.subscription_id}/resourceGroups/${var.resource_group_name}/providers/Microsoft.Network/networkInterfaces/ine2-ni-manageiq${format("%03d", count.index + 1)}-eth0"]
    vm_size               = "Standard_f4s"
    availability_set_id   = "${azurerm_availability_set.ine2-as-manageiq.id}"

    delete_os_disk_on_termination = true
    delete_data_disks_on_termination = true

    storage_os_disk {
        name              = "ine2-di-manageiq${format("%03d", count.index + 1)}-os"
        caching           = "ReadWrite"
        create_option     = "FromImage"
        managed_disk_type = "Premium_LRS"
        os_type           = "linux"
    }

    storage_image_reference {
        id = "/subscriptions/${var.subscription_id}/resourceGroups/${var.resource_group_name}/providers/Microsoft.Compute/images/ine2-im-linux"
    }

    os_profile {
        computer_name  = "manageiq${format("%03d", count.index + 1)}.useast2.rentpath.com"
        admin_username = "rentpath"
        admin_password = "2864x%M78f3]%3}4*6f]k.C+"
    }

    os_profile_linux_config {
        disable_password_authentication = false
    }

    boot_diagnostics {
        enabled = "true"
        storage_uri = "https://linuxuseast2.blob.core.windows.net/"
    }

    tags {
        foreman_group_id = "42"
    }

}

Let's now add the following section:

    storage_data_disk {
        name              = "ine2-di-manageiq${format("%03d", count.index + 1)}-data0"
        caching           = "ReadWrite"
        create_option     = "Empty"
        managed_disk_type = "Premium_LRS"
        lun               = 1
        disk_size_gb      = 250
    }

The terraform plan now reads as follows:

...
-/+ module.manageiq.azurerm_virtual_machine.ine2-vm-manageiq[0] (new resource required)
      id:                                                                 "/subscriptions/.../resourceGroups/Linux/providers/Microsoft.Compute/virtualMachines/ine2-vm-manageiq001" => <computed> (forces new resource)
...
      storage_data_disk.#:                                                "0" => "1"
      storage_data_disk.0.caching:                                        "" => "ReadWrite"
      storage_data_disk.0.create_option:                                  "" => "Empty" (forces new resource)
      storage_data_disk.0.disk_size_gb:                                   "" => "250"
      storage_data_disk.0.lun:                                            "" => "1"
      storage_data_disk.0.managed_disk_id:                                "" => <computed>
      storage_data_disk.0.managed_disk_type:                              "" => "Premium_LRS"
      storage_data_disk.0.name:                                           "" => "ine2-di-manageiq001-data0"
...

Here are my versions:

jstewart@mgmt001 ~/terraform [useast2.rentpath] $ terraform -version
Terraform v0.11.0
+ provider.azurerm v0.3.2

jstewart@mgmt001 ~/terraform [useast2.rentpath] $

So, have we checked all the boxes? Terraform wants to destroy and recreate entire instances when I add data disks. Why? Azure Resource Manager doesn't force this on me.

tombuildsstuff commented 6 years ago

hey @jstewart612

Thanks for opening this issue

To provide an update here: digging into this, the behaviour came from #218, which marked the storage_data_disk create_option field as ForceNew, given that Azure returns an error if you attempt to change it on an existing disk.

Whilst that solution worked for that use case, it's clearly not ideal and we need a better solution for this field. We should be able to error only when Azure says the change is invalid (as in the example below), but that requires some time and thought. Until then, perhaps it's worth removing ForceNew from this field, given that Azure returns the error anyway? In either case this needs some investigation to determine how best to proceed, IMO.

Thanks!

nbering commented 6 years ago

I guess the workaround if you remove ForceNew would be to manually taint the resource if you encounter a change that Azure refuses to apply?

jstewart612 commented 6 years ago

Oh, I see... I just read #240. This happens because the API started throwing an error at you. So weird of a provider to have their GUI behave differently and hide a deficiency of their API... or maybe not ;)

Interested to see how this will turn out. Thanks for the updates @tombuildsstuff and @nbering!

tombuildsstuff commented 6 years ago

I guess the workaround if you remove ForceNew would be to manually taint the resource if you encounter a change that Azure refuses to apply?

@nbering Probably, however I think we should try to identify and document those workflows on the VM resource page, rather than leaving it open-ended... what do you think?

nbering commented 6 years ago

Ya... that was my thought when I saw your proposal to remove ForceNew. That unfortunately leaves some people in a state where it becomes difficult to know what to do in order to recover from the failed apply.

tanner-bruce commented 6 years ago

Forgive my ignorance, but could this somehow be done similarly to the AWS approach, where there is an aws_volume_attachment resource? That would also save us from having to specify a ton of storage_data_disk blocks (unless there is a way around that I haven't found yet?), and the UI implies to me this is possible.
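For reference, the AWS pattern being referred to keeps the volume and its attachment as standalone resources, so neither one modifies the instance. A minimal sketch, assuming an aws_instance.example defined elsewhere (all names and values are illustrative):

resource "aws_ebs_volume" "data0" {
    # The disk exists independently of any instance
    availability_zone = "us-east-1a"
    size              = 250
}

resource "aws_volume_attachment" "data0" {
    # The attachment is its own API entity linking the volume to the instance
    device_name = "/dev/sdf"
    volume_id   = "${aws_ebs_volume.data0.id}"
    instance_id = "${aws_instance.example.id}"
}

Adding or removing the aws_volume_attachment only creates or destroys the attachment; the instance itself is untouched.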

nbering commented 6 years ago

@tanner-bruce As far as I know, the volume_attachment resource for AWS actually maps to an entity in the AWS API, whereas in Azure the disks attached to a VM are a property of the VM. One could maybe create such a resource, but it would be a construct of the Terraform provider, not the Azure API, and that can cause weird inconsistencies in behaviour.

Just my take, but I'd guess it might not work because, for example, changing the Blob Storage URL of an unmanaged disk is actually a ForceNew action on the VM. If that property lived on a fabricated extra resource, Terraform Core wouldn't know the VM needs to be recreated for that apply.
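To make that example concrete: for an unmanaged disk, the backing VHD URL is declared inline on the VM itself, so there is no separate resource that could carry that ForceNew. An abridged sketch (the storage account and names are illustrative, and the VM's other arguments are omitted):

resource "azurerm_virtual_machine" "example" {
    # ... name, location, vm_size, NICs, OS disk, os_profile, etc. ...

    storage_data_disk {
        name          = "example-data0"
        # For unmanaged disks the blob URL is a property of the VM resource
        vhd_uri       = "https://examplestorage.blob.core.windows.net/vhds/example-data0.vhd"
        create_option = "Empty"
        lun           = 0
        disk_size_gb  = 100
    }
}

A change to vhd_uri is therefore a change to the VM resource, which is exactly where Terraform Core needs to see it.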

ms1111 commented 6 years ago

Did anyone find a workaround to add an unmanaged storage volume to an existing VM without blowing away the VM?

      storage_data_disk.2.create_option:                                "" => "Empty" (forces new resource)

tanner-bruce commented 6 years ago

Your only option is to create them alongside the VM and attach them manually. The volume_attachment construct (or any alternative, really) is dearly needed; having to recreate a VM to add a disk is ridiculous.

nonsense commented 6 years ago

@tanner-bruce if you do that and then try to increase the count of a specific VM resource type, Terraform marks the existing VMs for deletion (since they now have disks attached to them). Any workaround to fix that?

jzampieron commented 6 years ago

This is actually really bad, because you can't even force Terraform to create the disks and then attach them out-of-band in the Azure portal: the portal uses the upper-case name for the resource group, so the IDs will never match.

IMHO that's just the Azure portal being broken, and I'll raise a ticket with MSFT about it, because it's not reflective of how the API returns the resource group name.

jzampieron commented 6 years ago

I've opened a PR with a small change that at least lets folks work around the issue by creating the disks with azurerm_managed_disk and using the Attach option to attach them to the VM.

Essentially, the workflow (less than ideal, but workable) is to create a plan restricted to the creation of the azurerm_managed_disk resources, and then, once Terraform has created the disks, use the Azure portal to attach them.

This is not a long-term solution, but it does work.

Note that for this to work you must attach the disks in the portal in the same order in which you declare them in the Terraform code. I recommend ordering them by LUN, ascending, for clarity.
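A minimal sketch of the disk-only step of that workflow, reusing the naming from the manageiq example above (the values are illustrative):

resource "azurerm_managed_disk" "ine2-di-manageiq-data0" {
    count                = 2
    # One standalone managed disk per VM; attach them in the portal in LUN order
    name                 = "ine2-di-manageiq${format("%03d", count.index + 1)}-data0"
    location             = "${var.location}"
    resource_group_name  = "${var.resource_group_name}"
    storage_account_type = "Premium_LRS"
    create_option        = "Empty"
    disk_size_gb         = 250
}

Once an apply restricted to these resources (for example via -target) has created the disks, they can be attached to the VMs through the portal.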

jzampieron commented 6 years ago

Regardless of the long-term solution here, the PR's change to set DiffSuppressFunc: ignoreCaseDiffSuppressFunc is correct anyway, because Azure can return a different case for the resourceGroup segment of the managed_disk_id URL.

Azure (and the Azure RM portal) appears to treat these as case-insensitive, and the azurerm provider should as well.

jzampieron commented 6 years ago

Another interesting tidbit, and I have no idea of the proper place to document this, is that changing the cache setting on a data disk is a disruptive operation: it causes the VM to lose access to the disk for some period of time. It's almost like a detach/attach operation, but it's hard to tell.
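For clarity, the disruptive change in question is just the caching argument on an existing data disk block, e.g. reusing the block from earlier in this thread:

    storage_data_disk {
        name              = "ine2-di-manageiq${format("%03d", count.index + 1)}-data0"
        caching           = "None"        # was "ReadWrite"; flipping this briefly interrupts disk access
        create_option     = "Empty"
        managed_disk_type = "Premium_LRS"
        lun               = 1
        disk_size_gb      = 250
    }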

VaijanathB commented 6 years ago

This is being fixed in this PR https://github.com/terraform-providers/terraform-provider-azurerm/pull/813

achandmsft commented 6 years ago

@VaijanathB As this issue is fixed in https://github.com/terraform-providers/terraform-provider-azurerm/pull/813, could you please verify and close it? @jstewart612, this should be fixed in v1.1.2 of the provider.
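To make sure an existing configuration picks up the fixed provider, Terraform 0.11 allows pinning the provider version in the provider block (a sketch; the credentials match the configuration earlier in the thread):

provider "azurerm" {
  version         = ">= 1.1.2"   # contains the fix from PR #813
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
}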

achandmsft commented 6 years ago

Verified that this is closed. @jstewart612 please confirm, else reopen.

jstewart612 commented 6 years ago

Works like a charm... thank you all for pushing through on this!

ghost commented 4 years ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!