josenk / terraform-provider-esxi

Terraform-provider-esxi plugin

for_each clobbers esxi guest #127

Closed. p0rkjello closed this issue 3 years ago.

p0rkjello commented 3 years ago

I am attempting to create three ESXi guests. The plan looks OK, but apply produces only one working system.

main.tf

locals {
  guests = {
    "k3s_server_0" = { mac_address = "00:50:56:a3:b1:c2" },
    "k3s_worker_1" = { mac_address = "00:50:56:a3:b1:c3" },
    "k3s_worker_2" = { mac_address = "00:50:56:a3:b1:c4" }
  }
}

resource "esxi_guest" "guests" {
  for_each   = local.guests

  guest_name = each.key
  disk_store = var.disk_store
  guestos    = "ubuntu-64"

  boot_disk_type = "thin"
  boot_disk_size = "35"

  memsize            = "1024"
  numvcpus           = "2"
  power              = "on"

  network_interfaces {
    virtual_network = var.vm_network
    mac_address     = each.value.mac_address
    nic_type        = "e1000"
  }
}

plan

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # esxi_guest.guests["k3s_server_0"] will be created
  + resource "esxi_guest" "guests" {
      + boot_disk_size         = "35"
      + boot_disk_type         = "thin"
      + disk_store             = "SSD_01"
      + guest_name             = "k3s_server_0"
      + guest_shutdown_timeout = 30
      + guest_startup_timeout  = 45
      + guestos                = "ubuntu-64"
      + id                     = (known after apply)
      + ip_address             = (known after apply)
      + memsize                = "1024"
      + notes                  = (known after apply)
      + numvcpus               = "2"
      + ovf_properties_timer   = (known after apply)
      + ovf_source             = "https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.ova"
      + power                  = "on"
      + resource_pool_name     = "/"
      + virthwver              = (known after apply)

      + network_interfaces {
          + mac_address     = "00:50:56:a3:b1:c2"
          + nic_type        = "e1000"
          + virtual_network = "LAN"
        }
    }

  # esxi_guest.guests["k3s_worker_1"] will be created
  + resource "esxi_guest" "guests" {
      + boot_disk_size         = "35"
      + boot_disk_type         = "thin"
      + disk_store             = "SSD_01"
      + guest_name             = "k3s_worker_1"
      + guest_shutdown_timeout = 30
      + guest_startup_timeout  = 45
      + guestos                = "ubuntu-64"
      + id                     = (known after apply)
      + ip_address             = (known after apply)
      + memsize                = "1024"
      + notes                  = (known after apply)
      + numvcpus               = "2"
      + ovf_properties_timer   = (known after apply)
      + ovf_source             = "https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.ova"
      + power                  = "on"
      + resource_pool_name     = "/"
      + virthwver              = (known after apply)

      + network_interfaces {
          + mac_address     = "00:50:56:a3:b1:c3"
          + nic_type        = "e1000"
          + virtual_network = "LAN"
        }
    }

  # esxi_guest.guests["k3s_worker_2"] will be created
  + resource "esxi_guest" "guests" {
      + boot_disk_size         = "35"
      + boot_disk_type         = "thin"
      + disk_store             = "SSD_01"
      + guest_name             = "k3s_worker_2"
      + guest_shutdown_timeout = 30
      + guest_startup_timeout  = 45
      + guestos                = "ubuntu-64"
      + id                     = (known after apply)
      + ip_address             = (known after apply)
      + memsize                = "1024"
      + notes                  = (known after apply)
      + numvcpus               = "2"
      + ovf_properties_timer   = (known after apply)
      + ovf_source             = "https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.ova"
      + power                  = "on"
      + resource_pool_name     = "/"
      + virthwver              = (known after apply)

      + network_interfaces {
          + mac_address     = "00:50:56:a3:b1:c4"
          + nic_type        = "e1000"
          + virtual_network = "LAN"
        }
    }

Plan: 3 to add, 0 to change, 0 to destroy.

apply

Plan: 3 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

esxi_guest.guests["k3s_worker_2"]: Creating...
esxi_guest.guests["k3s_server_0"]: Creating...
esxi_guest.guests["k3s_worker_1"]: Creating...

...

Error: Fault cause: vmodl.fault.ManagedObjectNotFound

Error: Reached maximum wait time of 5 minutes when aborting.
Completed with errors

exit status 1

  on main.tf line 32, in resource "esxi_guest" "guests":
  32: resource "esxi_guest" "guests" {

[screenshots: esxi_twoworkers, esxi_datastore]

josenk commented 3 years ago

I wasn't able to reproduce your issue. BTW: the supplied main.tf doesn't seem complete... For example, it doesn't include the ovf_source that your plan output shows is set. Maybe there are other things in there causing your issue?

However, maybe this is your issue: notice you have both a k3s_worker_2 and a k3s_worker_2_1... ovftool will sometimes do this when the directory already exists with a VM in it. If it's not needed, you should remove or rename that directory, or put your new VMs on a different disk store.
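
For reference, a quick way to check for leftover guest directories is to SSH to the ESXi host and list the datastore (the SSD_01 path below is an assumption based on your disk_store; adjust to your setup):

# List VM directories on the datastore; duplicates show up as k3s_worker_2_1, etc.
ls -l /vmfs/volumes/SSD_01/

# Only if a leftover directory is NOT registered to any VM, remove it, e.g.:
# rm -rf /vmfs/volumes/SSD_01/k3s_worker_2_1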

Otherwise, you may want to enable Terraform debugging with export TF_LOG=DEBUG, then run your apply again.
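
For example (the variable name is the same on every platform; only the shell syntax differs):

export TF_LOG=DEBUG       # Linux/macOS
set TF_LOG=DEBUG          # Windows cmd.exe
$env:TF_LOG = "DEBUG"     # Windows PowerShell
terraform apply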

p0rkjello commented 3 years ago

It is writing over the same directory in the datastore. I removed the directory completely after each attempt. Here are the full configs.

variables.tf

#
#  See https://www.terraform.io/intro/getting-started/variables.html for more details.
#

#  Change these defaults to fit your needs!

variable "esxi_hostname" { default = "esxi" }

variable "esxi_hostport" { default = "22" }

variable "esxi_hostssl" { default = "443" }

variable "esxi_username" { default = "root" }
variable "esxi_password" {} # Unspecified will prompt
variable "vm_network" { default = "LAN" }
variable "disk_store" { default = "ssd_01" }

variable "ovf_file" {
  description = "ovf files or URLs to use as a source. Mutually exclusive with clone_from_vm option."
  type        = string
  default     = "https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.ova"
}

main.tf

#########################################
#  ESXI Provider host/login details
#########################################

provider "esxi" {
  esxi_hostname  = var.esxi_hostname
  esxi_hostport  = var.esxi_hostport
  esxi_hostssl   = var.esxi_hostssl
  esxi_username  = var.esxi_username
  esxi_password  = var.esxi_password
}

# Template for initial configuration bash script
# template_file is a great way to pass variables to cloud-init

data "template_file" "userdata_default" {
  template = file("userdata.tpl")
}

#########################################
#  ESXI Guest resource
#########################################

locals {
  guests = {
    "k3s_server_0" = { mac_address = "00:50:56:a3:b1:c2" },
    "k3s_worker_1" = { mac_address = "00:50:56:a3:b1:c3" },
    "k3s_worker_2" = { mac_address = "00:50:56:a3:b1:c4" }
  }
}

resource "esxi_guest" "guests" {
  for_each   = local.guests

  guest_name = each.key
  disk_store = var.disk_store
  guestos    = "ubuntu-64"

  boot_disk_type = "thin"
  boot_disk_size = "35"

  memsize            = "1024"
  numvcpus           = "2"
  power              = "on"

  guestinfo = {
    "userdata.encoding" = "gzip+base64"
    "userdata"          = base64gzip(data.template_file.userdata_default.rendered)
  }

  #  Specify an ovf file to use as a source.
  ovf_source = var.ovf_file

  ovf_properties {
    key   = "password"
    value = "labadmin"
  }

  ovf_properties {
    key   = "hostname"
    value = each.key
  }

  ovf_properties {
    key   = "user-data"
    value = base64encode(data.template_file.userdata_default.rendered)
  }

  network_interfaces {
    virtual_network = var.vm_network
    mac_address     = each.value.mac_address
    nic_type        = "e1000"
  }

  guest_startup_timeout  = 45
  guest_shutdown_timeout = 30
}
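
As an aside, on Terraform 0.12+ the built-in templatefile() function can replace the template_file data source. This is unrelated to the bug here; just a sketch of the modern form:

locals {
  userdata_default = templatefile("${path.module}/userdata.tpl", {})
}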

userdata.tpl

#cloud-config

users:
  - name: p0rkjello
    gecos: p0rkjello
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    ssh_import_id: None
    lock_passwd: false
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCaoRdDjNzfXQIPzX+iEOx5Yf95QzpTTf4sA9oaQAFYbxb0TTf6Fe6cA00QhgtjTTuXGoANXM6Xo53I+GM9xQwicCZXBA7rQmm10XJHuN8TMmhEy1BjeVmT9rUYt6R4h6MVjAhgi8xpnzoxtb4+YFCCNLeNGfihI6e0k4UzmpVHE03znu4ga7tGqiDAl+PRZJtzOOqI57/MQeBx/6Buft+AKixSGMJ3pudffTln5UB8SbFLM4XqxcMMBLgidlcy1oNkksWmh3koP1IEt6l0y1RE+cAZg9x/is7u3n3I7EAbl5wqnLeFGe5h+HMd9ruDG56PTbA2lzkZ5D86EFLA3nT6Ycq1w+WPLzufCGgW/6kzosPtItRtVp0qGeXzZSE98rM7uMoal8Y/ZzdxFbCnbkWTqGRhT4jPu9FfA+74XVnydhg2WSaL9B5xg26Dj0GnTPq/gPzzslspJBcW6lqpzyuUNirqn5AFlwFj0O4gtZjWiL4YBDEmQGWTEZZtVKKl7y8= p0rkjello@x18

packages:
 - ntp
 - ntpdate
 - curl

# Override ntp with chrony configuration on Ubuntu
ntp:
  enabled: true
  ntp_client: chrony  # Uses cloud-init default chrony configuration

runcmd:
  - date >/root/cloudinit.log
  - echo 'HELLO' >>/root/cloudinit.log
  - echo "Done cloud-init" >>/root/cloudinit.log

p0rkjello commented 3 years ago

Is it not possible to create multiple guests?

I have removed the for_each and created three separate resource blocks instead. When run, it does the same thing as above: it generates VM guests with the same name and tries to write to the same directory in the datastore.
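
For reference, the unrolled version looked roughly like this (a sketch; each block carried the same arguments as the for_each version above):

resource "esxi_guest" "k3s_server_0" {
  guest_name = "k3s_server_0"
  disk_store = var.disk_store
  # ... same remaining arguments as the for_each version ...

  network_interfaces {
    virtual_network = var.vm_network
    mac_address     = "00:50:56:a3:b1:c2"
  }
}

# ...plus matching blocks for k3s_worker_1 and k3s_worker_2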

josenk commented 3 years ago

Try starting with the simplest possible build, then work your way up through the options... Here is a bare-bones example.

main.tf

provider "esxi" {
  esxi_hostname = var.esxi_hostname
  esxi_username = var.esxi_username
  esxi_password = var.esxi_password
}

locals {
  guests = {
    "k3s_server_0" = { mac_address = "00:50:56:a3:b1:c2" },
    "k3s_worker_1" = { mac_address = "00:50:56:a3:b1:c3" },
    "k3s_worker_2" = { mac_address = "00:50:56:a3:b1:c4" }
  }
}

resource "esxi_guest" "vmtest01" {
  for_each   = local.guests

  guest_name = each.key
  disk_store = var.disk_store
  network_interfaces {
    virtual_network = var.vm_network
    mac_address = each.value.mac_address
  }
}

[root@q   terraform-test]# terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # esxi_guest.vmtest01["k3s_server_0"] will be created
  + resource "esxi_guest" "vmtest01" {
      + boot_disk_size         = (known after apply)
      + boot_disk_type         = "thin"
      + disk_store             = "DS_4TB"
      + guest_name             = "k3s_server_0"
      + guest_shutdown_timeout = (known after apply)
      + guest_startup_timeout  = (known after apply)
      + guestos                = (known after apply)
      + id                     = (known after apply)
      + ip_address             = (known after apply)
      + memsize                = (known after apply)
      + notes                  = (known after apply)
      + numvcpus               = (known after apply)
      + ovf_properties_timer   = (known after apply)
      + power                  = "on"
      + resource_pool_name     = (known after apply)
      + virthwver              = (known after apply)

      + network_interfaces {
          + mac_address     = "00:50:56:a3:b1:c2"
          + nic_type        = (known after apply)
          + virtual_network = "192.168.1"
        }
    }

  # esxi_guest.vmtest01["k3s_worker_1"] will be created
  + resource "esxi_guest" "vmtest01" {
      + boot_disk_size         = (known after apply)
      + boot_disk_type         = "thin"
      + disk_store             = "DS_4TB"
      + guest_name             = "k3s_worker_1"
      + guest_shutdown_timeout = (known after apply)
      + guest_startup_timeout  = (known after apply)
      + guestos                = (known after apply)
      + id                     = (known after apply)
      + ip_address             = (known after apply)
      + memsize                = (known after apply)
      + notes                  = (known after apply)
      + numvcpus               = (known after apply)
      + ovf_properties_timer   = (known after apply)
      + power                  = "on"
      + resource_pool_name     = (known after apply)
      + virthwver              = (known after apply)

      + network_interfaces {
          + mac_address     = "00:50:56:a3:b1:c3"
          + nic_type        = (known after apply)
          + virtual_network = "192.168.1"
        }
    }

  # esxi_guest.vmtest01["k3s_worker_2"] will be created
  + resource "esxi_guest" "vmtest01" {
      + boot_disk_size         = (known after apply)
      + boot_disk_type         = "thin"
      + disk_store             = "DS_4TB"
      + guest_name             = "k3s_worker_2"
      + guest_shutdown_timeout = (known after apply)
      + guest_startup_timeout  = (known after apply)
      + guestos                = (known after apply)
      + id                     = (known after apply)
      + ip_address             = (known after apply)
      + memsize                = (known after apply)
      + notes                  = (known after apply)
      + numvcpus               = (known after apply)
      + ovf_properties_timer   = (known after apply)
      + power                  = "on"
      + resource_pool_name     = (known after apply)
      + virthwver              = (known after apply)

      + network_interfaces {
          + mac_address     = "00:50:56:a3:b1:c4"
          + nic_type        = (known after apply)
          + virtual_network = "192.168.1"
        }
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

esxi_guest.vmtest01["k3s_worker_1"]: Creating...
esxi_guest.vmtest01["k3s_worker_2"]: Creating...
esxi_guest.vmtest01["k3s_server_0"]: Creating...
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [10s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [10s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [10s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [20s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [20s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [20s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [30s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [30s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [30s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [40s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [40s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [40s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [50s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [50s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [50s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [1m0s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [1m0s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [1m0s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [1m10s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [1m10s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [1m10s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [1m20s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [1m20s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [1m20s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [1m30s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [1m30s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [1m30s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [1m40s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [1m40s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [1m40s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [1m50s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [1m50s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [1m50s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [2m0s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [2m0s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [2m0s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [2m10s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [2m10s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [2m10s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Still creating... [2m20s elapsed]
esxi_guest.vmtest01["k3s_worker_2"]: Still creating... [2m20s elapsed]
esxi_guest.vmtest01["k3s_server_0"]: Still creating... [2m20s elapsed]
esxi_guest.vmtest01["k3s_worker_1"]: Creation complete after 2m25s [id=20]
esxi_guest.vmtest01["k3s_worker_2"]: Creation complete after 2m26s [id=19]
esxi_guest.vmtest01["k3s_server_0"]: Creation complete after 2m26s [id=21]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
[root@q   terraform-test]# terraform show
# esxi_guest.vmtest01["k3s_server_0"]:
resource "esxi_guest" "vmtest01" {
    boot_disk_type         = "thin"
    disk_store             = "DS_4TB"
    guest_name             = "k3s_server_0"
    guest_shutdown_timeout = 20
    guest_startup_timeout  = 120
    guestos                = "centos-64"
    id                     = "21"
    memsize                = "512"
    numvcpus               = "1"
    ovf_properties_timer   = 0
    power                  = "on"
    resource_pool_name     = "/"
    virthwver              = "8"

    network_interfaces {
        mac_address     = "00:50:56:a3:b1:c2"
        nic_type        = "e1000"
        virtual_network = "192.168.1"
    }
}

# esxi_guest.vmtest01["k3s_worker_1"]:
resource "esxi_guest" "vmtest01" {
    boot_disk_type         = "thin"
    disk_store             = "DS_4TB"
    guest_name             = "k3s_worker_1"
    guest_shutdown_timeout = 20
    guest_startup_timeout  = 120
    guestos                = "centos-64"
    id                     = "20"
    memsize                = "512"
    numvcpus               = "1"
    ovf_properties_timer   = 0
    power                  = "on"
    resource_pool_name     = "/"
    virthwver              = "8"

    network_interfaces {
        mac_address     = "00:50:56:a3:b1:c3"
        nic_type        = "e1000"
        virtual_network = "192.168.1"
    }
}

# esxi_guest.vmtest01["k3s_worker_2"]:
resource "esxi_guest" "vmtest01" {
    boot_disk_type         = "thin"
    disk_store             = "DS_4TB"
    guest_name             = "k3s_worker_2"
    guest_shutdown_timeout = 20
    guest_startup_timeout  = 120
    guestos                = "centos-64"
    id                     = "19"
    memsize                = "512"
    numvcpus               = "1"
    ovf_properties_timer   = 0
    power                  = "on"
    resource_pool_name     = "/"
    virthwver              = "8"

    network_interfaces {
        mac_address     = "00:50:56:a3:b1:c4"
        nic_type        = "e1000"
        virtual_network = "192.168.1"
    }
}

p0rkjello commented 3 years ago

I saw that you were running from a Linux system, so I applied my Terraform plan from an Ubuntu server. It worked correctly without any issue. The same plan fails when run from Windows 10.

Both systems have ovftool 4.0.1 and Terraform v0.13.4.

I would venture a guess that on Windows the batch file gets overwritten, so the same batch is pushed for each instance. That would explain why the VM guest names and the datastore directory are duplicated.
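
If that guess is right, the usual fix is to let the OS pick a unique temp-file name per invocation instead of reusing a fixed path. A minimal Go sketch of that pattern (illustrative only, not the provider's actual code):

package main

import (
	"io/ioutil"
	"os"
	"os/exec"
)

// runBatch writes the ovftool command line to a uniquely named temp
// batch file so concurrent guest creations can't clobber each other.
func runBatch(command string) error {
	// The '*' in the pattern is replaced with a random string,
	// e.g. ovftool123456789.bat, unique per call.
	f, err := ioutil.TempFile("", "ovftool*.bat")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name()) // best-effort cleanup; can fail on Windows (see below)

	if _, err := f.WriteString(command + "\r\n"); err != nil {
		f.Close()
		return err
	}
	f.Close()

	return exec.Command("cmd", "/C", f.Name()).Run()
}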

josenk commented 3 years ago

OK, that makes sense... I'll fix that in the next release.

josenk commented 3 years ago

It should be fixed in the latest release.

https://github.com/josenk/terraform-provider-esxi/releases/tag/v1.8.1

josenk commented 3 years ago

BTW: I believe Go is lazy about cleaning up its closed files. I tried many methods, but I could not delete the temp batch file that is created... If you use this heavily, you might want to clean out your temp directory (Windows only) every once in a while. They are very small files.
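
For anyone scripting that cleanup, something like the following PowerShell removes week-old .bat files from the user temp directory (the *.bat filter and the location are assumptions; check what the provider actually leaves in %TEMP% first):

Get-ChildItem $env:TEMP -Filter *.bat |
  Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-7) } |
  Remove-Item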