josenk / terraform-provider-esxi

Terraform-provider-esxi plugin
GNU General Public License v3.0
544 stars 154 forks

newbie cloning issue - all VMs start blank #117

Closed bish0polis closed 4 years ago

bish0polis commented 4 years ago

Hey guys,

This, I expect, will be an easy one, and I promise I've agonized over it as much as my pride will allow before reaching out. The problem, simply enough: when I instantiate a new VM and ask it to clone from an existing dark (powered-off) VM's disk, the new VM still starts blank. It should work per the docs and common sense, so I'm clearly getting something wrong.

I'll append (what I think are the relevant parts of) my config below. Can you spot what I've missed?

Ask questions, heckle where it's funny, and help me out of this pit o' mild despair if you can? :-D

variable "lakevms" {
  type = map(object({
    clone_from_vm   = string
    boot_disk_size  = string
    disk_store      = string
    memsize         = string
    numvcpus        = string
    virtual_network = string
    run_list        = list(string)
    tags            = map(string)

  }))
  default = {
    "reflect" = {
      clone_from_vm   = "c7"
      boot_disk_size  = "20"
      disk_store      = "cloud2-ssd-3t-1"
      memsize         = "2048"
      numvcpus        = "2"
      virtual_network = "VM Network"
      run_list        = ["lin-base::_yum_repos", "lin-base", "snmp"]
      tags            = {}
    },

  }
}

resource "esxi_guest" "lake" {
  for_each = var.lakevms

  guest_name         = each.key
  disk_store         = each.value.disk_store
  power              = "on"

  memsize            = each.value.memsize
  numvcpus           = each.value.numvcpus
  resource_pool_name = "/"

  clone_from_vm      = each.value.clone_from_vm

  boot_disk_type     = "thin"
  boot_disk_size     = each.value.boot_disk_size

  network_interfaces {
    virtual_network  = each.value.virtual_network
    nic_type         = "vmxnet3"
  }

#  guest_startup_timeout  = 45
#  guest_shutdown_timeout = 30

  connection {
    host             = self.ip_address
    type             = "ssh"
    user             = var.ssh_admin_username
    password         = var.ssh_admin_password
    timeout          = "180s"
  }

  provisioner "remote-exec" {
    inline = [
      "echo success",  #  chef will be swapped in later
    ]
  }

}
josenk commented 4 years ago

Have you looked at the examples? The source VM (c7) must be powered off. I assume c7 is bootable and working...

What version of ovftool is installed?

Maybe post the output of `terraform plan` and `terraform apply`.
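For anyone hitting the same wall, the two checks above are quick to run. A minimal sketch, assuming you can SSH to the ESXi host and that `c7` is the source VM's name (the VM id shown below is a placeholder you must read from the `getallvms` output):

```shell
# On the ESXi host: confirm the source VM is powered off.
vim-cmd vmsvc/getallvms | grep -i c7   # note the Vmid in the first column
vim-cmd vmsvc/power.getstate <Vmid>    # should report "Powered off"

# On the machine running Terraform: confirm ovftool is present and runnable.
ovftool --version

# Capture a full debug log and look for the ovftool invocation in it.
TF_LOG=DEBUG terraform apply 2> tf-debug.log
grep -i ovftool tf-debug.log
```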

bish0polis commented 4 years ago

The c7 'source' machine is dark, yeah. And it's bootable ... I think. But I'm going to check that.

The debug output was key: I could see that it found ovftool but couldn't run it. I wasn't seeing an error; it would just stop, I guess, and the VM would start up with no copied disks.

I'm seeing something else now, and I'll open another ticket for that. It happily starts a copy, but dies at the 17% mark (no, the disks aren't full). But I'll check the fidelity of that c7 VM and make sure it isn't somehow bogus too.
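One way to check the source VM's fidelity independently of Terraform is to run ovftool by hand against it; if a manual export also stalls partway, the problem is in the VM or the host rather than the provider. A hedged sketch (host, user, and output path are placeholders, not from this thread):

```shell
# Export the powered-off source VM straight from the ESXi host to a local OVA.
# ovftool's vi:// locator takes esxi-user@esxi-host/vm-name; it prompts for a password.
ovftool "vi://root@esxi-host.example.com/c7" /tmp/c7-test.ova
```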

Thanks for the help, and (it seems silly now) for reminding me to get a debug log and pore over it.

SO:

(these things could be solved by an RPM, VMware; just sayin)