segator opened this issue 5 years ago
That would most likely not solve the actual problem here, deleting the VM. I've only seen this happen when the VM it tries to remove is powered on (pretty sure VMware disallows deleting a powered-on VM), and since VMs marked as templates can't be powered on, those won't be affected. I have also seen a number of cases where something does go wrong (mainly in the provisioning part) but Packer still considers it a successful build. You will still have a template to clone from, but it won't be the "correct" one, since it didn't run everything it needed to. This means that you should have some form of "versioning" of these VMs/templates going on outside of Packer.
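As a rough illustration of what I mean by versioning outside of Packer, the CI job itself can stamp the name it passes in. This just reuses the variables from the command quoted later in this thread and isn't anything Packer-specific; the timestamp format is only an example:

$ VM_NAME=Win2008R2-50G-DEV-$(date +%Y%m%d%H%M%S)
$ packer build -var-file $PACKER_VARFILE -var vSphere_user=$VSPHERE_USER -var vSphere_password=$VSPHERE_PASSWORD -var vm_name=$VM_NAME -var disk_size=$VM_DISK_SIZE -var vSphere_folder=DEV/BareTemplates -only=vsphere-iso ./windows.packer.json

Promoting the new template and cleaning up old versions would then be a separate step that only runs once the build is known to be good.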
Personally, I wouldn't mind an extra option for Packer to make sure the VM is powered off before recreating it. To me this would be useful when creating Linux templates, since sometimes there is (I guess) a timing issue with the boot command: the build errors out and leaves the VM in a powered-on state, and I'm then unable to re-create it (I have the job set to automatically re-run a few times if Packer exits with an error).
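In the meantime, something along these lines as a pre-step in the CI job should be enough to clean up the leftover VM before the retry. This is only a sketch, assuming govc is available, the usual GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD environment variables are set, and $VM_NAME is the same name Packer is going to use:

# power off and delete any leftover VM from a previous failed run; ignore errors if it doesn't exist
$ govc vm.power -off -force "$VM_NAME" || true
$ govc vm.destroy "$VM_NAME" || true

After that, packer build can run as usual.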
December 2019 and still relevant:
$ packer build -var-file $PACKER_VARFILE -var vSphere_user=$VSPHERE_USER -var vSphere_password=$VSPHERE_PASSWORD -var vm_name=$VM_NAME -var disk_size=$VM_DISK_SIZE -var vSphere_folder=DEV/BareTemplates -force -only=vsphere-iso ./windows.packer.json
vsphere-iso output will be in this color.
==> vsphere-iso: the vm/template Win2008R2-50G-DEV already exists, but deleting it due to -force flag
==> vsphere-iso: Creating VM...
Build 'vsphere-iso' errored: error creating vm: The name 'Win2008R2-50G-DEV' already exists.
==> Some builds didn't complete successfully and had errors:
--> vsphere-iso: error creating vm: The name 'Win2008R2-50G-DEV' already exists.
==> Builds finished but no artifacts were created.
This happens only when the VM is powered on (a common case if you have a CI runner and jobs get aborted by frequent commits and the like).
Creating time-stamped templates is not a solution for me, as I want a fully automated pipeline and I don't have infinite disk space for broken VMs. And if we explicitly say that we want to destroy an existing machine, why should it matter whether it is powered on or off?
@CosmoMyzrailGorynych As far as I can tell it's a limitation of VMware's API: it won't allow you to remove a VM while it's powered on. That said, doing a forced power-off first shouldn't be that big of a task.
I have the latest version of this plugin.
When I add -force to override the VM, it doesn't work.
vsphere-iso output will be in this color.
Anyway, since Packer is an environment builder, I don't think forcing the VM override as a first step is correct.
The way to go should be something like this:
Create the VM with a random name such as VM-$timestamp.
Then, in a post-processor, rename it to the final name and force-override the existing one (if it already exists). Imagine the case where you are using a template in production and something goes wrong in a Packer build: with the current behavior, the existing VM/template has already been destroyed.
If we do it the way I describe, it's safer, because you only replace the existing template at the end, once you know the build was OK.
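A rough sketch of that flow, done outside of Packer with govc since Packer itself doesn't do the rename/replace step today. The datacenter name and inventory path below are made up for the example, and govc object.rename needs a reasonably recent govc:

$ BUILD_NAME=VM-$(date +%Y%m%d%H%M%S)
$ packer build -var-file $PACKER_VARFILE -var vm_name=$BUILD_NAME -var vSphere_folder=DEV/BareTemplates -only=vsphere-iso ./windows.packer.json || exit 1
# only reached if the build succeeded: drop the old template and move the new VM into its place
$ govc vm.destroy "Win2008R2-50G-DEV" || true
$ govc object.rename "/MyDatacenter/vm/DEV/BareTemplates/$BUILD_NAME" "Win2008R2-50G-DEV"
# optionally keep the result as a template again
$ govc vm.markastemplate "Win2008R2-50G-DEV"

This way a failed build never touches the template that production clones from.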