hashicorp / packer

Packer is a tool for creating identical machine images for multiple platforms from a single source configuration.
http://www.packer.io

packer 1.3.x "bios.hddorder" in vmx causing trouble with ovftool #6742

Closed: as-dg closed this issue 5 years ago

as-dg commented 6 years ago

Hi,

I've been using packer successfully up until version 1.2.x. Since packer version 1.3.0 there appears to be a change causing issues in my environment.

Host platforms on which I run packer:

- macOS 10.12.6: VMware Fusion Professional Version 8.5.10 (7527438)
- Ubuntu 16.04.2 LTS (4.4.0-62-generic): VMware Workstation Pro 14.1.1.7528167

So I build my virtual machines using the vmware-iso packer builder on those systems. Once completed, I deploy them to a vCenter/vSphere/ESXi environment via ovftool:

ovftool --name="my-machine" --datastore="myDataStore" myVirtualMachine.vmx "vi://user@domain.local@vcenter-01.domain.local/Datacenter/host/Cluster"

This would then usually proceed and display something like:

Opening VMX source: /path/myVirtualMachine.vmx
Opening VI target: vi://user%domain.local@vcenter-01.domain.local:443/Datacenter/host/Cluster
Deploying to VI: vi://user%40domain.local@vcenter-01.domain.local:443/Datacenter/host/Cluster

Disk progress: 1%
Disk progress: 2%
Disk progress: 3%
[... shortened ...]
Disk progress: 98%
Disk progress: 99%
Transfer Completed                    
Completed successfully

The machine is then present in the remote environment and I can boot it up successfully.

However, with a packer-1.3.x built virtual machine, the upload process appears to be much faster: it completes right away. When attempting to boot the virtual machine, it only attempts a network boot and then displays "Operating System not found".

I compared the VMX file of a packer-1.2.x built VM with one created using packer-1.3.x and noticed that the latter one contains the following parameter: bios.hddorder = "scsi0:0"

After removing this parameter, everything was working as expected again for the packer-1.3.x built VM.
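Removing the parameter by hand can also be scripted, since a .vmx is just a flat list of key = "value" lines. A minimal sketch (the file name below is hypothetical):

```python
# Sketch: strip a single key (e.g. bios.hddorder) from a .vmx file
# before handing it to ovftool. A .vmx file is a flat list of
# key = "value" lines, so a line-level filter is enough.
def strip_vmx_key(path, key):
    with open(path) as f:
        lines = f.readlines()
    kept = [ln for ln in lines
            if ln.split("=")[0].strip().lower() != key.lower()]
    with open(path, "w") as f:
        f.writelines(kept)

# Hypothetical usage:
# strip_vmx_key("myVirtualMachine.vmx", "bios.hddorder")
```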

I can work around this by specifying the following in my packer template, leading to the parameter being removed after build:

"vmx_data_post": {
   "bios.hddorder": ""
}
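For anyone else applying this, here is roughly how that stanza sits inside a vmware-iso builder block; everything other than the vmx_data_post entry is a placeholder, not taken from the reporter's actual template:

```json
{
  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "...",
      "iso_checksum": "...",
      "vmx_data_post": {
        "bios.hddorder": ""
      }
    }
  ]
}
```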

Checking some of the packer code/history, I can see that the parameter was added to the vmx template not too long ago in the context of #6197 and https://github.com/hashicorp/packer/pull/6204.

Any idea why this is causing ovftool to behave strangely and not upload the disk successfully? The resulting disk in my datastore is about 36MB while the original size was 1.5GB. No error thrown by ovftool though.

Thanks!

jcsmith commented 6 years ago

This also appears to be causing an issue when attempting to use a vmware template created using the vsphere and vsphere-template post-processors.

SwampDragons commented 6 years ago

How strange. @arizvisa do you have any idea what could be causing this?

arizvisa commented 6 years ago

in terms of the upload performance for ovftool, not a clue. i literally thought that it was just doing a POST with its data to upload the files...

i did a (very) quick search through ovftool's docs and didn't find any references to the boot order exactly. however, i ran strings on ovftool.exe and grepped it for bios.bootorder...and it looks like there's actually a reference to it. I can't imagine what it would do different based on this though...I can pull it into a disassembler to try and reverse what it's actually doing in a bit.

is this specific to just ovftool, like ovftool isn't uploading the artifacts completely despite it claiming that it does? Or, is packer incorrectly specifying a bios boot order of scsi when the drive is ide/sata/nvme/something-else and that's what is causing the issue?

jcsmith commented 6 years ago

I don’t believe that there is better performance per se but that not everything is being uploaded. I can verify this tomorrow morning.

arizvisa commented 6 years ago

@jcsmith, cool. would appreciate it as i'm just a contributor, not really a full-time dev or anything.

also to clarify, the disk that's being created for the .vmx is definitely scsi, right? so bootorder isn't completely wrong?

my methodology for narrowing down this problem is pretty much going to be:

  1. distinguish whether the artifacts are being uploaded correctly
  2a. if they're not, figure out how "bios.bootorder" affects ovftool's upload (through reversing), and then patch code (if necessary)
  2b. if they are, verify that bios.bootorder is actually correct or not
  3. if bootorder is incorrect, then go into developer mode, and figure out some elegant way to get this option's syntax right for both vmware-* _and_ ovftool

mattandes commented 6 years ago

@arizvisa I am experiencing the exact behavior that @as-dg is experiencing. When packer uses ovftool to upload to vSphere, it appears to upload the VM fine, albeit extremely fast, but when you go to view the files on the datastore you'll find that the size of the VMDK file is basically 0. Setting bios.hddorder to an empty string via the vmx_data_post setting appears to work around the issue.

arizvisa commented 6 years ago

ok. from a few minutes of reversing it yesterday, it looks like ovftool is only passing that parameter via soap (https://www.vmware.com/support/developer/converter-sdk/conv61_apireference/vim.vm.BootOptions.html) So, it's not doing anything special short of building the request and sending it.

Something that might be helpful is to see the result of that particular SOAP request (non-encrypted, or if it's encrypted because of SSL, i'd need the private key to decrypt). Specifically VirtualMachineBootOptions. Maybe another way could be to verify in the management configuration for the VM whether the boot order was actually set or not. Since I don't have an ESX instance available to me, someone else would have to do this.

Although this pretty much means I'm debugging ovftool's interaction with esx since the only thing that packer's developers can do is really revert my patch that was suggested by the original reporter (which means his bug comes back), or not (which means this issue with ovftool still happens without the workaround).

arizvisa commented 6 years ago

oh wait, it looks like ovftool has some debugging options: https://www.virtuallyghetto.com/2013/08/quick-tip-useful-ovftool-debugging.html

$ ovftool --help debug

One of these has a logfile, that'll probably log all the requests that ovftool is making.

When one of you guys uses ovftool to upload it to ESX, can you pass the following options to make a verbose log file? I imagine ovftool's developers have enough info in the tool's logs to help troubleshoot this.

$ ovftool -X:logLevel=verbose -X:logFile=/path/to/output/file ...

SwampDragons commented 6 years ago

@as-dg is bootOrder set in your vmx file?

as-dg commented 6 years ago

Yes, I believe the bootOrder parameter was set as well. I was looking at both bootOrder and bios.hddOrder and found the latter to be causing this issue. I had also run ovftool in debug mode initially, but nothing useful (at least in my view) came out of it. However, I can run through this today again and provide the output; maybe I missed something.

@arizvisa I believe there's some confusion between bios.bootorder and the actual problematic bios.hddorder

Let me come back to you in a few hours with debug output.

as-dg commented 6 years ago

@arizvisa Alright, here is the log output of ovftool. I hope there is something useful in there. Note that I have anonymized all kinds of IDs, tokens, thumbprints, etc. upload.log

yves-vogl commented 5 years ago

I can confirm this issue and the workaround provided by @as-dg. If there's anything I can do to help, please drop me a line.

yves-vogl commented 5 years ago

Here's a log from using chained post-processors for vSphere templates.

Here's the config.

  "post-processors" : [
    {
      "type" : "vsphere",
      "host" : "{{user `vsphere_host`}}",
      "username" : "{{user `vsphere_username`}}",
      "password" : "{{user `vsphere_password`}}",

      "datacenter" : "{{user `vsphere_datacenter`}}",
      "cluster" : "{{user `vsphere_cluster`}}",
      "datastore" : "{{user `vsphere_datastore`}}",

      "vm_name" : "{{user `name`}}",
      "disk_mode" : "{{user `vsphere_disk_mode`}}",
      "insecure" : "{{user `vsphere_insecure`}}",
      "resource_pool" : "{{user `vsphere_resource_pool`}}",
      "vm_folder" : "{{user `vsphere_vm_folder`}}",
      "vm_network" : "{{user `vsphere_vm_network`}}",

      "overwrite" : true
    },

    {
      "type" : "vsphere-template",
      "host" : "{{user `vsphere_host`}}",
      "username" : "{{user `vsphere_username`}}",
      "password" : "{{user `vsphere_password`}}",
      "datacenter" : "{{user `vsphere_datacenter`}}",
      "folder" : "/{{user `vsphere_vm_folder`}}",
      "insecure" : "{{user `vsphere_insecure`}}",

      "keep_input_artifact": true
    }
  ]

2018/11/09 12:52:48 packer: 2018/11/09 12:52:48 Executing: /Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager -d builds/native/atomic-host-7-2018-11-09.vmware/disk.vproject
  Defragment: 100%!d(MISSING)one.11/09 12:52:54 stdout: Defragment: 0%!d(MISSING)one.
2018/11/09 12:52:54 packer: Defragmentation completed successfully.
2018/11/09 12:52:54 packer: 2018/11/09 12:52:54 stderr:
2018/11/09 12:52:54 packer: 2018/11/09 12:52:54 Executing: /Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager -k builds/native/atomic-host-7-2018-11-09.vmware/disk.vproject
  Shrink: 100%!d(MISSING)one.018/11/09 12:52:59 stdout: Shrink: 0%!d(MISSING)one.
2018/11/09 12:52:59 packer: Shrink completed successfully.
2018/11/09 12:52:59 packer: 2018/11/09 12:52:59 stderr:
2018/11/09 12:52:59 packer: 2018/11/09 12:52:59 Setting VMX: 'bios.hddorder' = ''
2018/11/09 12:52:59 packer: 2018/11/09 12:52:59 Writing VMX to: builds/native/atomic-host-7-2018-11-09.vmware/atomic-host-7.vmx
==> vmware-iso: Cleaning VMX prior to finishing up...
    vmware-iso: Unmounting floppy from VMX...
2018/11/09 12:52:59 packer: 2018/11/09 12:52:59 Deleting key: floppy0.present
    vmware-iso: Detaching ISO from CD-ROM device...
    vmware-iso: Disabling VNC server...
2018/11/09 12:52:59 packer: 2018/11/09 12:52:59 Writing VMX to: builds/native/atomic-host-7-2018-11-09.vmware/atomic-host-7.vmx
==> vmware-iso: Skipping export of virtual machine (export is allowed only for ESXi)...
2018/11/09 12:52:59 packer: 2018/11/09 12:52:59 Executing: /Applications/VMware Fusion.app/Contents/Library/vmrun -T fusion list
2018/11/09 12:53:00 packer: 2018/11/09 12:53:00 stdout: Total running VMs: 0
2018/11/09 12:53:00 packer: 2018/11/09 12:53:00 stderr:
2018/11/09 12:53:00 [INFO] (telemetry) ending vmware-iso
2018/11/09 12:53:00 [INFO] (telemetry) Starting post-processor vsphere
==> vmware-iso: Running post-processor: vsphere
    vmware-iso (vsphere): Uploading builds/native/atomic-host-7-2018-11-09.vmware/atomic-host-7.vmx to vSphere
2018/11/09 12:53:00 packer: 2018/11/09 12:53:00 Starting ovftool with parameters: --acceptAllEulas --name=atomic-host-7 --datastore=storage --noSSLVerify=true --diskMode=thin --vmFolder=proj/RP/Templates --network=example --overwrite builds/native/atomic-host-7-2018-11-09.vmware/atomic-host-7.vmx vi://vc-project-platform:<password>@vcenter.example.com/dtm-dc01/host/dtm-dc01-proj/Resources/RP
    vmware-iso (vsphere):
2018/11/09 12:53:42 [INFO] (telemetry) ending vsphere
2018/11/09 12:53:42 [INFO] (telemetry) Starting post-processor vsphere-template
==> vmware-iso: Running post-processor: vsphere-template
2018/11/09 12:53:42 [INFO] (telemetry) ending vsphere-template
2018/11/09 12:53:42 Deleting original artifact for build 'vmware-iso'
2018/11/09 12:53:42 ui error: Build 'vmware-iso' errored: 1 error(s) occurred:

* Post-processor failed: The Packer vSphere Template post-processor can only take an artifact from the VMware-iso builder, built on ESXi (i.e. remote) or an artifact from the vSphere post-processor. Artifact type mitchellh.vmware does not fit this requirement
2018/11/09 12:53:42 Builds completed. Waiting on interrupt barrier...
2018/11/09 12:53:42 machine readable: error-count []string{"1"}
2018/11/09 12:53:42 ui error: 
==> Some builds didn't complete successfully and had errors:
2018/11/09 12:53:42 machine readable: vmware-iso,error []string{"1 error(s) occurred:\n\n* Post-processor failed: The Packer vSphere Template post-processor can only take an artifact from the VMware-iso builder, built on ESXi (i.e. remote) or an artifact from the vSphere post-processor. Artifact type mitchellh.vmware does not fit this requirement"}
2018/11/09 12:53:42 ui error: --> vmware-iso: 1 error(s) occurred:

* Post-processor fBuild 'vmware-iso' errored: 1 error(s) occurred:
ailed: The Packer vSphere Template post-processor can only take an artifact from the VMware-iso builder, built on ESXi (i.e. remote) or an artifact from the vSphere post-processor. Artifact type mitchellh.vmware does not fit this requirement
==> Builds finished but no artifacts were created.

2018/11/09 12:53:42 [INFO] (telemetry) Finalizing.
* Post-processor failed: The Packer vSphere Template post-processor can only take an artifact from the VMware-iso builder, built on ESXi (i.e. remote) or an artifact from the vSphere post-processor. Artifact type mitchellh.vmware does not fit this requirement

==> Some builds didn't complete successfully and had errors:
--> vmware-iso: 1 error(s) occurred:

* Post-processor failed: The Packer vSphere Template post-processor can only take an artifact from the VMware-iso builder, built on ESXi (i.e. remote) or an artifact from the vSphere post-processor. Artifact type mitchellh.vmware does not fit this requirement

==> Builds finished but no artifacts were created.
2018/11/09 12:53:43 waiting for all plugin processes to complete...
2018/11/09 12:53:43 /usr/local/bin/packer: plugin process exited
2018/11/09 12:53:43 /usr/local/bin/packer: plugin process exited
2018/11/09 12:53:43 /usr/local/bin/packer: plugin process exited
pgrinstead1 commented 5 years ago

Bump. Did we ever get a fix for this, or just the workaround? I have also run into the same issue with packer 1.3.2.

SwampDragons commented 5 years ago

Just the workaround for now; I've not had a chance to deeply investigate this.

ghost commented 5 years ago

I've just come across the same problem, spent quite a while trying to figure it out... It'd be great to find a proper fix

arizvisa commented 5 years ago

Essentially the problem was narrowed down to not being related to Packer at all, but rather to ESX (or Ovftool) when using Ovftool to upload a .vmx with a particular option.

This happens only when the bios.hddorder parameter is in your .vmx. The setting of this option was introduced into packer because a user's VM was booting from the cdrom device rather than the correct hard disk.

This thread narrowed it down to being 100% related to Ovftool, and the issue was reproduced outside of packer. Because of this, I'd pretty much consider it an issue for VMware/Ovftool and worth contacting them about.

The only "fix" that Packer can apply (in the meantime) is to avoid assigning "bios.hddorder" entirely, which then reintroduces the other issue of the VM not choosing the correct hard disk to boot from. So the two fixes are mutually exclusive.

So again, it'd be worthwhile for somebody with a support contract to contact VMware and say something like "I can't use ovftool to upload this particular VM", and then send them the .vmx to see why a particular .vmx option (that's in their docs) results in Ovftool sending out a soap request that returns a 200 but doesn't actually upload anything.

paullschock commented 5 years ago

@arizvisa -- Agree with your interpretation and approach. I am going to reach out to vmware support for assistance with this issue on my end and report back any useful info they may provide (if any)

I have been struggling with this bug for 7 days and had narrowed it down to ovftool issues on Friday, but had not yet gotten to vmx adjustments to fix the issue, and happened upon this issue this evening.

Interestingly, I am not using the same vmx_data option ID'd as the cause here (bios.hddorder); meaning there may be a number (> 1) of options that cause this behavior. Nevermind, I see this now in the VMX... sorry about that, I'll still open the case with vmware :)

Relevant snippets from my template:

         "vmx_data": {
          "annotation":  "Plan: {{ user `Plan`}} - #{{ user `Build` }}, Build Timestamp: {{ user `Timestamp` }}, Build Agent: {{ user `Builder` }}",
          "RemoteDisplay.vnc.enabled": "false",
          "RemoteDisplay.vnc.port": "5900",
          "memsize": "{{user `memory_size`}}",
          "numvcpus": "{{user `cpus`}}",
          "scsi0.virtualDev": "lsisas1068",
          "ethernet0.virtualDev": "vmxnet3",
          "vcpu.hotadd": "TRUE",
          "mem.hotadd": "TRUE",
          "virtualHW.version": "11"
        } 
    "post-processors": [
        [ {
        "type": "vsphere",
        "host": "{{ user `vcenter_host` }}",
        "insecure": true,
        "datacenter": "{{ user `vcenter_datacenter` }}",
        "datastore": "{{ user `datastore` }}",
        "disk_mode": "{{ user `disk_mode` }}",
        "cluster": "{{ user `cluster` }}",
        "username": "{{ user `vcenter_user` }}",
        "password": "{{ user `vcenter_pw` }}",
        "vm_name" : "packer-win2012r2-datacenter",
        "vm_folder" : "{{ user `vm_folder` }}",
        "vm_network" : "{{ user `vm_network` }}",
        "overwrite": true
    },

I can't easily paste in the packer debug log, but suffice it to say from the above, the default ovftool arguments are passed and the vsphere post-processor takes roughly 5s for what should be a ~60GB upload. I will report back once I've identified which of the offending vmx_data params is causing this behavior as well.

SwampDragons commented 5 years ago

I've actually reached out to VMWare via HashiCorp's Partner Alliances team; I'll let you know if/when I get an update.

arizvisa commented 5 years ago

@paullschock, just a heads up:

You can use gist.github.com to paste large files and such, but also, some of the stuff in your vmx_data can be replaced with options available in the vmware builders:

In the next release of packer, you'll also be able to set "memsize" and "numvcpus" with "memory" and "cpus" (respectively). If you feel things like "vcpu.hotadd" and "mem.hotadd" should be an option as well, definitely let any of us know with an issue on those.
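For instance, something like this hypothetical fragment (option names per the upcoming release mentioned above) would replace the raw memsize/numvcpus vmx keys:

```json
{
  "type": "vmware-iso",
  "memory": 4096,
  "cpus": 2
}
```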

Just saying this now since there's been some churn in packer's repo related to warning users against the usage of vmx_data.

paullschock commented 5 years ago

@SwampDragons Thank you, I will hold-off on my effort then. If needed, I am happy to pull whatever info you may find useful.

@arizvisa Thank you, I will open an issue re: 'hotadd' options as we've found that setting useful in some narrow operational cases or incident response efforts.

jamestelfer commented 5 years ago

I have been experiencing the same issue, with a slight twist: we use ovftool to convert the result to an OVF, so there's a slightly different trigger. Uploading this manifests the same issue.

The offending segment is converted to the following in the OVF:

    <vmw:BootOrderSection vmw:instanceId="6" vmw:type="disk">
      <Info>Virtual hardware device boot order</Info>
    </vmw:BootOrderSection>

Others have experienced this issue, working around it by removing the segment from the OVF file, but this is sub-optimal.
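That segment-removal workaround can be scripted too. A minimal sketch (file name hypothetical; the namespace URI is VMware's OVF extension namespace as declared in the OVF itself):

```python
# Sketch: remove every vmw:BootOrderSection element from an .ovf file.
# ElementTree has no parent pointers, so walk each element and remove
# matching children from it directly.
import xml.etree.ElementTree as ET

VMW_NS = "http://www.vmware.com/schema/ovf"

def strip_boot_order_section(ovf_path):
    tree = ET.parse(ovf_path)
    root = tree.getroot()
    tag = "{%s}BootOrderSection" % VMW_NS
    for parent in root.iter():
        for child in list(parent):
            if child.tag == tag:
                parent.remove(child)
    tree.write(ovf_path)
```

Note that editing an OVF by hand or by script invalidates any SHA digests in an accompanying .mf manifest, which would then need regenerating.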

The bios.hddorder workaround above means that the generated OVF no longer has this snippet. Hopefully posting this here makes this issue easier to find for others.

@SwampDragons thanks for reaching out to the VMWare team, that's fantastic!

SwampDragons commented 5 years ago

@arizvisa This is probably unrelated to this particular issue, because for this to be a regression, the people reporting it can't be setting a disk adapter type; they must be defaulting to lsilogic/scsi disks.

BUT: looking at the code versus the documentation, I'm confused about the adapter types we're using.

We use the disk adapter type in the '-a' flag when creating disks. Looking at the docs for desktop: -a [ide|buslogic|lsilogic], and the docs for esxi: -a --adaptertype [buslogic|lsilogic|ide|lsisas|pvscsi].

But in our code, we check for "scsi", "nvme", "sata", and "ide". The default behavior should be the same but I'm confused about why the only overlap I'm seeing here is for "ide"

SwampDragons commented 5 years ago

I'm thinking it may make sense to revert #6204 until this is resolved; from what I can tell, it's causing problems for more users than the old behavior was.

arizvisa commented 5 years ago

@SwampDragons, confused about the adapter types? or the bus types, rather?

Options like "scsi", "nvme", "sata", "ide" are all bus types (or vmware calls them disk types). Whereas "ide", "buslogic", "lsilogic", "lsisas", "pvscsi" are all adapter types and really more like protocols. The "IDE" adapter type is probably in all actuality ATA (or something), and they might've just slipped up on the name.

Did that help? or did I miss your question..

arizvisa commented 5 years ago

Sure, I'm totally fine with reverting that patch. It was essentially just an implementation of the original reporter's solution to the problem. Despite me being the committer, it's really his patch tbh.

Oh one thing we'll need to retain from it is the bugfix in step_clean_vmx.go, I'll create a PR for that fix right now for whenever you revert #6204.

arizvisa commented 5 years ago

@SwampDragons, PR #7066 gets rid of that regex hack I mentioned by adding a list that the builder can add temporary devices to in order to remove them properly during step_clean_vmx.go. So when you revert #6204, the temporary cdrom and floppy devices that are added during the build should hopefully be removed properly during clean regardless of their type.

As mentioned in the PR, it has support for other device types too but as of now only the "cdrom" and "floppy" devices are added.

SwampDragons commented 5 years ago

I'm looking here: https://github.com/hashicorp/packer/blob/master/builder/vmware/iso/step_create_vmx.go#L186-L216

We're doing a switch statement on DiskAdapterType, but we compare it against bus type values rather than against the actual options: "ide", "buslogic", and "lsilogic". Why would DiskAdapterType ever be one of those bus types in the way this code is executed?

SwampDragons commented 5 years ago

Okay, the code that caused this issue is reverted. I'm going to close this and make a new issue to reconcile this problem with #6197 moving forward. We need to find something that will work for everyone.

arizvisa commented 5 years ago

@SwampDragons, Ok. I see what you're saying.

So this might be weird naming, perhaps due to the way we've been talking about it lately, but specifying the "disk adapter type" is actually only applicable to the SCSI device type. SCSI devices are the only disk types that support different APIs to access them. (Way back when, every hard disk manufacturer implemented SCSI in their own particular way, which is why you needed SCSI drivers on some occasions.) However, all of the other bus types (IDE, SATA, and NVME) have their own standard protocol/API (IDE uses ATAPI, etc.) to access them.

So with regards to the DiskAdapterType as a Packer configuration, a user can specify any of these standard disk adapter types. However if an unknown one is specified, then it's assumed that the user is specifying the SCSI protocol (or really the SCSI disk controller type). This is because "scsi" is the only type in VMware that actually supports customizing the disk controller type.

It's prudent to note that if "scsi" is specified as the disk adapter type, the logic defaults to "lsilogic" as the controller type (similar to VMware's defaults). So in all actuality the "scsi" option is really more like an alias to "lsilogic". This is why the "scsi" case appears so minimalistic before it falls through to the default case. In the logic, all of this SCSI stuff is handled by that default case and so if the user specifies anything other than "ide", "nvme", or "sata" then we assume that the user is specifying a SCSI bus with a specific SCSI protocol/disk-controller.

So in other words, DiskAdapterType is actually used to specify the bus type. If anything else is specified, then it's assumed that the user is specifying "scsi" with that particular disk controller since "scsi" is the only one that VMware needs to know the controller type for.
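If I'm reading that right, the resolution logic can be modeled roughly like this (a Python paraphrase of my understanding of the Go switch, not the actual packer code):

```python
# Rough model of how the vmware-iso builder appears to resolve
# disk_adapter_type: "ide"/"sata"/"nvme" are taken as bus types
# directly, and anything else is treated as a SCSI bus with that
# value as the SCSI disk controller type ("scsi" itself falling
# back to the "lsilogic" default, mirroring VMware's defaults).
def resolve_disk_adapter(disk_adapter_type):
    t = disk_adapter_type.lower()
    if t in ("ide", "sata", "nvme"):
        return t, None             # plain bus type, no controller needed
    if t == "scsi":
        return "scsi", "lsilogic"  # "scsi" is effectively an alias
    return "scsi", t               # e.g. "buslogic", "lsisas", "pvscsi"
```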

arizvisa commented 5 years ago

Sorry for the long read, hopefully it makes sense.

SwampDragons commented 5 years ago

I see, so it's a naming issue; I was just finding it confusing that we use the name DiskAdapterType when we're using it in a way that isn't directly related to the '-a' disk adapter type option in vmware.

SwampDragons commented 5 years ago

Got a response from VMWare:

The team believes they may have already fixed this bug in their 4.3 release, available at https://my.vmware.com/web/vmware/details?downloadGroup=OVFTOOL430&productId=742
Can you try the updated version and see how it goes? If not I will connect you with them directly.
SwampDragons commented 5 years ago

cc @jcsmith @jamestelfer @pgrinstead1 @paullschock ☝️ Are any of you able to confirm that your issues are resolved with ovftool 4.3?

arizvisa commented 5 years ago

Ah sweet. So they recognized it and fixed it. Thanks for following up on that @SwampDragons.

jamestelfer commented 5 years ago

@SwampDragons thanks for the follow up!

That's one of the versions of ovftool that I'm using, and it displays the issue. I'm also using a later version (4.3.0 Update1) and it has the issue too.

SwampDragons commented 5 years ago

Bummer. I'll let them know.

SwampDragons commented 5 years ago

Turns out that the update with the ovftool bugfix only works if you're also on esxi 6.7, which I assume isn't the case here.

paullschock commented 5 years ago

@SwampDragons confirmed, same as @jamestelfer -- I tried v4.1 - v4.3 without success. Talking to an ESXi 6 u3 backend (vcenter 6.0).

jamestelfer commented 5 years ago

I'm using 5.5 and 6.

deepak-c commented 5 years ago

@SwampDragons Can confirm that ovftool 4.3 (Running VMware Fusion 11.0 on OSX) has the same issue. Contacted VMware support and they acknowledged the issue and a ticket is open with their product team.

SwampDragons commented 5 years ago

Okay, so this does sound like the issue is that none of us are on the version of esxi (6.7) that was released with ovftool 4.3. The folks at VMWare said you need to be using 6.7 for this bugfix to work.

Obviously that's not going to work for our users, so we have reverted the change that broke things and we'll have to figure out a different solution for the issue we were trying to fix with that change. Can y'all confirm that your uploads are working again with Packer v1.3.3?

jcsmith commented 5 years ago

Uploading to ESXi6.5 with Packer v1.3.3 still seemed to present the same issues.

arizvisa commented 5 years ago

This sounds like it's an issue with ESX and not with ovftool.

This would make sense because when I reversed that component of ovftool, it wasn't doing anything special beyond building a soap request with the information of this option in the .vmx. So it's not like ovftool was doing any processing or whatever.

Maybe it should be asked if they'll backport the fix?

(edited to add information about my analysis of ovftool).

SwampDragons commented 5 years ago

@jcsmith what version of Packer does still work for you? I'm starting to wonder if your issue is different.

jcsmith commented 5 years ago

1.2.5 definitely works. But I’m pretty sure anything <1.3.0 works.


SwampDragons commented 5 years ago

Oh! I thought the revert had made it into the 1.3.3 release but it didn't. Here are some builds of a patch (#7108) that should actually fix this, and if it does work I'll schedule it for v1.3.4.

windows: packer.zip

osx: packer.zip

linux: packer.zip

jcsmith commented 5 years ago

Testing with the OSX build linked to above now. I'll provide an update as soon as the build finishes.

jcsmith commented 5 years ago

These builds function as expected.

SwampDragons commented 5 years ago

Okay, thanks. I'll merge this revert so that the patch is in v1.3.4. It'll be released in ~6 weeks.