Closed - NYCJacob closed this issue 6 years ago
@NYCJacob Are you on a laptop, not running on AC power? I found (see #9150) that there's a scheduled task that gets created with the default condition to only run if on AC power. You might peek at that.
@JeffMelton Yes, I am on a laptop; it is plugged in, though.
Have you by any chance tried with

```ruby
...
config.vm.provider "vmware_workstation" do |vb|
  # Display the GUI when booting the machine
  vb.gui = true
  ...
end
...
```
??? I ran into this issue as well, using VMware Workstation 14 and Vagrant. If I start the exact same image with vb.gui = true it works; without it (or with vb.gui = false) it fails. Furthermore, with vb.gui = false it starts working when I manually start the VMware Workstation GUI. Consistently, when I stop the GUI, destroy, and up again with vb.gui = false, it fails. It times out in the loop that tries to get the box's IP address:
INFO vmware_driver: vmrun getGuestIPAddress failed: VMRunError
INFO vmware_driver: Reading VMX data...
DEBUG vmware_driver: - .encoding = UTF-8
DEBUG vmware_driver: - bios.bootorder = hdd,CDROM
DEBUG vmware_driver: - checkpoint.vmstate =
DEBUG vmware_driver: - cleanshutdown = FALSE
DEBUG vmware_driver: - config.version = 8
DEBUG vmware_driver: - displayname = temp: default
DEBUG vmware_driver: - ehci.pcislotnumber = -1
DEBUG vmware_driver: - ehci.present = FALSE
DEBUG vmware_driver: - ethernet0.addresstype = generated
DEBUG vmware_driver: - ethernet0.connectiontype = nat
DEBUG vmware_driver: - ethernet0.present = TRUE
DEBUG vmware_driver: - ethernet0.virtualdev = e1000
DEBUG vmware_driver: - extendedconfigfile = ubuntu-xenial-vmware-fusion.vmxf
DEBUG vmware_driver: - filesearchpath = .;/home/richard/.vagrant.d/boxes/test_box_vmware_workstation14/0/vmware_desktop
DEBUG vmware_driver: - floppy0.present = FALSE
DEBUG vmware_driver: - guestos = ubuntu-64
DEBUG vmware_driver: - gui.fullscreenatpoweron = FALSE
DEBUG vmware_driver: - gui.viewmodeatpoweron = windowed
DEBUG vmware_driver: - hgfs.linkrootshare = TRUE
DEBUG vmware_driver: - hgfs.maprootshare = TRUE
DEBUG vmware_driver: - hpet0.present = TRUE
DEBUG vmware_driver: - ide1:0.clientdevice = TRUE
DEBUG vmware_driver: - ide1:0.devicetype = cdrom-raw
DEBUG vmware_driver: - ide1:0.filename = auto detect
DEBUG vmware_driver: - ide1:0.present = TRUE
DEBUG vmware_driver: - isolation.tools.copy.disable = TRUE
DEBUG vmware_driver: - isolation.tools.dnd.disable = TRUE
DEBUG vmware_driver: - isolation.tools.hgfs.disable = FALSE
DEBUG vmware_driver: - isolation.tools.paste.disable = TRUE
DEBUG vmware_driver: - mem.hotadd = FALSE
DEBUG vmware_driver: - memsize = 512
DEBUG vmware_driver: - mks.enable3d = FALSE
DEBUG vmware_driver: - monitor.phys_bits_used = 42
DEBUG vmware_driver: - msg.autoanswer = true
DEBUG vmware_driver: - numa.autosize.cookie = 20001
DEBUG vmware_driver: - numa.autosize.vcpu.maxpervirtualnode = 2
DEBUG vmware_driver: - numvcpus = 2
DEBUG vmware_driver: - nvram = ubuntu-xenial-vmware-fusion.nvram
DEBUG vmware_driver: - pcibridge0.pcislotnumber = 17
DEBUG vmware_driver: - pcibridge0.present = TRUE
DEBUG vmware_driver: - pcibridge4.functions = 8
DEBUG vmware_driver: - pcibridge4.pcislotnumber = 21
DEBUG vmware_driver: - pcibridge4.present = TRUE
DEBUG vmware_driver: - pcibridge4.virtualdev = pcieRootPort
DEBUG vmware_driver: - pcibridge5.functions = 8
DEBUG vmware_driver: - pcibridge5.pcislotnumber = 22
DEBUG vmware_driver: - pcibridge5.present = TRUE
DEBUG vmware_driver: - pcibridge5.virtualdev = pcieRootPort
DEBUG vmware_driver: - pcibridge6.functions = 8
DEBUG vmware_driver: - pcibridge6.pcislotnumber = 23
DEBUG vmware_driver: - pcibridge6.present = TRUE
DEBUG vmware_driver: - pcibridge6.virtualdev = pcieRootPort
DEBUG vmware_driver: - pcibridge7.functions = 8
DEBUG vmware_driver: - pcibridge7.pcislotnumber = 24
DEBUG vmware_driver: - pcibridge7.present = TRUE
DEBUG vmware_driver: - pcibridge7.virtualdev = pcieRootPort
DEBUG vmware_driver: - policy.vm.mvmtid =
DEBUG vmware_driver: - powertype.poweroff = soft
DEBUG vmware_driver: - powertype.poweron = soft
DEBUG vmware_driver: - powertype.reset = soft
DEBUG vmware_driver: - powertype.suspend = soft
DEBUG vmware_driver: - proxyapps.publishtohost = FALSE
DEBUG vmware_driver: - remotedisplay.vnc.enabled = FALSE
DEBUG vmware_driver: - remotedisplay.vnc.ip = 127.0.0.1
DEBUG vmware_driver: - remotedisplay.vnc.key = BioPEwcnCD4nCwYpMzI9MCUJKicTPgEwFRkqBhY4ODMXMxMnDhgMAxsTORQNDQwkExYkEi0tJA82IjUgKS0vCjILGjI6FxYoISsIHzMVLgkpIQEoNjwqKT8lKSoUGDYEGyYsKRUYGAcaFh0BBT4xIxYOMTMNJhE1Mg0EAzcjOxQ=
DEBUG vmware_driver: - remotedisplay.vnc.password = bIeSYHUc
DEBUG vmware_driver: - remotedisplay.vnc.port = 5952
DEBUG vmware_driver: - replay.filename =
DEBUG vmware_driver: - replay.supported = FALSE
DEBUG vmware_driver: - scsi0.pcislotnumber = 16
DEBUG vmware_driver: - scsi0.present = TRUE
DEBUG vmware_driver: - scsi0.virtualdev = lsilogic
DEBUG vmware_driver: - scsi0:0.filename = ubuntu-xenial-vmware-fusion-cl2.vmdk
DEBUG vmware_driver: - scsi0:0.present = TRUE
DEBUG vmware_driver: - scsi0:0.redo =
DEBUG vmware_driver: - serial0.present = FALSE
DEBUG vmware_driver: - sharedfolder.maxnum = 0
DEBUG vmware_driver: - softpoweroff = FALSE
DEBUG vmware_driver: - sound.present = FALSE
DEBUG vmware_driver: - sound.startconnected = FALSE
DEBUG vmware_driver: - tools.synctime = TRUE
DEBUG vmware_driver: - tools.upgrade.policy = upgradeAtPowerCycle
DEBUG vmware_driver: - usb.pcislotnumber = -1
DEBUG vmware_driver: - usb.present = FALSE
DEBUG vmware_driver: - usb.vbluetooth.startconnected = FALSE
DEBUG vmware_driver: - uuid.action = create
DEBUG vmware_driver: - uuid.bios = 56 4d 0a 55 eb 5f 84 cf-5e 06 b1 44 b8 dc 04 a5
DEBUG vmware_driver: - uuid.location = 56 4d 0a 55 eb 5f 84 cf-5e 06 b1 44 b8 dc 04 a5
DEBUG vmware_driver: - vc.uuid =
DEBUG vmware_driver: - vcpu.hotadd = FALSE
DEBUG vmware_driver: - vhv.enable = FALSE
DEBUG vmware_driver: - virtualhw.productcompatibility = hosted
DEBUG vmware_driver: - virtualhw.version = 12
DEBUG vmware_driver: - vmci0.id = 1861462627
DEBUG vmware_driver: - vmci0.pcislotnumber = 35
DEBUG vmware_driver: - vmci0.present = TRUE
DEBUG vmware_driver: - vmotion.checkpointfbsize = 33554432
DEBUG vmware_driver: - vmotion.checkpointsvgaprimarysize = 33554432
DEBUG vmware_driver: - vpmc.enable = FALSE
DEBUG vmware_driver: - migrate.hostlog = ./ubuntu-xenial-vmware-fusion-fa6b41c6.hlog
DEBUG vmware_driver: - ethernet0.pcislotnumber = 32
DEBUG vmware_driver: - ethernet0.generatedaddress = 00:0c:29:dc:04:a5
DEBUG vmware_driver: - ethernet0.generatedaddressoffset = 0
DEBUG vmware_driver: Trying to get MAC address for ethernet0
DEBUG vmware_driver: No explicitly set MAC, looking or auto-generated one...
DEBUG vmware_driver: -- MAC: 00:0c:29:dc:04:a5
INFO vmware_driver: Reading DHCP lease for '00:0c:29:dc:04:a5' on 'vmnet8'
INFO vmware_driver: DHCP leases file: /etc/vmware/vmnet8/dhcpd/dhcpd.leases
INFO dhcp_lease_file: Initialized DHCP helper: /etc/vmware/vmnet8/dhcpd/dhcpd.leases
INFO dhcp_lease_file: Looking for IP for MAC: 00:0c:29:dc:04:a5
INFO dhcp_lease_file: - IP:
INFO subprocess: Starting process: ["/usr/bin/vmrun", "list"]
INFO subprocess: Command not in installer, restoring original environment...
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Total running VMs: 1
/home/richard/projects/envimate/ubuntu-xenial/temp/.vagrant/machines/default/vmware_workstation/6b936908-d436-42e2-9551-16a91aaedcdc/ubuntu-xenial-vmware-fusion.vmx
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG vmware: VM state requested. Current state: running
INFO subprocess: Starting process: ["/usr/bin/vmrun", "list"]
INFO subprocess: Command not in installer, restoring original environment...
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Total running VMs: 1
/home/richard/projects/envimate/ubuntu-xenial/temp/.vagrant/machines/default/vmware_workstation/6b936908-d436-42e2-9551-16a91aaedcdc/ubuntu-xenial-vmware-fusion.vmx
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG vmware: VM state requested. Current state: running
INFO subprocess: Starting process: ["/usr/bin/vmrun", "list"]
INFO subprocess: Command not in installer, restoring original environment...
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Total running VMs: 1
/home/richard/projects/envimate/ubuntu-xenial/temp/.vagrant/machines/default/vmware_workstation/6b936908-d436-42e2-9551-16a91aaedcdc/ubuntu-xenial-vmware-fusion.vmx
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 31999
DEBUG subprocess: Exit status: 0
DEBUG vmware: VM state requested. Current state: running
INFO subprocess: Starting process: ["/usr/bin/vmrun", "list"]
INFO subprocess: Command not in installer, restoring original environment...
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Total running VMs: 1
/home/richard/projects/envimate/ubuntu-xenial/temp/.vagrant/machines/default/vmware_workstation/6b936908-d436-42e2-9551-16a91aaedcdc/ubuntu-xenial-vmware-fusion.vmx
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG vmware: VM state requested. Current state: running
INFO subprocess: Starting process: ["/usr/bin/vmrun", "list"]
INFO subprocess: Command not in installer, restoring original environment...
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Total running VMs: 1
/home/richard/projects/envimate/ubuntu-xenial/temp/.vagrant/machines/default/vmware_workstation/6b936908-d436-42e2-9551-16a91aaedcdc/ubuntu-xenial-vmware-fusion.vmx
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG vmware: VM state requested. Current state: running
INFO subprocess: Starting process: ["/usr/bin/vmrun", "list"]
INFO subprocess: Command not in installer, restoring original environment...
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Total running VMs: 1
/home/richard/projects/envimate/ubuntu-xenial/temp/.vagrant/machines/default/vmware_workstation/6b936908-d436-42e2-9551-16a91aaedcdc/ubuntu-xenial-vmware-fusion.vmx
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG vmware: VM state requested. Current state: running
INFO subprocess: Starting process: ["/usr/bin/vmrun", "list"]
INFO subprocess: Command not in installer, restoring original environment...
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Total running VMs: 1
/home/richard/projects/envimate/ubuntu-xenial/temp/.vagrant/machines/default/vmware_workstation/6b936908-d436-42e2-9551-16a91aaedcdc/ubuntu-xenial-vmware-fusion.vmx
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 31999
DEBUG subprocess: Exit status: 0
DEBUG vmware: VM state requested. Current state: running
INFO subprocess: Starting process: ["/usr/bin/vmrun", "list"]
INFO subprocess: Command not in installer, restoring original environment...
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Total running VMs: 1
/home/richard/projects/envimate/ubuntu-xenial/temp/.vagrant/machines/default/vmware_workstation/6b936908-d436-42e2-9551-16a91aaedcdc/ubuntu-xenial-vmware-fusion.vmx
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG vmware: VM state requested. Current state: running
INFO vmware_driver: Reading an accessible IP for machine...
INFO vmware_driver: Trying vmrun getGuestIPAddress...
INFO subprocess: Starting process: ["/usr/bin/vmrun", "getGuestIPAddress", "/home/richard/projects/envimate/ubuntu-xenial/temp/.vagrant/machines/default/vmware_workstation/6b936908-d436-42e2-9551-16a91aaedcdc/ubuntu-xenial-vmware-fusion.vmx"]
INFO subprocess: Command not in installer, restoring original environment...
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Error: The VMware Tools are not running in the virtual machine: /home/richard/projects/envimate/ubuntu-xenial/temp/.vagrant/machines/default/vmware_workstation/6b936908-d436-42e2-9551-16a91aaedcdc/ubuntu-xenial-vmware-fusion.vmx
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 255
INFO vmware_driver: vmrun getGuestIPAddress failed: VMRunError
The box I'm building has open-vm-tools installed.
Since my build server has no UI running, enabling the GUI is not an option. Any help appreciated, Richard
@staenker I didn't try that one and ended up returning both the plugin and Workstation 14, but I'll try it with trial versions.
With a bit more digging and googling I found the solution to this problem, mentioned in this thread over a year ago: https://communities.vmware.com/thread/537961
jenkins@jenkins:/var/persistent/jenkins_data/workspace/xenial_ubuntu-xenial_master-HRNDJPBP2XHBRVVWCNEBK7XZA4A3C54O4LIEJTTTICTPNBM46I6Q/temp$ vmrun getGuestIPAddress /var/persistent/jenkins_data/workspace/xenial_ubuntu-xenial_master-HRNDJPBP2XHBRVVWCNEBK7XZA4A3C54O4LIEJTTTICTPNBM46I6Q/temp/.vagrant/machines/default/vmware_workstation/cf230381-2e2e-4aaa-9bba-8cb31e671f23/master_ubuntu-xenial-vmware-fusion.vmx
Error: The VMware Tools are not running in the virtual machine: /var/persistent/jenkins_data/workspace/xenial_ubuntu-xenial_master-HRNDJPBP2XHBRVVWCNEBK7XZA4A3C54O4LIEJTTTICTPNBM46I6Q/temp/.vagrant/machines/default/vmware_workstation/cf230381-2e2e-4aaa-9bba-8cb31e671f23/master_ubuntu-xenial-vmware-fusion.vmx
jenkins@jenkins:/var/persistent/jenkins_data/workspace/xenial_ubuntu-xenial_master-HRNDJPBP2XHBRVVWCNEBK7XZA4A3C54O4LIEJTTTICTPNBM46I6Q/temp$ vmrun getGuestIPAddress /var/persistent/jenkins_data/workspace/xenial_ubuntu-xenial_master-HRNDJPBP2XHBRVVWCNEBK7XZA4A3C54O4LIEJTTTICTPNBM46I6Q/temp/.vagrant/machines/default/vmware_workstation/cf230381-2e2e-4aaa-9bba-8cb31e671f23/master_ubuntu-xenial-vmware-fusion.vmx -wait
172.16.163.128
Notice the -wait switch at the end of the second command. Now that the QA and research have been done for HashiCorp, can you please fix the vmware_workstation provider for Linux so I can start using what I already paid for?
I'm also seeing this same issue. Oddly enough, I didn't start experiencing it until after upgrading to Workstation 14, finding that the plugin didn't support 14 yet, and downgrading back to Workstation 12.
Symptoms are identical, down to the vmrun message in the debug output. Running with "gui" enabled seems to work around the race condition.
Running on Mint 18.
I wonder if trying to use HashiCorp's tools in a business setting is a mistake.
I bought the VMWare provider (and VMWare Fusion) assuming that commercial software wouldn't get in the way of being productive, but here we are. This isn't the first problem I've run into either.
Anyone feel like reporting on their related experiences?
@peterlindstrom234 I've been using VMware Fusion Pro, Vagrant, and the Vagrant Fusion plugin for three years now. My only issue right now is with VMware Fusion + APFS + growing disks in some Windows VMs.
This issue is about Ubuntu host + VMware Workstation.
HashiCorp has to maintain three VMware platforms: Windows + Workstation, Linux + Workstation, and macOS + Fusion, and I appreciate their work in keeping it all up and running. A new VMware version and/or host OS version may interfere with any of the tricky parts. So checking new versions on a test machine (or nested VM) before updating your host is what I've learned over the years.
We went through some trouble with Windows host + domain users + VMware Workstation 14, but this is fixed for me and my colleagues. I feel satisfied with the support, as I know some environments make a bug difficult to reproduce.
@StefanScherer well, I seem to have the exact same problem with a macOS host and VMWare Fusion.
It's also worth noting that my Vagrantfile is almost empty (just like in the ticket description), so it's not as if we're trying to do something exotic. Quite the contrary: it seems like anyone would run into this problem right away.
So how much testing are they doing?
New VMware version and/or host OS version may interfere with all the tricky parts.
For example? :)
I feel satisfied with the support as I know some envs are difficult to reproduce a bug.
What kind of support have you received? So far, I've seen HashiCorp stay strangely silent about long-standing issues with Packer / Vagrant.
@peterlindstrom234 The product I bought from HashiCorp is the Vagrant plugin. I can use Vagrant without any desktop environment, so I expect their plugin to work in the same setting. Of course, technically, the bug is a VMware bug, and since the plugin depends on it, it's not the plugin's fault. That being said, I expect a vendor of a plugin that integrates two products to do regression testing on new releases of both of them. That is obviously not being done.
I wonder if trying to use HashiCorp's tools in a business setting is a mistake.
After the issue has been open for a month, even though it's at most a two-hour job to fix, I do think using these products in a business setting is a mistake - imagine a customer waiting for a product release that is blocked by this bug. Especially since it is closed source, I can't just get into the code and fix the problem myself - it really is just a matter of adding a -wait parameter...
Alright, I managed to create a workaround that should do the trick for everyone running on a *nix system. I went for a wrapper bash script that appends the -wait parameter to getGuestIPAddress calls. Installing the fix is pretty straightforward; the installation should be performed as root.
1. Check your PATH and the location of the vmrun command:

```bash
jenkins@jenkins:~$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
jenkins@jenkins:~$ which vmrun
/usr/bin/vmrun
```

2. I chose /usr/local/bin since it comes before /usr/bin, where the vmrun command is located. Remember the absolute path of the real vmrun command - in my case /usr/bin/vmrun. Create the wrapper file:

```bash
touch /usr/local/bin/vmrun
```

3. Put the following wrapper script into it:

```bash
#!/bin/bash

# Return 0 (success) if the first argument occurs among the remaining arguments.
containsElement () {
  local e match="$1"
  shift
  for e; do [[ "$e" == "$match" ]] && return 0; done
  return 1
}

# Append -wait for getGuestIPAddress calls; pass every other command through.
if containsElement "getGuestIPAddress" "$@"; then
  exec /usr/bin/vmrun "$@" -wait
else
  exec /usr/bin/vmrun "$@"
fi
```

4. Change permissions of the wrapper script:

```bash
chmod 0755 /usr/local/bin/vmrun
```

5. Verify that the wrapper now shadows the original:

```bash
jenkins@jenkins:~$ which vmrun
/usr/local/bin/vmrun
```
I tested it on Ubuntu Xenial only; the script might need some polish for OSX. @peterlindstrom234, can you give it a try?
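If you want to sanity-check the wrapper's dispatch logic without VMware installed, the helper can be exercised on its own. This is a POSIX-sh variant of the function from the wrapper above (no bash-only `[[ ]]`/`local`); the echoed commands and the /tmp/box.vmx path are just stand-ins for the two exec branches:

```shell
# POSIX-sh variant of the wrapper's helper: returns 0 (success) when the
# first argument occurs among the remaining arguments.
containsElement () {
  match="$1"
  shift
  for e; do
    if [ "$e" = "$match" ]; then return 0; fi
  done
  return 1
}

# Exercise the dispatch logic the wrapper uses, without vmrun installed;
# the echoes stand in for the two exec branches.
if containsElement "getGuestIPAddress" "getGuestIPAddress" "/tmp/box.vmx"; then
  echo "would run: vmrun getGuestIPAddress /tmp/box.vmx -wait"
fi
if ! containsElement "getGuestIPAddress" "list"; then
  echo "would run: vmrun list"
fi
```

Only getGuestIPAddress gets -wait appended; every other subcommand (list, start, stop, ...) is passed through untouched, so Vagrant's remaining vmrun calls behave exactly as before.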
There is one problem with this solution, though: it only works reliably if your VM has a single network adapter. With multiple network adapters, Vagrant will log something like:
INFO vmware_driver: Reading an accessible IP for machine...
INFO vmware_driver: Trying vmrun getGuestIPAddress...
INFO subprocess: Starting process: ["/usr/local/bin/vmrun", "getGuestIPAddress", "/home/richard/projects/envimate/ubuntu-xenial/test2/.vagrant/machines/default/vmware_workstation/3057b3c2-ee4f-416b-96fa-599492cbe354/vagrant-ready-docker-vmware-fusion.vmx"]
INFO subprocess: Command not in installer, restoring original environment...
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: 172.17.0.1
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 31999
DEBUG subprocess: Exit status: 0
WARN vmware_driver: vmrun getGuestIPAddress returned: 172.17.0.1. Result resembles address retrieval from wrong interface. Discarding value and proceeding with VMX based lookup.
And then it actually determines the right IP address:
INFO vmware_driver: Reading an accessible IP for machine...
INFO vmware_driver: Skipping vmrun getGuestIPAddress as requested by config.
INFO vmware_driver: Reading VMX data...
DEBUG vmware_driver: - .encoding = UTF-8
...
DEBUG vmware_driver: - ethernet0.pcislotnumber = 32
DEBUG vmware_driver: - ethernet0.generatedaddress = 00:0c:29:12:27:06
DEBUG vmware_driver: - ethernet0.generatedaddressoffset = 0
DEBUG vmware_driver: Trying to get MAC address for ethernet0
DEBUG vmware_driver: No explicitly set MAC, looking or auto-generated one...
DEBUG vmware_driver: -- MAC: 00:0c:29:12:27:06
INFO vmware_driver: Reading DHCP lease for '00:0c:29:12:27:06' on 'vmnet8'
INFO vmware_driver: DHCP leases file: /etc/vmware/vmnet8/dhcpd/dhcpd.leases
INFO dhcp_lease_file: Initialized DHCP helper: /etc/vmware/vmnet8/dhcpd/dhcpd.leases
INFO dhcp_lease_file: Looking for IP for MAC: 00:0c:29:12:27:06
INFO dhcp_lease_file: - IP: 192.168.170.128
That works because the content of /etc/vmware/vmnet8/dhcpd/dhcpd.leases at this point in time looks something like this:
# All times in this file are in UTC (GMT), not your local timezone. This is
# not a bug, so please don't ask about it. There is no portable way to
# store leases in the local timezone, so please don't request this as a
# feature. If this is inconvenient or confusing to you, we sincerely
# apologize. Seriously, though - don't ask.
# The format of this file is documented in the dhcpd.leases(5) manual page.
lease 192.168.170.128 {
starts 5 2017/12/08 14:18:46;
ends 5 2017/12/08 14:48:46;
hardware ethernet 00:0c:29:12:27:06;
client-hostname "base-debootstrap";
}
That looks like a normal lease file - ignoring the awesome header :)
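For reference, the MAC-to-IP lookup that Vagrant performs against this file can be sketched in shell/awk. This is a simplified illustration, not Vagrant's actual Ruby implementation; the sample data is the lease entry shown above, and the /tmp/sample.leases path exists only for this demo:

```shell
# Sample lease entry in the format VMware's dhcpd writes (data copied from
# the lease shown above; /tmp/sample.leases is just for this demo).
cat > /tmp/sample.leases <<'EOF'
lease 192.168.170.128 {
  starts 5 2017/12/08 14:18:46;
  ends 5 2017/12/08 14:48:46;
  hardware ethernet 00:0c:29:12:27:06;
  client-hostname "base-debootstrap";
}
EOF

# The lookup Vagrant effectively does: remember the IP from the last
# "lease <ip> {" line, print it when the matching MAC address appears.
mac="00:0c:29:12:27:06"
awk -v mac="$mac" '
  $1 == "lease"                      { ip = $2 }
  $1 == "hardware" && $3 == mac ";"  { print ip }
' /tmp/sample.leases
```

Against the sample lease above this prints 192.168.170.128, matching the IP Vagrant reports in the debug output.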
Next, I was very happy to see that Vagrant goes on and configures the VMware desktop networking:
INFO warden: Calling IN action: VMware Middleware: ForwardPorts
DEBUG forward_ports: Building up ports to forward...
DEBUG nat_conf: Read section: incomingtcp
DEBUG nat_conf: -- Value: "[incomingtcp]\n\n# Use these with care - anyone can enter into your VM through these...\n# The format and example are as follows:\n#<external port number> = <VM's IP address>:<VM's port number>\n#8080 = 172.16.3.128:80\n"
DEBUG nat_conf: Read section: incomingudp
DEBUG nat_conf: -- Value: "[incomingudp]\n\n# UDP port forwarding example\n#6000 = 172.16.3.0:6001\n\n"
DEBUG nat_conf: Read section: incomingtcp
DEBUG nat_conf: -- Value: "[incomingtcp]\n\n# Use these with care - anyone can enter into your VM through these...\n# The format and example are as follows:\n#<external port number> = <VM's IP address>:<VM's port number>\n#8080 = 172.16.3.128:80\n"
DEBUG nat_conf: Read section: incomingudp
DEBUG nat_conf: -- Value: "[incomingudp]\n\n# UDP port forwarding example\n#6000 = 172.16.3.0:6001\n\n"
INFO vmware_driver: Reading an accessible IP for machine...
INFO vmware_driver: Skipping vmrun getGuestIPAddress as requested by config.
INFO vmware_driver: Reading VMX data...
DEBUG vmware_driver: - .encoding = UTF-8
...
DEBUG vmware_driver: - ethernet0.pcislotnumber = 32
DEBUG vmware_driver: - ethernet0.generatedaddress = 00:0c:29:12:27:06
DEBUG vmware_driver: - ethernet0.generatedaddressoffset = 0
DEBUG vmware_driver: Trying to get MAC address for ethernet0
DEBUG vmware_driver: No explicitly set MAC, looking or auto-generated one...
DEBUG vmware_driver: -- MAC: 00:0c:29:12:27:06
INFO vmware_driver: Reading DHCP lease for '00:0c:29:12:27:06' on 'vmnet8'
INFO vmware_driver: DHCP leases file: /etc/vmware/vmnet8/dhcpd/dhcpd.leases
INFO dhcp_lease_file: Initialized DHCP helper: /etc/vmware/vmnet8/dhcpd/dhcpd.leases
INFO dhcp_lease_file: Looking for IP for MAC: 00:0c:29:12:27:06
INFO dhcp_lease_file: - IP: 192.168.170.128
INFO interface: info: Forwarding ports...
INFO interface: info: ==> default: Forwarding ports...
==> default: Forwarding ports...
INFO interface: detail: -- 22 => 2222
INFO interface: detail: default: -- 22 => 2222
default: -- 22 => 2222
INFO networking_file: Reading adapters from networking file...
DEBUG networking_file: VNET: 1. KEY: 'DHCP' = 'yes'
DEBUG networking_file: VNET: 1. KEY: 'DHCP_CFG_HASH' = '03B6B10F3B775C4E1F52001CD36EA67632A582DF'
DEBUG networking_file: VNET: 1. KEY: 'HOSTONLY_NETMASK' = '255.255.255.0'
DEBUG networking_file: VNET: 1. KEY: 'HOSTONLY_SUBNET' = '192.168.107.0'
DEBUG networking_file: VNET: 1. KEY: 'VIRTUAL_ADAPTER' = 'yes'
DEBUG networking_file: VNET: 8. KEY: 'DHCP' = 'yes'
DEBUG networking_file: VNET: 8. KEY: 'DHCP_CFG_HASH' = '071E917A7A0A9CF003BEA76D2D941EC1897E8B42'
DEBUG networking_file: VNET: 8. KEY: 'HOSTONLY_NETMASK' = '255.255.255.0'
DEBUG networking_file: VNET: 8. KEY: 'HOSTONLY_SUBNET' = '192.168.170.0'
DEBUG networking_file: VNET: 8. KEY: 'NAT' = 'yes'
DEBUG networking_file: VNET: 8. KEY: 'VIRTUAL_ADAPTER' = 'yes'
DEBUG networking_file: Pruning adapters that aren't actually active...
INFO vmware_driver: Setting up forwarded ports in: /etc/vmware/vmnet8/nat/nat.conf
DEBUG nat_conf: Read section: incomingtcp
DEBUG nat_conf: -- Value: "[incomingtcp]\n\n# Use these with care - anyone can enter into your VM through these...\n# The format and example are as follows:\n#<external port number> = <VM's IP address>:<VM's port number>\n#8080 = 172.16.3.128:80\n"
DEBUG nat_conf: Read section: incomingudp
DEBUG nat_conf: -- Value: "[incomingudp]\n\n# UDP port forwarding example\n#6000 = 172.16.3.0:6001\n\n"
INFO subprocess: Starting process: ["/usr/local/bin/vmrun", "list"]
INFO subprocess: Command not in installer, restoring original environment...
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Total running VMs: 1
/home/richard/projects/envimate/ubuntu-xenial/test2/.vagrant/machines/default/vmware_workstation/00a4a03f-bcce-4821-9bc5-a539c4ba5952/vagrant-ready-docker-vmware-fusion.vmx
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG internal_configuration: Clearing VM section: /home/richard/projects/envimate/ubuntu-xenial/test2/.vagrant/machines/default/vmware_workstation/00a4a03f-bcce-4821-9bc5-a539c4ba5952/vagrant-ready-docker-vmware-fusion.vmx
DEBUG internal_configuration: Entry: {"port_forwards"=>{"tcp"=>{"2222"=>22}, "udp"=>{}}, "guest_ip"=>"192.168.170.128"}
DEBUG internal_configuration: Saving updated internal configuration!
DEBUG nat_conf: Replace section: incomingtcp
DEBUG nat_conf: Replace section: incomingudp
DEBUG nat_conf: Writing NAT file: "# VMware NAT configuration file\n# Manual editing of this file is not recommended. Using UI is preferred.\n\n[host]\n\n# NAT gateway address\nip = 192.168.170.2\nnetmask = 255.255.255.0\n\n# VMnet device if not specified on command line\ndevice = /dev/vmnet8\n\n# Allow PORT/EPRT FTP commands (they need incoming TCP stream ...)\nactiveFTP = 1\n\n# Allows the source to have any OUI. Turn this on if you change the OUI\n# in the MAC address of your virtual machines.\nallowAnyOUI = 1\n\n# Controls if (TCP) connections should be reset when the adapter they are\n# bound to goes down\nresetConnectionOnLinkDown = 1\n\n# Controls if (TCP) connection should be reset when guest packet's destination\n# is NAT's IP address\nresetConnectionOnDestLocalHost = 1\n\n# Controls if enable nat ipv6\nnatIp6Enable = 0\n\n# Controls if enable nat ipv6\nnatIp6Prefix = fd15:4ba5:5a2b:1008::/64\n\n[tcp]\n\n# Value of timeout in TCP TIME_WAIT state, in seconds\ntimeWaitTimeout = 30\n\n[udp]\n\n# Timeout in seconds. 
Dynamically-created UDP mappings will purged if\n# idle for this duration of time 0 = no timeout, default = 60; real\n# value might be up to 100% longer\ntimeout = 60\n\n[netbios]\n# Timeout for NBNS queries.\nnbnsTimeout = 2\n\n# Number of retries for each NBNS query.\nnbnsRetries = 3\n\n# Timeout for NBDS queries.\nnbdsTimeout = 3\n\n[incomingtcp]\n\n# Use these with care - anyone can enter into your VM through these...\n# The format and example are as follows:\n#<external port number> = <VM's IP address>:<VM's port number>\n#8080 = 172.16.3.128:80\n# VAGRANT-BEGIN: /home/richard/projects/envimate/ubuntu-xenial/test2/.vagrant/machines/default/vmware_workstation/00a4a03f-bcce-4821-9bc5-a539c4ba5952/vagrant-ready-docker-vmware-fusion.vmx\n2222 = 192.168.170.128:22\n# VAGRANT-END: /home/richard/projects/envimate/ubuntu-xenial/test2/.vagrant/machines/default/vmware_workstation/00a4a03f-bcce-4821-9bc5-a539c4ba5952/vagrant-ready-docker-vmware-fusion.vmx\n\n[incomingudp]\n\n# UDP port forwarding example\n#6000 = 172.16.3.0:6001\n\n"
INFO subprocess: Starting process: ["/home/richard/.vagrant.d/gems/2.4.2/gems/vagrant-vmware-workstation-5.0.4/bin/vagrant_vmware_desktop_sudo_helper_linux_amd64", "netconf", "-nat=/tmp/vagrant20171208-24476-c38w5k", "-device=vmnet8"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG nat_conf: Read section: incomingtcp
DEBUG nat_conf: -- Value: "[incomingtcp]\n\n# Use these with care - anyone can enter into your VM through these...\n# The format and example are as follows:\n#<external port number> = <VM's IP address>:<VM's port number>\n#8080 = 172.16.3.128:80\n# VAGRANT-BEGIN: /home/richard/projects/envimate/ubuntu-xenial/test2/.vagrant/machines/default/vmware_workstation/00a4a03f-bcce-4821-9bc5-a539c4ba5952/vagrant-ready-docker-vmware-fusion.vmx\n2222 = 192.168.170.128:22\n# VAGRANT-END: /home/richard/projects/envimate/ubuntu-xenial/test2/.vagrant/machines/default/vmware_workstation/00a4a03f-bcce-4821-9bc5-a539c4ba5952/vagrant-ready-docker-vmware-fusion.vmx\n"
DEBUG nat_conf: Read section: incomingudp
DEBUG nat_conf: -- Value: "[incomingudp]\n\n# UDP port forwarding example\n#6000 = 172.16.3.0:6001\n\n"
DEBUG networking_file: saving device: {:name=>"vmnet1", :number=>1, :dhcp=>"yes", :hostonly_netmask=>"255.255.255.0", :hostonly_subnet=>"192.168.107.0", :nat=>nil, :virtual_adapter=>"yes"}
DEBUG networking_file: saving device: {:name=>"vmnet8", :number=>8, :dhcp=>"yes", :hostonly_netmask=>"255.255.255.0", :hostonly_subnet=>"192.168.170.0", :nat=>"yes", :virtual_adapter=>"yes"}
DEBUG networking_file: saving port forward set: {:device=>"8", :proto=>:tcp, :host_port=>2222, :guest_ip=>"192.168.170.128", :guest_port=>22}
INFO subprocess: Starting process: ["/home/richard/.vagrant.d/gems/2.4.2/gems/vagrant-vmware-workstation-5.0.4/bin/vagrant_vmware_desktop_sudo_helper_linux_amd64", "netconf", "-network=/tmp/vagrant20171208-24476-ov3iqs"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
INFO subprocess: Starting process: ["/home/richard/.vagrant.d/gems/2.4.2/gems/vagrant-vmware-workstation-5.0.4/bin/vagrant_vmware_desktop_sudo_helper_linux_amd64", "vmnet", "-status"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Bridge networking on vmnet0 is running
DHCP service on vmnet1 is running
Hostonly virtual adapter on vmnet1 is enabled
DHCP service on vmnet8 is running
NAT service on vmnet8 is running
Hostonly virtual adapter on vmnet8 is enabled
All the services configured on all the networks are running
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
INFO vmware_driver: Stopping VMware network interfaces...
INFO subprocess: Starting process: ["/home/richard/.vagrant.d/gems/2.4.2/gems/vagrant-vmware-workstation-5.0.4/bin/vagrant_vmware_desktop_sudo_helper_linux_amd64", "vmnet", "-stop"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Stopped Bridged networking on vmnet0
Stopped DHCP service on vmnet1
Disabled hostonly virtual adapter on vmnet1
Stopped DHCP service on vmnet8
Stopped NAT service on vmnet8
Disabled hostonly virtual adapter on vmnet8
Stopped all configured services on all networks
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 31999
DEBUG subprocess: Exit status: 0
INFO vmware_driver: Reconfiguring VMware network interfaces...
INFO subprocess: Starting process: ["/home/richard/.vagrant.d/gems/2.4.2/gems/vagrant-vmware-workstation-5.0.4/bin/vagrant_vmware_desktop_sudo_helper_linux_amd64", "vmnet", "-migrate=/tmp/vagrant-vmware-network-settings20171208-24476-tyq2bc"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Stopped all configured services on all networks
Restored network settings
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
INFO vmware_driver: Starting VMware network interfaces...
INFO subprocess: Starting process: ["/home/richard/.vagrant.d/gems/2.4.2/gems/vagrant-vmware-workstation-5.0.4/bin/vagrant_vmware_desktop_sudo_helper_linux_amd64", "vmnet", "-start"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Started Bridge networking on vmnet0
Enabled hostonly virtual adapter on vmnet1
Started DHCP service on vmnet1
Started NAT service on vmnet8
Enabled hostonly virtual adapter on vmnet8
Started DHCP service on vmnet8
Started all configured services on all networks
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 31999
DEBUG subprocess: Exit status: 0
INFO warden: Calling IN action: #<Vagrant::Action::Builtin::WaitForCommunicator:0x00000000028123e0>
INFO interface: output: Waiting for machine to boot. This may take a few minutes...
INFO subprocess: Starting process: ["/usr/local/bin/vmrun", "list"]
INFO interface: output: ==> default: Waiting for machine to boot. This may take a few minutes...
INFO subprocess: Command not in installer, restoring original environment...
==> default: Waiting for machine to boot. This may take a few minutes...
Sadly, "Waiting for machine to boot. This may take a few minutes..."
is a lie - it will just wait until it times out. That is because the series of commands:
INFO subprocess: Starting process: ["/home/richard/.vagrant.d/gems/2.4.2/gems/vagrant-vmware-workstation-5.0.4/bin/vagrant_vmware_desktop_sudo_helper_linux_amd64", "vmnet", "-stop"]
...
INFO subprocess: Starting process: ["/home/richard/.vagrant.d/gems/2.4.2/gems/vagrant-vmware-workstation-5.0.4/bin/vagrant_vmware_desktop_sudo_helper_linux_amd64", "vmnet", "-migrate=/tmp/vagrant-vmware-network-settings20171208-24476-tyq2bc"]
...
INFO subprocess: Starting process: ["/home/richard/.vagrant.d/gems/2.4.2/gems/vagrant-vmware-workstation-5.0.4/bin/vagrant_vmware_desktop_sudo_helper_linux_amd64", "vmnet", "-start"]
leaves the content of /etc/vmware/vmnet8/dhcpd/dhcpd.leases
without any lease entry:
# All times in this file are in UTC (GMT), not your local timezone. This is
# not a bug, so please don't ask about it. There is no portable way to
# store leases in the local timezone, so please don't request this as a
# feature. If this is inconvenient or confusing to you, we sincerely
# apologize. Seriously, though - don't ask.
# The format of this file is documented in the dhcpd.leases(5) manual page.
so of course, vagrant debugs the following loop until it dies:
DEBUG subprocess: Selecting on IO
INFO subprocess: Starting process: ["/usr/local/bin/vmrun", "list"]
INFO subprocess: Command not in installer, restoring original environment...
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: Total running VMs: 1
/home/richard/projects/envimate/ubuntu-xenial/test2/.vagrant/machines/default/vmware_workstation/00a4a03f-bcce-4821-9bc5-a539c4ba5952/vagrant-ready-docker-vmware-fusion.vmx
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG vmware: VM state requested. Current state: running
DEBUG subprocess: stdout: Total running VMs: 1
/home/richard/projects/envimate/ubuntu-xenial/test2/.vagrant/machines/default/vmware_workstation/00a4a03f-bcce-4821-9bc5-a539c4ba5952/vagrant-ready-docker-vmware-fusion.vmx
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG vmware: VM state requested. Current state: running
INFO vmware_driver: Reading an accessible IP for machine...
INFO vmware_driver: Skipping vmrun getGuestIPAddress as requested by config.
INFO vmware_driver: Reading VMX data...
DEBUG vmware_driver: - .encoding = UTF-8
...
DEBUG vmware_driver: - ethernet0.pcislotnumber = 32
DEBUG vmware_driver: - ethernet0.generatedaddress = 00:0c:29:12:27:06
DEBUG vmware_driver: - ethernet0.generatedaddressoffset = 0
DEBUG vmware_driver: Trying to get MAC address for ethernet0
DEBUG vmware_driver: No explicitly set MAC, looking or auto-generated one...
DEBUG vmware_driver: -- MAC: 00:0c:29:12:27:06
INFO vmware_driver: Reading DHCP lease for '00:0c:29:12:27:06' on 'vmnet8'
INFO vmware_driver: DHCP leases file: /etc/vmware/vmnet8/dhcpd/dhcpd.leases
INFO dhcp_lease_file: Initialized DHCP helper: /etc/vmware/vmnet8/dhcpd/dhcpd.leases
INFO dhcp_lease_file: Looking for IP for MAC: 00:0c:29:12:27:06
INFO dhcp_lease_file: - IP:
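The empty `IP:` result above can be illustrated with a rough sketch of the MAC-to-IP lookup the log is performing against `dhcpd.leases` (hypothetical code, not the plugin's actual implementation):

```ruby
# Rough sketch (hypothetical, not the plugin's real code) of a
# MAC-to-IP lookup over dhcpd.leases. Lease entries normally look like:
#   lease 192.168.110.128 {
#     hardware ethernet 00:0c:29:12:27:06;
#   }
def ip_for_mac(leases_text, mac)
  found = nil
  current_ip = nil
  leases_text.each_line do |line|
    if line =~ /^lease\s+(\d{1,3}(?:\.\d{1,3}){3})\s*\{/
      current_ip = Regexp.last_match(1)
    elsif line =~ /hardware\s+ethernet\s+([0-9a-fA-F:]+);/
      found = current_ip if Regexp.last_match(1).casecmp?(mac)
    end
  end
  found
end

sample = "lease 192.168.110.128 {\n  hardware ethernet 00:0c:29:12:27:06;\n}\n"
ip_for_mac(sample, "00:0c:29:12:27:06")  # => "192.168.110.128"

# With the empty leases file from the bug, the lookup yields nil,
# which is why Vagrant keeps polling until it hits its boot timeout.
ip_for_mac("", "00:0c:29:12:27:06")      # => nil
```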
That is a problem I find hard to quickfix and at this point I'll just give up on vagrant + vmware. Vagrant simply does not support VMware Workstation 14 - even with GUI mode enabled. That investigation took hours I should have spent trying out https://github.com/vagrant-libvirt/vagrant-libvirt; at least it looks like they do regression testing. So long...
I will say VMware support was pretty good at responding, although we never solved the issue. They also did not agree that the plugin and VMware are made by the same company. Basically they said the problem is the plugin not supporting VMware 14. This is ridiculous to me, and as my 30-day support was coming to an end I returned both the plugin and VMware.
@staenker nice work with the debugging and work-around there!
That being said, I expect the vendor of a plugin that integrates two products to do regression testing on new version releases of both of them. That is obviously not being done.
Yeah, it seems we paid $79 (was it?) for a commercial product that the company selling it doesn't bother testing at all. It's kind of amazing.
After the issue has been open for a month - even though it's really a 2h job at most to fix - I do think using these products in a business setting is a mistake: imagine a customer waiting for a product release that is blocked by this bug.
To be fair, the research might have taken a day or two, or something?
But I guess a month passing without Hashi doing or saying anything about this just confirms their complete lack of fucks given about the Vagrant VMWare plugin (and their customers).
I wonder if I should ask/demand my money back. It probably wouldn't be worth the trouble. -Maybe that's their business model? Charge for "nothing" and hope people just let it be? :P
so of course, vagrant debugs the following loop until it dies
Wow. I ran into this kind of stuff too! Any build I run with Packer and VMWare seems to require manually shutting down the VM (and VMWare after it), so that Packer gets out of a loop and completes the build.
Yet another red flag there.. and I've seen a couple of other GitHub issues that also constitute red flags.
That is a problem I find hard to quickfix and at this point I'll just give up on vagrant + vmware.
Yeah, I'm moving on too. I suppose what's going on with Packer and Vagrant can be taken to mean that none of Hashi's other tools would be worth using either.
Giving a fuck would show across all of their products.
@NYCJacob
Basically they said the problem is the plugin not supporting vmware14. This is ridiculous to me
So did you think VMWare was wrong, or maybe even lying about the problem being with Hashi's plugin?
I've more or less given up on the vmware plugin as well at this point. I've started migrating things over to vagrant-lxc.
@peterlindstrom234 I really don't know.
@NYCJacob
Right, I was just wondering about what you considered "ridiculous" :)
@xraj
I've started migrating things over to vagrant-lxc
I'm trying to get started with Guix and GuixSD. It's a bit of a risky endeavour, but the payoff could be great!
@peterlindstrom234 the fact that they didn't seem to understand VMware and vagrant are both made by Hashicorp.
@NYCJacob That is also news to me. I always thought VMware is nearly as old as @mitchellh and thus can't have been created by him...?
@NYCJacob confused, or trolling? :p
Hi there,
A new VMware plugin has been released today. You can read more about it here:
https://www.hashicorp.com/blog/introducing-the-vagrant-vmware-desktop-plugin
If you still encounter this issue after upgrading, please open a new issue and I will investigate further.
Cheers!
If you still encounter this issue after upgrading, please open a new issue and I will investigate further.
So did you fix this or not? That sounds like you've just done some stuff that may or may not fix a serious problem with your commercial software offering.
The blog post talks about "critical security vulnerabilities" being fixed. It's concerning that Vagrant (or this plugin) has those too.
But what about critical usability issues that just about anyone will run into right away? Have any of those been fixed?
I have reproduced the environment (ubuntu 16.04) with vmware workstation 14 and do not see the behavior described. If the same behavior is still seen, please open a new issue and I will investigate further to determine what state is different between my reproduction environment and the environment displaying the issue.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Please note that the Vagrant issue tracker is reserved for bug reports and enhancements. For general usage questions, please use the Vagrant mailing list: https://groups.google.com/forum/#!forum/vagrant-up. Thank you!
Vagrant version
Vagrant 2.0.0
Host operating system
Ubuntu 16.04 LTS
Guest operating system
Ubuntu 12.04 hashicorp/precise64 box
Vagrantfile
Please note, if you are using Homestead or a different Vagrantfile format, we may be unable to assist with your issue. Try to reproduce the issue using a vanilla Vagrantfile first.
Debug output
Provide a link to a GitHub Gist containing the complete debug output: https://www.vagrantup.com/docs/other/debugging.html. The debug output should be very long. Do NOT paste the debug output in the issue, just paste the link to the Gist.
https://gist.github.com/NYCJacob/fb689662e34a945dc744fbd340199596
Expected behavior
What should have happened? A VM should have been created in VMware using the precise64 box.
Actual behavior
What actually happened? Vagrant timed out trying to start the VMware VM.
Steps to reproduce
References
Are there any other GitHub issues (open or closed) that should be linked here?
https://github.com/hashicorp/vagrant/issues/9141