jeroenjacobs79 opened 7 years ago
The message is our generic error message, having nothing to do with LVM specifically. The issue here is that it is not recognizing the guest OS for some reason and rejecting it.
Either something is not set correctly in the VM shell or the OS is returning values we are not interpreting correctly.
If you can provide your distro details along with the output shown below, I will create an internal PR to investigate the issue. Please rest assured that this likely has nothing to do with your LVM configuration.
cat /etc/issue
\S
Kernel \r on an \m
[centos@localhost ~]$ uname -a
Linux localhost.localdomain 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[centos@localhost ~]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
Truth is, I have another CentOS7 template where customizations run fine, but there are some version differences between the two.
On the broken one:
[centos@localhost ~]$ yum list installed | grep open-vm-tools
open-vm-tools.x86_64 10.0.5-4.el7_3 @updates
open-vm-tools-desktop.x86_64 10.0.5-4.el7_3 @updates
On the working one:
[jeroen@app01 ~]$ yum list installed | grep open-vm-tools
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
open-vm-tools.x86_64 10.0.5-2.el7 @base
The only difference is the machine version. The working one is version 13, while the one with the issue is version 12. Can that have anything to do with it?
I did a few more tests, but I still can't explain why it works in one template, and not in the other.
It's not related to the machine version. I converted the non-working template to HW version 13, but I still get the same result after creating a new VM from that template.
It's not related to the open-vm-tools version either. Even when I downgrade the version in the template to 10.0.5-2.el7, the problem persists.
I'm at a total loss here. The only thing I can do now is show you how I build that template, and hope that someone is able to reproduce the problem.
I disabled the LVM stuff again and went for a traditional disk setup. I'm still getting the same error, despite the fact that no LVM is being used.
This is the kickstart file I use:
#version=DEVEL
# System authorization information
auth --enableshadow --passalgo=sha512
# Use network installation media
url --url="http://mirror.centos.org/centos/7/os/x86_64"
# Use text install
text
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=sda
# Keyboard layouts
keyboard --vckeymap=be --xlayouts='be'
# System language
lang en_US.UTF-8
# Accept Eula
eula --agreed
#reboot after setup
reboot
# Network information
network --bootproto=dhcp --noipv6 --activate
network --hostname=localhost.localdomain
# Root password
rootpw --iscrypted $6$QYRPcrqL2ZKFABGT$6eT.NBiW/iuyLhdiUjD73FvhjkslSsIQs0uUhwBY1BgJcB0sjblikNmlpTJ/wThP2JGpwuDc/6fy1QVOFUa600
# System services
services --enabled="chronyd"
# System timezone
timezone Europe/Brussels --isUtc
# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda
# Partition clearing information
clearpart --none --initlabel
part /boot --fstype="xfs" --ondisk=sda --size=512
part / --fstype="xfs" --ondisk=sda --size=10240
part /home --fstype="xfs" --ondisk=sda --size=5120
part /var --fstype="xfs" --ondisk=sda --size=15360
part swap --fstype="swap" --ondisk=sda --size=4096
%packages
@core
@platform-vmware --nodefaults
net-tools
nano
deltarpm
wget
bash-completion
yum-plugin-remove-with-leaves
yum-utils
libselinux-python
open-vm-tools
lvm2
perl
chrony
kexec-tools
%end
%addon com_redhat_kdump --enable --reserve-mb='auto'
%end
%anaconda
pwpolicy root --minlen=6 --minquality=50 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=50 --notstrict --nochanges --notempty
pwpolicy luks --minlen=6 --minquality=50 --notstrict --nochanges --notempty
%end
After that I run a clean-up script:
# Stop logging services
echo "Stop logging services"
/sbin/service rsyslog stop
/sbin/service auditd stop
# Remove old kernels
echo "Remove old kernels"
package-cleanup -y --oldkernels --count=1
# Clean out yum
echo "Clean out yum"
yum clean all
# Force the logs to rotate & remove old logs we don’t need
echo "clean up logs"
/usr/sbin/logrotate /etc/logrotate.conf --force
rm -f /var/log/*-???????? /var/log/*.gz
rm -f /var/log/dmesg.old
rm -rf /var/log/anaconda
# remove udev persistent rules
echo "cleanup udev rules"
rm -f /etc/udev/rules.d/70*
# Truncate the audit logs (and other logs we want to keep placeholders for)
echo "truncate audit logs"
cat /dev/null > /var/log/audit/audit.log
cat /dev/null > /var/log/wtmp
cat /dev/null > /var/log/lastlog
cat /dev/null > /var/log/grubby
# Remove the traces of the template MAC address and UUIDs
echo "remove MAC address and UUIDs"
sed -i '/^\(HWADDR\|UUID\)=/d' /etc/sysconfig/network-scripts/ifcfg-e*
# enable network interface onboot
echo "Enable network on boot"
sed -i -e 's@^ONBOOT="no@ONBOOT="yes@' /etc/sysconfig/network-scripts/ifcfg-e*
# Clean /tmp out
echo "clean /tmp"
rm -rf /tmp/*
rm -rf /var/tmp/*
# Remove the SSH host keys
echo "clean ssh host keys"
rm -f /etc/ssh/ssh_host_*
# Remove the root user’s SSH history
echo "clean root folder"
rm -rf ~root/.ssh/
rm -f ~root/anaconda-ks.cfg
# disable root login and password
echo "disable root login"
passwd -d root
passwd -l root
# Remove the root user’s shell history and poweroff
echo "clear root history"
history -cw
/sbin/halt -h -p
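The two sed edits in the cleanup script above can be sanity-checked against a throwaway ifcfg-style file. A small demo (the file contents below, including the MAC and UUID values, are invented purely for illustration):

```shell
# Create a throwaway ifcfg-style file; HWADDR/UUID values are made up.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
TYPE="Ethernet"
HWADDR=00:50:56:a1:d7:a2
UUID=abcdef00-1111-2222-3333-444455556666
ONBOOT="no"
BOOTPROTO="dhcp"
EOF

# Same expressions as in the cleanup script (GNU sed):
# drop the template's MAC/UUID lines, then flip ONBOOT to yes.
sed -i '/^\(HWADDR\|UUID\)=/d' "$tmp"
sed -i -e 's@^ONBOOT="no@ONBOOT="yes@' "$tmp"

cat "$tmp"
```

After the edits, only the TYPE, ONBOOT="yes" and BOOTPROTO lines should remain, which is what a freshly cloned VM needs so it doesn't inherit the template's MAC address.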
This machine is created on a Linux Machine (with vmware player) and then converted to vSphere via ovftools. Then, on the vSphere server I upgrade the hardware version to 13 and convert to template. The resulting vmtx file on the esx host looks like this:
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "13"
nvram = "centos7_base_1.1.nvram"
pciBridge0.present = "TRUE"
svga.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
vmci0.present = "TRUE"
hpet0.present = "TRUE"
floppy0.present = "FALSE"
memSize = "1024"
powerType.suspend = "soft"
tools.upgrade.policy = "upgradeAtPowerCycle"
scsi0.virtualDev = "lsilogic"
scsi0.present = "TRUE"
ide1:0.startConnected = "FALSE"
ide1:0.deviceType = "atapi-cdrom"
ide1:0.fileName = "CD/DVD drive 0"
ide1:0.present = "TRUE"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:0.fileName = "centos7_base_1.1.vmdk"
scsi0:0.present = "TRUE"
ethernet0.virtualDev = "e1000"
ethernet0.networkName = "VLAN-DEV"
ethernet0.addressType = "vpx"
ethernet0.generatedAddress = "00:50:56:a1:d7:a2"
ethernet0.wakeOnPcktRcv = "FALSE"
ethernet0.present = "TRUE"
displayName = "centos7_base_1.1"
guestOS = "rhel7-64"
toolScripts.afterPowerOn = "TRUE"
toolScripts.afterResume = "TRUE"
toolScripts.beforeSuspend = "TRUE"
toolScripts.beforePowerOff = "TRUE"
tools.syncTime = "FALSE"
uuid.bios = "42 21 e0 a2 37 59 ed 32-69 0a 70 9a c4 4a 55 62"
vc.uuid = "50 21 77 1a 93 83 ba 91-37 ca df 1f a4 b1 de 5e"
migrate.hostLog = "centos7_base_1.1-624cbdd1.hlog"
This is the vmtx file of my WORKING template. This machine was created on the esx host itself, and installed interactively. After that, I ran the cleanup script and converted to template.
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "13"
nvram = "CENTOS7_INIT.nvram"
pciBridge0.present = "TRUE"
svga.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
vmci0.present = "TRUE"
hpet0.present = "TRUE"
svga.vramSize = "8388608"
numvcpus = "2"
memSize = "2048"
sched.cpu.units = "mhz"
sched.cpu.affinity = "all"
powerType.powerOff = "default"
powerType.suspend = "default"
powerType.reset = "default"
scsi0.virtualDev = "pvscsi"
scsi0.present = "TRUE"
sata0.present = "TRUE"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:0.fileName = "CENTOS7_INIT.vmdk"
sched.scsi0:0.shares = "normal"
sched.scsi0:0.throughputCap = "off"
scsi0:0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "VLAN-SERVERS"
ethernet0.addressType = "vpx"
ethernet0.generatedAddress = "00:50:56:a1:c7:bc"
ethernet0.uptCompatibility = "TRUE"
ethernet0.present = "TRUE"
sata0:0.deviceType = "cdrom-image"
sata0:0.fileName = "/vmfs/volumes/101379d7-3853441d/CentOS-7-x86_64-DVD-1511.iso"
sata0:0.present = "TRUE"
floppy0.startConnected = "FALSE"
floppy0.clientDevice = "TRUE"
floppy0.fileName = "vmware-null-remote-floppy"
displayName = "CENTOS7_BASE"
guestOS = "rhel7-64"
toolScripts.afterPowerOn = "TRUE"
toolScripts.afterResume = "TRUE"
toolScripts.beforeSuspend = "TRUE"
toolScripts.beforePowerOff = "TRUE"
uuid.bios = "42 21 c6 94 ab ac 6b 31-f5 6d e7 dd 94 dd 3e 1d"
vc.uuid = "50 21 ef 11 03 01 e3 07-cd 31 62 2d 20 5f 6b 4b"
migrate.hostLog = "CENTOS7_INIT-28698827.hlog"
sched.cpu.min = "0"
sched.cpu.shares = "normal"
sched.mem.min = "0"
sched.mem.minSize = "0"
sched.mem.shares = "normal"
numa.autosize.vcpu.maxPerVirtualNode = "2"
numa.autosize.cookie = "20001"
sched.swap.derivedName = "/vmfs/volumes/b11c6460-b2f803ca/CENTOS7_INIT/CENTOS7_INIT-a3e995db.vswp"
scsi0:0.redo = ""
pciBridge0.pciSlotNumber = "17"
pciBridge4.pciSlotNumber = "21"
pciBridge5.pciSlotNumber = "22"
pciBridge6.pciSlotNumber = "23"
pciBridge7.pciSlotNumber = "24"
scsi0.pciSlotNumber = "160"
ethernet0.pciSlotNumber = "192"
vmci0.pciSlotNumber = "32"
sata0.pciSlotNumber = "33"
scsi0.sasWWID = "50 05 05 64 ab ac 6b 30"
vmci0.id = "-1797439971"
monitor.phys_bits_used = "43"
vmotion.checkpointFBSize = "8388608"
vmotion.checkpointSVGAPrimarySize = "8388608"
cleanShutdown = "TRUE"
softPowerOff = "TRUE"
svga.guestBackedPrimaryAware = "TRUE"
tools.syncTime = "FALSE"
As you can see, there are hardware differences (SCSI adapter) due to the fact that one template was created on VMware Player and the other directly on ESX. Can that explain why customizations fail?
Thanks for the detailed information.
It has been forwarded to the Guest Customization team to reproduce and diagnose the failure.
I already got a step further. When I convert the template to a machine again, fire it up, stop it, and convert it back to a template, the customizations work.
So my guess is that the issue only occurs when you create a VM in Workstation Player (not tested in Workstation) and use ovftool to transfer it to vSphere. The resulting VMware image is unable to use customizations. Once you have fired up the VM at least once on the ESX host itself, the problem goes away.
However, this still breaks my workflow, as my intention was to build base images and push them to vSphere automatically.
Interesting. While the initial steps only upgraded the VM hardware to version 13 before making the template, the second scenario involves powering on and shutting the VM down before creating the template. Any obvious differences in the .vmtx files?
In case it is needed to reproduce the situation in house, what versions of VM Player and ovftool are you using?
It should not make a difference, but what version & build of ESX is involved?
Thanks
Version info:
I'm using a not-quite-average setup, which might make it hard to reproduce the problem. I use Packer to build my machine on Linux (Packer calls VMware Player and ovftool to provision and deploy the machine to vSphere). I'll try to summarize:
After all of this, when I convert the resulting VM on vSphere to a template, I receive that error when I build a new machine based on that VM.
If I just start up the resulting machine on vSphere, run my cleanup script again, turn it off, and convert it to a template, I can build new machines based on that template.
Changing the HW version doesn't seem to make any difference, as far as I can tell.
So yeah, not your average setup :-)
Customization of the guest operating system 'rhel7_64Guest' is not supported in this
This issue happens when the VC that you use is old and doesn't support customization for the specified guest operating system, OR VMware Tools is not installed inside the guest.
In the client (web client or C# client), before deploying the VM, what does the 'summary' for the template show for 'tools status'? Is it 'not installed' or 'installed'?
Thanks
Guess what: when I use VMware Workstation (instead of VMware Player), I don't have the issue.
So I guess it's not an open-vm-tools issue, but a VMware Player issue.
Hi @jeroenjacobs1205 , is this with the same versions for WS/player and ovftool? Can you send the resulting vmx file?
In case it helps.
My experience is that in order for the customisation to work, VMware Tools must have run during template creation. I guess it is because of something done during the first execution of VMware Tools (open-vm-tools in this case), but I haven't gone deeper.
What I do in the Kickstart script of the template creation is launch the processes manually (systemd doesn't work because of chroot):
/usr/bin/VGAuthService -b
/usr/bin/vmtoolsd -b /tmp/vmtoolsd.pid
prior to the end of the post-install scripts (all chroot'ed).
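In kickstart terms, that approach would land in a %post section along these lines (a sketch based on the comment above, not a verbatim copy of the author's script; the trailing sleep is an assumption, added to give the daemons time to report to the host):

```shell
%post
# %post runs chroot'ed into the freshly installed system, so systemctl
# cannot start units here; launch the tools daemons directly instead.
/usr/bin/VGAuthService -b
/usr/bin/vmtoolsd -b /tmp/vmtoolsd.pid

# ... remaining post-install work ...

# Assumption: give vmtoolsd a moment to report to the host before the
# installer shuts the system down.
sleep 10
%end
```

The point is simply that the tools run at least once before the VM is turned into a template, which is what vSphere appears to key the "tools installed" status on.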
[EDIT] I forgot to mention the environment: RHEL7 over ESX6.
I had the exact same problem and discovered the solution. It seems that Terraform asks vSphere whether VMware Tools are installed (it doesn't power on the VM and check for the tools itself). In your case the VM has never been powered on in the vSphere environment (only in the Player), so vSphere has never seen VMware Tools running and reports them as not installed until you power on the VM and the VMware Tools service starts.
I spent a lot of time understanding this. Maybe the error message could be more explicit?
I have a similar situation. I create a VM for Debian 9 using Packer + the vSphere API (ISO + preseed); the VM install works fine, and the open-vm-tools package is installed and recognised by vSphere.
However, after stopping the VM, converting it to a template, and launching a VM from the template, customizations are refused (the dreaded generic error message listed at the top of this thread).
The VM is created and used on ESX 6.5, it has the correct HW version 13, and guest OS = Linux/Debian8_64 (Debian9_64 has also been tried).
So it does not seem like a mismatch between ESX version and guest OS, or virtual HW version (and such mismatches seem to be the main reason for the issues listed above).
Any tips on how to dig deeper?
Found a fix: use guest_os_type=ubuntu64Guest (even with Debian)
For anyone looking for the correct guest ID for the operating system type. For a full list of possible values, see https://pubs.vmware.com/vsphere-6-5/topic/com.vmware.wssdk.apiref.doc/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html
It seems this is related only to newly imported OVAs. I spent days troubleshooting this. We have ESX 6.7U2 and I'm trying to import the RHCOS OVA (rhel7/64-bit). The VM's guest_os_type is right, and everything should be supported per the VMware compatibility table. However, vCenter reports no VMware Tools installed. Once the OVA is powered on and off, vCenter recognizes that VMware Tools are installed, and from then on the customization works perfectly fine. It is weird to me... I consider this a workaround. Any ideas how to deal with it?
Changing the Guest OS type from Debian10 to Ubuntu Linux also works for me. After the VM starts, the tools seem to recognize that it is in fact Debian, but the VM setting stays as "Ubuntu Linux (64 bit)"
For anyone looking for the correct guest ID for the operating system type. For a full list of possible values, see https://pubs.vmware.com/vsphere-6-5/topic/com.vmware.wssdk.apiref.doc/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html
that link is down. try this one instead: https://vdc-download.vmware.com/vmwb-repository/dcr-public/da47f910-60ac-438b-8b9b-6122f4d14524/16b7274a-bf8b-4b4c-a05e-746f2aa93c8c/doc/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html
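One way to apply that guest-ID workaround offline is to rewrite the guestOS key in the template's .vmx/.vmtx before registering it. A minimal sketch (the stub file below is invented for the demo; on a real template you would edit the existing file in place):

```shell
# Invented .vmx stub with a Debian guest ID, standing in for a real
# template file.
cat > demo.vmx <<'EOF'
config.version = "8"
guestOS = "debian10-64"
EOF

# Flip the guest ID to the Ubuntu identifier that, per the comments
# above, lets customization through. Note that .vmx files use the
# short form ("ubuntu-64"), while the vSphere API uses "ubuntu64Guest".
sed -i 's/^guestOS = .*/guestOS = "ubuntu-64"/' demo.vmx
cat demo.vmx
```

After this the VM summary will report "Ubuntu Linux (64-bit)" even though the guest is actually Debian, as noted in the comment above.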
For those running into this with Ubuntu / Debian deployments, it may be due to a field missing in the .vmdk file: ddb.toolsVersion
It's unclear to me if this was a regression introduced in ESX / vCenter, but the Ubuntu fix was to modify the helper tool in the livecd-rootfs package that Canonical uses to create Ubuntu vmdks. It now adds this line to the vmdk:
ddb.toolsVersion = "2147483647"
With that line in place, before the first boot vCenter will report:
VMware Tools: Not running, version:2147483647 (Guest Managed)
instead of:
VMware Tools: Not running, not installed
With this Customizations can be applied even without booting the VM first.
After the first boot, ddb.toolsVersion is automatically updated to the actual version of open-vm-tools installed in the guest, such as:
VMware Tools: Running, version:11269 (Guest Managed)
For more info, see: https://bugs.launchpad.net/ubuntu/+source/open-vm-tools/+bug/1893898
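For images you build yourself, the same fix can be applied by appending the key to the VMDK descriptor when it is missing. A sketch (the descriptor contents below are invented; this only works on plain-text descriptor files, not on binary stream-optimized vmdks):

```shell
# Invented minimal descriptor standing in for a real one.
cat > demo-descriptor.vmdk <<'EOF'
# Disk DescriptorFile
version=1
ddb.virtualHWVersion = "13"
EOF

# Append the "Guest Managed" marker only if no ddb.toolsVersion line
# exists yet, so the edit stays idempotent.
if ! grep -q '^ddb\.toolsVersion' demo-descriptor.vmdk; then
    printf 'ddb.toolsVersion = "2147483647"\n' >> demo-descriptor.vmdk
fi
cat demo-descriptor.vmdk
```

With the marker present, vCenter reports tools as "Guest Managed" before first boot, which is enough for customization to be accepted.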
@pzakha, @Boran I've been seeing this problem using Debian Buster (and its open-vm-tools version 2:10.3.10-1+deb10u2), and just modified my VM base image to pin open-vm-tools to buster-backports, so I'm now getting open-vm-tools version 2:11.2.5-1~bpo10+1. Looking at these guests with the updated open-vm-tools in vCenter, their "VMware Tools" section now says:
Not running, version:11333 (Guest Managed)
even when the guest is powered off. Before, it only ever showed an indication that VMware Tools were installed while the machine was powered on.
I'm not yet sure if this change will improve my experience making customizations to Debian Buster-based vSphere guests. I'm using the ansible community.vmware collection to make changes to guests, and have been having lots of trouble using it to create, clone, and modify my Debian Buster-based guests.
FWIW, upgrading open-vm-tools didn't improve that situation. I ended up having to simplify/limit what I'm doing with that module, and make all my guest OS changes (namely, changing a cloned machine's hostname) using "pure ansible", as described here.
So I have a CentOS7 based VM template, that uses LVM partitioning. I also have some customizations to set the IP address and hostname.
Imagine my surprise when I get this when applying my customization:
I'm not customizing anything disk-related, so I shouldn't get this error.
Is my only solution going back to traditional disk partitioning?