rgl / esxi-vagrant

ESXi running in QEMU/KVM/libvirt wrapped in a vagrant environment

MAC Address configuration #1


abbbi commented 4 years ago

Moving this over from the vagrant-libvirt discussion:

I have only once attempted to use a packaged-up ESXi image with Terraform+libvirt, and I think I experienced the same issue. It seems like the MAC address used by the ESXi image is hard-coded somewhere in the ESXi configuration. That also meant I was unable to spin up multiple ESXi images on the same host because they battled each other for the DHCP leases.

I was able to solve this issue by resetting the ESXi configuration via the admin menu as one last manual step while packaging the image with packer. After a shutdown, that basically leaves you with a clean ESXi installation which defaults to DHCP without hard-coding the MAC address. As that step needs manual intervention, I never released my packer config for it.

I then found out there is a command-line tool to do the same, which you would have to call from your ks.cfg (probably by mis-using the firstboot command) or via ssh while provisioning with packer, see:

https://www.vm-help.com/esx40i/esxi-reset-system-configuration

In any case you must be sure not to boot the image again after you have reset the configuration; only that leaves it in a clean state.
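
For reference, the final packer shell provisioner could look roughly like this (an untested sketch: the reset tool's path and flags are taken from that article and may differ on newer ESXi versions, so double-check them there):

    # final packer shell provisioner, run over ssh (untested sketch).
    # reset the system configuration, the command-line equivalent of the
    # DCUI "Reset System Configuration" option; verify the exact tool
    # name and flags for your ESXi version against the article above.
    /sbin/firmwareConfig.sh --reset
    # power off right away; booting again would persist a new configuration.
    poweroff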

Of course, any other configuration is then lost too (like, for example, your SSH configuration for password auth). By default the image then has an empty password, which got me into great trouble moving on with ansible, which I then used to reconfigure the ESXi instances during deploy. I have not yet tried whether this works with packer in some way, because I don't know if the image is reachable by ssh by default (it could probably be enabled using ks.cfg).
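
For the ks.cfg route, something along these lines should give the image a known root password and SSH access out of the box (a sketch, untested here; vagrant is just a placeholder password):

    # hypothetical ks.cfg fragment: bake in a known root password.
    rootpw vagrant

    %firstboot --interpreter=busybox
    # enable and start the ssh service so packer/ansible can connect.
    vim-cmd hostsvc/enable_ssh
    vim-cmd hostsvc/start_ssh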

Looking forward to seeing your images working, because I would benefit greatly from them :)

Just as a side note: ESXi checks for CPU compatibility during bootup, so it might well be that your vagrant image works on one machine and doesn't on another. It basically depends on the CPU emulation by libvirt, and the booting image may well tell the end user that the CPU is not supported. Without host-passthrough I don't think it will work well.

All in all, packaging up ESXi to run with libvirt is a real pain in the ass, and I still stick to the version I once built.

rgl commented 4 years ago

I was finally able to do it at https://github.com/rgl/esxi-vagrant/commit/545a8c18dad13082a4e80fd114b5a7fc3f8da9ea! :-)

I had to use the local.sh script to change the vmk0 MAC address to the vmnic0 address (and restart the vmk0 interface), and it now works.
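
For anyone curious before reading the commit, the gist of that local.sh change is roughly this (a sketch: the awk field number and the --mac-address flag are assumptions that may vary across ESXi versions, so refer to the commit for the real script):

    # /etc/rc.local.d/local.sh fragment (sketch): copy vmnic0's mac to vmk0.
    # assumes the mac address is column 8 of `esxcli network nic list`.
    vmnic0_mac=$(esxcli network nic list | awk '/^vmnic0 /{print $8}')
    esxcli network ip interface set --interface-name vmk0 --mac-address "$vmnic0_mac"
    # bounce vmk0 so the new mac takes effect and dhcp gets a fresh lease.
    esxcli network ip interface set --interface-name vmk0 --enabled false
    esxcli network ip interface set --interface-name vmk0 --enabled true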

Two VMs running:

(screenshot: the two ESXi VMs running)

Now I have to find a way to make vagrant actually able to execute scripts in this VM, as it's now failing:

==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Waiting for domain to get an IP address...
==> default: Waiting for SSH to become available...
==> default: Configuring and enabling network interfaces...
==> default: Running provisioner: shell...
    default: Running: inline script
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

But that's for another time!

Anyways, feel free to try it :-)

abbbi commented 4 years ago

Very nice! The inline script error might well be because ESXi comes with BusyBox, which does not include bash, but vagrant by default wants to execute shell scripts using bash. There is a configuration directive that allows setting the shell used during provisioning; ash is probably the right one here.

config.ssh.shell = 'ash'

in the Vagrantfile might help. Another issue might be that it tries to execute using sudo, which probably doesn't exist either, so the provision script setting in the Vagrantfile should use privileged: false and rather be executed as root directly, if root login is enabled.
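
Concretely, something like this in the Vagrantfile (a sketch assuming root login over SSH is enabled on the image):

    Vagrant.configure('2') do |config|
      # ESXi ships BusyBox, which has no bash; point vagrant at ash instead.
      config.ssh.shell = 'ash'
      # log in as root directly since ESXi has no sudo.
      config.ssh.username = 'root'
      # run provisioners without sudo for the same reason.
      config.vm.provision 'shell', privileged: false, inline: 'uname -a'
    end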

rgl commented 4 years ago

The shell was already being set to /bin/sh in Vagrantfile.template.

I also found out about https://github.com/hashicorp/vagrant/tree/master/plugins/guests/esxi and integrated it at https://github.com/rgl/esxi-vagrant/commit/ad7d7739dbeccf26e08113f40dc10c6109bc5c5d. We can now use SSH keys and set the hostname.

In theory we should also be able to add additional networks, but I haven't tested that yet...
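
In Vagrantfile terms, that should boil down to something like this (the network line is the untested part; the DHCP private network is just an example):

    # sketch: exercise the esxi guest plugin capabilities.
    config.vm.hostname = 'esxi0'                       # sets the hostname.
    config.vm.network 'private_network', type: 'dhcp'  # untested on esxi.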

What I did test was using NFS shared folders (config.vm.synced_folder '.', '/vagrant', type: 'nfs'), but for some reason vagrant detects my host address as 127.0.0.1, and because of that ESXi fails to connect to the host.

Oh, and thanks for all the PRs that you've sent! This is now working more smoothly!