hashicorp / packer

Packer is a tool for creating identical machine images for multiple platforms from a single source configuration.
http://www.packer.io

Packer build for hyperv-iso fails with `Waiting for SSH` error. #5049

Closed samirprakash closed 4 years ago

samirprakash commented 7 years ago

BUG:

We are trying to create an Ubuntu Vagrant box using the hyperv-iso image type. We are stuck with the error "Waiting for SSH to become available". After a few minutes, it times out and the build fails.

mwhooker commented 7 years ago

please add debug log output by running packer with the environment variable PACKER_LOG=1 set
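For example (PACKER_LOG_PATH is optional and just writes the log to a file; "template.json" stands in for your template):

PACKER_LOG=1 PACKER_LOG_PATH=packer.log packer build template.json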

samirprakash commented 7 years ago

packerlog.txt

Attached detailed debug log.

taliesins commented 7 years ago

Are you seeing an IP address assigned to the VM? Is it able to download the preseed file or run updates?

If it gets an IP address, make sure the SSH server is up and running. Check that your user is configured for the SSH server. Check the firewall on the VM. Can you SSH to the VM?

Then check the firewall on the machine running Packer and anywhere in between. Windows Firewall has blocked access to Packer's HTTP server for me before.
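A couple of quick manual checks along those lines (all addresses, ports, and file names below are placeholders; take the real values from the Hyper-V console and the Packer log):

# from the host: can the guest's SSH port be reached at all?
ssh -v vagrant@192.168.1.50

# from the guest console: can the guest reach Packer's preseed HTTP server?
# (the host IP and port are printed in the Packer log, e.g. "Starting HTTP server on port 8638")
wget -O- http://192.168.1.10:8638/preseed.cfg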

samirprakash commented 7 years ago

IP address is not assigned to the VM. It is not able to run updates. It gets stuck waiting for the SSH connection to happen.

it-praktyk commented 7 years ago

I've found "Ubuntu install hangs on Hyper-V" - it's an old post, but maybe it helps...

samirprakash commented 7 years ago

There seems to be no solution to this, IMO, with the current configuration. The issue is not with Packer but rather with the provider implementation using Hyper-V.

Is there a working example somewhere within Packer?

whytoe commented 7 years ago

I am having this problem too.

OS: Windows 10
Image: Ubuntu 16.04.02
Configuration: same across both Windows machines
Packer version: 1.0.2 fails, 0.12.3 works

it-praktyk commented 7 years ago

I observe the same symptoms. What data can I gather and deliver to move the issue forward?

ladar commented 7 years ago

I've wrestled with this issue many times. You need to get the Hyper-V plugin running inside the VM during the install process or packer will never detect the IP and thus never connect. It's trickier than it sounds, especially if you only want to install the Hyper-V plugins when building Hyper-V boxes. I've managed to get it working on Debian, Ubuntu, Alpine, Oracle, CentOS, RHEL, Fedora, Arch, Gentoo and FreeBSD. See here. Which target are you going for?

As an aside, a major hurdle I've been having is the installer finishing, then rebooting, only it doesn't eject the install media, and boots from it again. That issue can also cause the symptom you're seeing. It would be nice if packer set up the machines with the hard disk higher in the boot priority, or if it auto-detected the reboot and ejected the media... as I never hit this issue on VMware, VirtualBox or QEMU.

ladar commented 7 years ago

For Ubuntu (I just noticed the JSON file above), make sure these packages get installed via your config:

d-i pkgsel/include string curl openssh-server sudo sed linux-tools-$(uname -r) linux-cloud-tools-$(uname -r) linux-cloud-tools-common

and you probably need to run this command (assuming you're trying to log in as root to provision the box):

d-i preseed/late_command string                                                   \
        sed -i -e "s/.*PermitRootLogin.*/PermitRootLogin yes/g" /target/etc/ssh/sshd_config
whytoe commented 7 years ago

This is amazing advice. @mwhooker, can this be added to the hyperv-iso documentation on packer.io to ensure success with this great tool and relieve frustration? :)

rickard-von-essen commented 7 years ago

Especially if you only want to install the Hyper-V plugins when building Hyper-V boxes.

There is a nifty tool for determining what hypervisor you are running on: virt-what. Example:
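A minimal sketch of how that check might look in a provisioning script, assuming virt-what is installed (it prints hyperv when running under Hyper-V; the apt-get line assumes an Ubuntu/Debian guest):

#!/bin/sh
# install the Hyper-V guest daemons only when actually running under Hyper-V
if virt-what | grep -q '^hyperv$'; then
    apt-get install --yes "linux-cloud-tools-$(uname -r)" linux-cloud-tools-common
fi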

You need to get the Hyper-V plugin running inside the VM during the install process or packer will never detect the IP and thus never connect.

If that is the case it should definitely be documented.

it-praktyk commented 7 years ago

Wow, thanks for all the answers.

There is a nifty tool for determining what hypervisor you are running on virt-what.

https://people.redhat.com/~rjones/virt-what/

ladar commented 7 years ago

I prefer dmidecode, as it has far fewer dependencies and is more generally available.

if [[ `dmidecode -s system-product-name` == "VirtualBox" ]]; then
    : # VirtualBox-specific setup goes here
fi
if [[ `dmidecode -s system-manufacturer` == "Microsoft Corporation" ]]; then
    : # Hyper-V-specific setup goes here
fi
if [[ `dmidecode -s system-product-name` == "VMware Virtual Platform" ]]; then
    : # VMware-specific setup goes here
fi
if [[ `dmidecode -s system-product-name` == "KVM" || `dmidecode -s system-manufacturer` == "QEMU" ]]; then
    : # KVM/QEMU-specific setup goes here
fi

Or for those situations where dmidecode and awk aren't available, such as during an automated install process, all you really need is dmesg and grep. For example, with Debian I use:

d-i preseed/late_command string                                                   \
        sed -i -e "s/.*PermitRootLogin.*/PermitRootLogin yes/g" /target/etc/ssh/sshd_config ; \
        dmesg | grep "Hypervisor detected: Microsoft HyperV" ; \
        if [ $? -eq 0 ]; then \
          chroot /target /bin/bash -c 'service ssh stop ; echo "deb http://deb.debian.org/debian jessie main" >> /etc/apt/sources.list ; apt-get update ; apt-get install hyperv-daemons' ; \
          eject /dev/cdrom ; \
        fi
it-praktyk commented 7 years ago

I tried to install Ubuntu 16.04 using:

The gist, which contains the configuration.

The build is based on https://github.com/geerlingguy/packer-ubuntu-1604

The YouTube video of the attempt.

You can see that the installation gets stuck without any information and doesn't finish correctly.

In the video, the wait for the timeout (about 40 minutes in total) was cut out between roughly 2'34" and 3'38".

ladar commented 7 years ago

@it-praktyk see my post above regarding an Ubuntu install on Hyper-V. You need to add the following to your pkgsel/include line:

linux-tools-$(uname -r) linux-cloud-tools-$(uname -r) linux-cloud-tools-common

That is the easiest way to get the Hyper-V daemons set up on Ubuntu during the install process, and it should solve your problem.

it-praktyk commented 7 years ago

Yes, I didn't mention it, but I tried that today as well.

Do you build images using Windows 10?

ladar commented 7 years ago

Yes.

ladar commented 7 years ago

The hard way to solve this problem is to open the virtual machine console using the Hyper-V Manager, wait until it reboots, and then log in via the console. Once there, install the Hyper-V daemons manually, and packer should connect via SSH within 1 or 2 minutes. Note, you might need to manually enable the daemons using systemctl (it varies between distros, and I don't know whether they are enabled by default on Ubuntu).
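On an Ubuntu guest that manual workaround might look roughly like this (package and service names are the Ubuntu ones; they differ on other distros):

# from the VM console, after logging in
sudo apt-get install --yes "linux-cloud-tools-$(uname -r)" linux-cloud-tools-common
# the KVP daemon is the one Packer relies on to read the guest IP
sudo systemctl enable --now hv-kvp-daemon.service
sudo systemctl enable --now hv-vss-daemon.service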

ladar commented 7 years ago

I should add that if the daemons are running and you still can't connect, then you need to manually confirm SSH is working properly... so from the console, run ifconfig to determine the IP and see if you can log in using the credentials specified in the packer JSON config. It's possible a setting in the sshd_config is blocking access. For example, password logins may be disabled, or direct root logins may be disabled.

If you can log in manually with the credentials in the JSON file, and you can confirm the Hyper-V daemons are running (KVP and VSS), and packer still isn't connecting, let us know.
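A quick way to check all of that from the console (a sketch; the grep pattern just covers the settings mentioned above):

# find the IP the guest actually has
ip addr show        # or: ifconfig

# dump the effective sshd configuration and look for settings that would block Packer
sudo sshd -T | grep -Ei 'permitrootlogin|passwordauthentication'

# confirm the Hyper-V daemons are up (process names vary slightly between distros)
ps aux | grep -i 'hv_\|hyperv'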

taliesins commented 7 years ago

I don't think this is a problem specifically related to Hyper-V. We need a topic about how to support OSes that don't have built-in drivers/support for the hypervisor you have selected to use.

I have run into the problem of ejecting the CD-ROM as well (installing pfSense). During an installation process there may be multiple reboots (looking at Windows here, with patches). The way to tackle that is to eject the CD from within the installation process of the OS.

Think of doing something like this:

"<wait10><wait10><wait10><wait10><wait10><wait10><wait10><wait10><wait10><wait10>",
"<wait><leftCtrlOn>c<leftCtrlOff>",
"<wait><enter>",
"<wait>clear<wait><enter>",
"<wait>cdcontrol eject && exit<wait><enter>",

For a real bastard of an install have a look at: https://github.com/taliesins/packer-baseboxes/blob/master/hyperv-pfsense-2.3.2.json

stuartluscombe commented 7 years ago

I was experiencing this same issue when trying to build RHEL 7.3 and Ubuntu.

In my case I found that I first had to ensure an external VM switch was already set up within Hyper-V, as Packer would only create an internal one. This got Ubuntu working OK, but for RHEL I additionally had to install the Microsoft LIS drivers from https://www.microsoft.com/en-us/download/details.aspx?id=51612, as the built-in ones didn't seem to work.
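For anyone else hitting this: creating the external switch up front is a one-liner in an elevated PowerShell session (the adapter name "Ethernet" is an assumption, substitute your physical NIC), and the switch name is then passed to Packer via the switch_name setting:

# bind an external switch to the physical NIC so guests get DHCP from the LAN
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true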

ladar commented 7 years ago

For RHEL 7.3 you need the following in your Kickstart file:


reboot --eject

%post

# Create the vagrant user account.
/usr/sbin/useradd vagrant
echo "vagrant" | passwd --stdin vagrant

# Make the future vagrant user a sudo master.
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
echo "vagrant        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers.d/vagrant
chmod 0440 /etc/sudoers.d/vagrant

VIRT=`dmesg | grep "Hypervisor detected" | awk -F': ' '{print $2}'`
if [[ $VIRT == "Microsoft HyperV" ]]; then
    mount /dev/cdrom /media
    cp /media/media.repo /etc/yum.repos.d/media.repo
    printf "enabled=1\n" >> /etc/yum.repos.d/media.repo
    printf "baseurl=file:///media/\n" >> /etc/yum.repos.d/media.repo

    yum --assumeyes install eject hyperv-daemons
    systemctl enable hypervkvpd.service
    systemctl enable hypervvssd.service

    rm --force /etc/yum.repos.d/media.repo
    umount /media/
fi

%end
ladar commented 7 years ago

I started watching this thread with the hope packer would get better at detecting Hyper-V guest IP addresses (like it does with other providers), but it appears nobody is working on that, so I'm going to mute this topic. As such, if anybody else needs help getting packer to work with a different distro, please message me directly.

wickedviking commented 7 years ago

These issues with Hyper-V are specifically related not just to the drivers being present, but to the daemons as well.

https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/supported-linux-and-freebsd-virtual-machines-for-hyper-v-on-windows

Most of the "popular" distros now include the required drivers; however, they do not, by default, include the daemons. Instructions for installing and enabling the daemons are documented on the distribution-specific pages linked at the bottom of that doc. Once the daemons are installed and running, they will report their IPs and you can WinRM, PowerShell, or SSH to your heart's content.
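For instance, once the box is up you can verify the daemons are actually running with something like this (service names differ between distro families):

systemctl status hv-kvp-daemon.service   # Debian/Ubuntu naming
systemctl status hypervkvpd.service      # RHEL/CentOS naming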

As the implementation is distro-specific, I agree this is not a packer problem, but it could very well be remedied by updating the hyperv-iso docs to direct users to the MS docs.

wickedviking commented 7 years ago

@ladar you can avoid all the mount madness if you force the network to be available in %post with

network --bootproto=dhcp --onboot=on --device=eth0

For some as-yet-undetermined reason, Hyper-V doesn't seem to initialize the network connection on its own during the installation; forcing it in the Kickstart with --onboot=on seems to do the trick. The --device flag may be unnecessary.

ladar commented 7 years ago

@wickedviking The mount in the snippet above is RHEL specific, and is required for RHEL installations because the network repos aren't accessible until you register the machine with the RHN. If the machine is registered, you are correct, those commands aren't needed. For example with my CentOS Kickstart config I pull in the packages via the network.

As for your suggestion above, I don't believe "pointing" at the MS docs is sufficient. The hard part isn't installing the drivers/daemons, as you're correct most distros include them. The hard part is getting Hyper-V builds to include the daemons during installation so that when the machine reboots, the provisioning process will execute automatically.

Notes on what's required for the various operating systems would be nice, but that would require quite a bit of work.

taliesins commented 7 years ago

I think I am going to leave this thread running. 50% of the issues people seem to have are related to this topic.

As far as I can tell there is nothing we can do from Packer's side.

sandersaares commented 7 years ago

To add some complication to the mix, if I install the Hyper-V packages onto an Ubuntu 16.04 guest, I see a difference in behavior between two hosts:

The second variation is a bit troublesome, as I cannot easily start it again from within the VM for obvious reasons. Yet Packer has no idea that anything is happening meanwhile, so there is no meaningful way to trigger it externally, either.

irab commented 7 years ago

No one has mentioned that you can just change the timeout with something like "ssh_timeout": "20m".

Also, on CentOS 7.3 and 7.4, just 'reboot --eject' in the Kickstart file by itself works for me to avoid booting from the ISO on reboot.
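For reference, the timeout is just another key in the builder block, e.g. (fragment only, other required keys omitted):

{
    "type": "hyperv-iso",
    "ssh_timeout": "20m"
}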

tomconte commented 7 years ago

@ladar did you ever get this to work for Alpine by any chance? I am stuck waiting for the SSH IP address too.

ladar commented 7 years ago

@tomconte yes, I got packer to build Hyper-V images for Alpine 3.5.2 and 3.6.2. Try:

vagrant init generic/alpine35

or

vagrant init generic/alpine36

At this point I'm building 19 distros for 4 different providers (including Hyper-V)... see:

https://app.vagrantup.com/generic

The last holdout was OpenBSD, which I didn't get working until about a month ago (when v6.2 was released).

I say this with the caveat that I'm currently only testing whether vagrant up works properly on the VirtualBox and libvirt providers. I haven't had time to script/run the Vagrant provisioning process on Hyper-V yet. I also haven't automated the testing process for the VMware images, as I don't have a spare license for the VMware plugin which I can dedicate to the build server. As such, your mileage may vary. I've noticed that sometimes packer will build the image using different virtual hardware than what vagrant automatically provisions, which is what has led to issues with some of the boxes.

ladar commented 7 years ago

@tomconte as I recall the Alpine magic was in the boot command. Try this bit of JSON:

{
    "type": "hyperv-iso",
    "name": "generic-alpine36-hyperv",
    "vm_name": "generic-alpine36-hyperv",
    "output_directory": "output/generic-alpine36-hyperv",
    "boot_wait": "30s",
    "boot_command": [
        "root<enter><wait>",
        "ifconfig eth0 up && udhcpc -i eth0<enter><wait>",
        "wget http://{{ .HTTPIP }}:{{ .HTTPPort }}/generic.alpine36.vagrant.cfg<enter><wait>",
        "sed -i -e \"/rc-service/d\" /sbin/setup-sshd<enter><wait>",
        "printf \"vagrant\\nvagrant\\ny\\n\" | setup-alpine -f generic.alpine36.vagrant.cfg && ",
        "mount /dev/sda3 /mnt && ",
        "echo 'PasswordAuthentication yes' >> /mnt/etc/ssh/sshd_config && ",
        "echo 'PermitRootLogin yes' >> /mnt/etc/ssh/sshd_config && ",
        "chroot /mnt apk add hvtools && chroot /mnt rc-update add hv_fcopy_daemon default && ",
        "chroot /mnt rc-update add hv_kvp_daemon default && chroot /mnt rc-update add hv_vss_daemon default && ",
        "umount /dev/loop0 && umount /dev/sr0 && eject /dev/cdrom && reboot<enter>"
    ],
    "disk_size": 32768,
    "ram_size": 2048,
    "cpu": 2,
    "http_directory": "http",
    "iso_url": "https://mirror.leaseweb.com/alpine/v3.6/releases/x86_64/alpine-virt-3.6.2-x86_64.iso",
    "iso_checksum": "92c80e151143da155fb99611ed8f0f3672fba4de228a85eb5f53bcb261bf4b0a",
    "iso_checksum_type": "sha256",
    "ssh_username": "root",
    "ssh_password": "vagrant",
    "ssh_port": 22,
    "ssh_timeout": "3600s",
    "shutdown_command": "/sbin/poweroff",
    "generation": 1,
    "skip_compaction": false,
    "enable_secure_boot": false,
    "enable_mac_spoofing": true,
    "enable_dynamic_memory": false,
    "guest_additions_mode": "disable",
    "enable_virtualization_extensions": false
}
it-praktyk commented 6 years ago

@ladar, where can I find the JSON files used to build the generic boxes?

Thank you in advance.

mcandre commented 6 years ago

Dang, I wish Packer would do a better job helping Hyper-V users get set up for HTTP servers and preseeding. I tried allowing packer.exe through the Windows firewall, but I'm still seeing that the guest (Debian in my case) cannot connect to the HTTP server for preseeding.

ladar commented 6 years ago

@it-praktyk they are stored on a private git server. The ISOs are too large for GitHub, and since nobody has ever asked for them, I didn't think it worth the time to sanitize the repo and upload all of my files to GitHub. I'll attach the JSON file to this message, if that's all you're after.

generic-hyperv.json.txt

it-praktyk commented 6 years ago

@ladar, thank you for sharing the file.

I'm very interested in cooperating on the sanitization process - even in the private repo for now.

I think that creating some kind of reference platform for repeatable builds would be valuable for the community.

If you are interested in my proposal, please let me know.

m-emelchenkov commented 6 years ago

@ladar, I'd like to have the sources too. I need the Alpine Linux sources, as I want to build it for the Parallels VM. Why not publish them on GitHub, without the ISOs?

ladar commented 6 years ago

@m-emelchenkov which sources? The shell scripts? Those are relatively boilerplate. If you meant the Alpine Linux sources, those are available at https://alpinelinux.org/

I'm working on adding a Parallels version. I just need to get my hands on a sufficiently fast Mac before it can happen.

As for why the repo isn't on GitHub, I'd need to sanitize the history before I could upload it, as early versions of my scripts contain tokens/serial numbers (since moved to a .credentialsrc file), etc.

m-emelchenkov commented 6 years ago

@ladar Yes, I meant the shell scripts. I already created my own box for Alpine with the Parallels VM. It is not uploaded to Vagrant Cloud yet, because I need testers first. It's here (including the .box binary): https://bitbucket.org/m-emelchenkov/vagrant-alpine.

mahsoud commented 6 years ago

I've also noticed a strange pattern in my Ubuntu 16.04 image builds: Hyper-V on Windows Server 2012 works great, but on Windows Server 2016 it fails.

sandersaares commented 6 years ago

I recall that Ubuntu newer than 16.04 had some issues with the Hyper-V integration services crashing on first boot, leading to Packer not being able to get the IP address. I am stuck on creating images with 16.04 because of this.

shurick81 commented 6 years ago

What I found is that the behavior depends on whether Hyper-V has a pre-configured external virtual switch:

When Packer does not find one, it automatically creates a virtual switch of the "Internal network" type, and this leads to the SSH hang.

shurick81 commented 6 years ago

This was tested with Packer 1.2.0 on Windows 10 using this project: https://github.com/chef/bento/blob/master/centos/centos-7.4-x86_64.json

ladar commented 6 years ago

For those who asked... I finally got around to sanitizing the commit history for my templates, and removed all the large files (like RHEL ISOs), and various tokens/license keys. Should anyone be inclined, the repo is available at: https://github.com/lavabit/robox/ aka my robot box building system. Feel free to suggest improvements.

ladar commented 6 years ago

@shurick81 I noticed what you described as well. The docs make it clear you need an "external switch" or packer will create one, but the switch packer creates is configured as internal, and thus doesn't work properly. The external switch requirement is documented though: https://www.packer.io/docs/builders/hyperv-iso.html#switch_name

And thus I considered it a separate issue from what normally prevents packer from discovering the guest IP address, which is the requirement that the guest have the "hyperv" guest tools/kernel modules.

Also note, that currently packer v1.2.4 and above no longer work properly with the FreeBSD/OpenBSD hyperv implementations. See this issue: https://github.com/hashicorp/packer/issues/6315

The update also broke the Ubuntu 18.04/18.10 install process if you don't include the following in your auto-install script:

d-i pkgsel/upgrade select full-upgrade

This happens because the kernel image on the ISOs doesn't seem to work properly, but it is fixed by the "full-upgrade", which pulls down and installs newer kernel and cloud-tools packages that work properly.

flmmartins commented 6 years ago

Hello all, I'm having issues with a CentOS guest. The installation completes but Packer keeps "Waiting for SSH to become available".

I'm running Packer 1.3.1 with PowerShell, created the network, and also followed the tips suggested here.

PS C:\cygwin64\home\flmmartins\workspace\my-packer-templates> C:\Users\flmmartins\packer.exe build .\centos7-x86_64-hyperv.json
hyperv-iso output will be in this color.

Warnings for build 'hyperv-iso':

==> hyperv-iso: Creating build directory...
==> hyperv-iso: Retrieving ISO
    hyperv-iso: Found already downloaded, initial checksum matched, no download needed: http://mirror.serverbeheren.nl/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1804.iso
==> hyperv-iso: Starting HTTP server on port 8638
==> hyperv-iso: Creating switch 'HyperVNAT' if required...
==> hyperv-iso: switch 'HyperVNAT' already exists. Will not delete on cleanup...
==> hyperv-iso: Creating virtual machine...
==> hyperv-iso: Enabling Integration Service...
==> hyperv-iso: Setting boot drive to os dvd drive C:\cygwin64\home\flmmartins\workspace\my-packer-templates\packer_cache\78c7586f1d53df7ffd07552c5f332442003e4f937d4949c9c97cf96bc42dbcbf.iso ...
==> hyperv-iso: Mounting os dvd drive C:\cygwin64\home\flmmartins\workspace\my-packer-templates\packer_cache\78c7586f1d53df7ffd07552c5f332442003e4f937d4949c9c97cf96bc42dbcbf.iso ...
==> hyperv-iso: Skipping mounting Integration Services Setup Disk...
==> hyperv-iso: Mounting secondary DVD images...
==> hyperv-iso: Configuring vlan...
==> hyperv-iso: Starting the virtual machine...
==> hyperv-iso: Attempting to connect with vmconnect...
==> hyperv-iso: Waiting 5s for boot...
==> hyperv-iso: Host IP for the HyperV machine: 192.168.178.14
==> hyperv-iso: Typing the boot command...
==> hyperv-iso: Waiting for SSH to become available...

JSON:

{
    "variables": {
        "iso_url": "http://mirror.serverbeheren.nl/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1804.iso",
        "iso_check_type": "sha1",
        "iso_check": "13675c6f74880e7ff3481b91bdaf925ce81bda8f",
        "vmlinuz_file": "/images/pxeboot/vmlinuz",
        "initrd_file": "/images/pxeboot/initrd.img",
        "ks_file": "centos7-x86_64/ks.cfg",
        "hyperv_switch": "HyperVNAT"
    },
    "builders": [
        {
            "type": "hyperv-iso",
            "vm_name": "CentOS75",
            "iso_urls": "{{ user iso_url}}",
            "iso_checksum": "{{user iso_check}}",
            "iso_checksum_type": "{{user iso_check_type}}",
            "switch_name": "{{ user hyperv_switch}}",
            "communicator": "ssh",
            "cpu": 1,
            "disk_size": 20480,
            "generation": 1,
            "headless": false,
            "ram_size": 1024,
            "output_directory": "PCENTOS",
            "boot_command": [
                " text {{user vmlinuz_file}} initrd={{user initrd_file}} inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/{{user ks_file}}"
            ],
            "http_directory": "http",
            "boot_wait": "5s",
            "ssh_timeout": "20m",
            "ssh_username": "vagrant",
            "ssh_password": "vagrant",
            "ssh_port": 22,
            "shutdown_command": "sudo -S shutdown -P now"
        }
    ]
}

Kickstart:

# RHEL7 Base Box Kickstart for VirtualBox and Vagrant

install
cdrom
lang en_US.UTF-8
keyboard us
unsupported_hardware
text
skipx
network --bootproto dhcp
firewall --disabled
auth --useshadow --enablemd5
rootpw --iscrypted $1XAC8Ni/Z5cY
selinux --disabled
timezone Europe/Amsterdam
bootloader --location=mbr --driveorder=sda --append="crashkernel=auto rhgb quiet noipv6"
services --disabled iptables,ip6tables --enabled sshd

zerombr
clearpart --all --initlabel
autopart
firstboot --disabled
eula --agreed
services --enabled=NetworkManager,sshd
reboot --eject
user --name=vagrant --plaintext --password vagrant --groups=vagrant,wheel

%packages --ignoremissing --excludedocs
@Base
@Core
@Development Tools
@network-tools
openssh-clients
sudo
openssl-devel
readline-devel
zlib-devel
kernel-headers
kernel-devel
net-tools
vim
wget
curl
rsync
ansible
%end

%post

# Disable SELINUX per https://access.redhat.com/solutions/1237153
sed -i -e 's/\(^SELINUX=\)enforcing$/\1disabled/' /etc/selinux/config

yum update -y
echo "vagrant ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/vagrant
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers

yum clean all

# Enable Hyper-V daemons only if using Hyper-V virtualization
VIRT=`dmesg | grep "Hypervisor detected" | awk -F': ' '{print $2}'`
if [[ $VIRT == "Microsoft HyperV" ]]; then
    mount /dev/cdrom /media
    cp /media/media.repo /etc/yum.repos.d/media.repo
    printf "enabled=1\n" >> /etc/yum.repos.d/media.repo
    printf "baseurl=file:///media/\n" >> /etc/yum.repos.d/media.repo

    yum --assumeyes install eject hyperv-daemons
    systemctl enable hypervkvpd.service
    systemctl enable hypervvssd.service

    rm --force /etc/yum.repos.d/media.repo
    umount /media/
fi

%end

ladar commented 6 years ago

@flmmartins at some point over the last month, several of my Hyper-V builds started crashing during the post-installation reboot. Shutting them down manually and restarting them fixed the issue. To preserve automation, I had to add vga=792 to the kernel boot command, which also fixed it. I then used the vga.sh script to remove that kernel parameter after updating the box. Feel free to look at my templates:

https://github.com/lavabit/robox/
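Applied to the CentOS template posted above, adding the parameter is just a matter of extending the boot command (a sketch, reusing the same user variables):

"boot_command": [
    " text {{user vmlinuz_file}} initrd={{user initrd_file}} vga=792 inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/{{user ks_file}}"
],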

ladar commented 5 years ago

Just an update on this issue. I've managed to overcome some of the issues people were facing through the use of a legacy network adapter (see #7128), and then overcame the issues with the guest rebooting after install by changing the boot order (see #7147), leaving me with a VM booted and ready for connections. Unfortunately, the lack of Hyper-V daemon support chronicled above is blocking further progress. I tried working around that issue using pre-known IP addresses and the ssh_host config key, but ran into issue #4825. See my full report on that here.
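For reference, the workaround I tried looks roughly like this in the builder config (the address stands in for the pre-known guest IP):

"communicator": "ssh",
"ssh_host": "192.168.2.50",
"ssh_username": "root",
"ssh_password": "vagrant",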

SwampDragons commented 5 years ago

@ladar if we get #4825 fixed, will we eliminate the need for the Hyper-V daemons?