coreos / afterburn

A one-shot cloud provider agent
https://coreos.github.io/afterburn/
Apache License 2.0

qemu: implement boot-time checkin (via ovirt-guest-agent protocol) #458

Open cgwalters opened 4 years ago

cgwalters commented 4 years ago

Currently if FCOS fails in very early boot (e.g. in the bootloader or kernel, before switching to the initramfs), it's...hard to detect consistently. For example, we have a test for Secure Boot, but it turns out that if the kernel fails to verify then...coreos-assembler today hangs for a really long time.

We can scrape the serial console in most cases, but we configure the Live ISO not to log to a serial console... so we'd instead end up having to do something like openQA and do image recognition on the graphics console :cry:

Now we debated this somewhat in https://github.com/coreos/ignition-dracut/pull/170 and I argued strongly that the most important thing was to cover the "failure in initramfs" case, and we could support the "stream journal in general" by injecting Ignition.

In retrospect...I think I was wrong. It would be extremely useful for us to stream the journal starting from the initramfs at least by default on qemu.

In particular, what we really want is some sort of message from the VM that it has entered the initramfs, but before we start processing Ignition. If we're doing things like reprovisioning the rootfs, it becomes difficult to define a precise "timeout bound". But I think we can e.g. reliably time out after something quite low (like 10 seconds) if we haven't seen the "entered initramfs" message.
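As a rough sketch of the host-side timeout this implies (everything here is illustrative: the socket path and the check-in message are assumptions, not an existing coreos-assembler interface):

```rust
use std::io::{BufRead, BufReader};
use std::os::unix::net::UnixStream;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // Assume qemu was started with a virtio-serial port backed by this
    // host-side unix socket (path and message are hypothetical).
    let stream = UnixStream::connect("/tmp/checkin.sock")?;
    stream.set_read_timeout(Some(Duration::from_secs(10)))?;

    let mut line = String::new();
    match BufReader::new(stream).read_line(&mut line) {
        Ok(_) if line.trim() == "entered-initramfs" => {
            println!("guest reached the initramfs");
            Ok(())
        }
        // Timeout, EOF, or an unexpected message: treat as early-boot failure.
        _ => {
            eprintln!("no check-in within 10s; assuming early-boot failure");
            std::process::exit(1)
        }
    }
}
```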

So here's my proposal:

lucab commented 4 years ago

[Tackling only one bit out of the larger context]

In particular, what we really want is some sort of message from the VM that it has entered the initramfs, but before we start processing Ignition.

This doesn't sound very different from the Azure boot check-in, which we perform in a non-homogeneous way today (RHCOS does it in the initramfs, FCOS does it after boot-complete.target).

We could think about consolidating it in a "first-boot initramfs reached" check-in across various platforms in Afterburn, to be run before Ignition. However:

cgwalters commented 4 years ago

(forwarding some real time discussion here)

I think you're absolutely right - we should think of this as "add a first-boot checkin to our qemu model", since that model already exists on some clouds.

And then further discussion turned up https://wiki.qemu.org/Features/GuestAgent - so we could implement the minimum there in Afterburn, and have coreos-assembler time out if the guest doesn't reply to a sync pretty early on.
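For reference, the minimal exchange there is the guest-sync handshake: the host writes a request on the org.qemu.guest_agent.0 virtio-serial port and the guest echoes the id back, roughly:

```
host  -> guest: {"execute": "guest-sync", "arguments": {"id": 1234}}
guest -> host:  {"return": 1234}
```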

this places a lower bound on the initramfs service ordering, which is somewhere after the network is up

We can make the afterburn checkin not require networking on qemu, but I'm not very concerned about this TBH because qemu networking is quite fast.

cgwalters commented 4 years ago

OK, moved this issue to afterburn.

I took a quick look at implementing this. I'd like to propose that we have afterburn run itself as a systemd generator early in startup, rather than shipping static units; this would give us a clean way to order our guest check-in unit After= the /dev/virtio-ports/agent device (and we could also tweak the unit to not be After=network-online.target for the qemu case, etc.)
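A sketch of what the generator might emit on qemu (the unit name and ExecStart helper are hypothetical; the escaped device unit corresponds to /dev/virtio-ports/agent):

```ini
# /run/systemd/generator/afterburn-checkin.service (hypothetical output)
[Unit]
Description=Afterburn first-boot check-in (qemu)
DefaultDependencies=no
# Wait for the virtio-serial port instead of the network.
Requires=dev-virtio\x2dports-agent.device
After=dev-virtio\x2dports-agent.device
# Note: deliberately no After=network-online.target on qemu.

[Service]
Type=oneshot
RemainAfterExit=yes
# Hypothetical helper; Afterburn's real CLI may differ.
ExecStart=/usr/bin/afterburn-checkin
```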

lucab commented 4 years ago

And then further discussion turned up https://wiki.qemu.org/Features/GuestAgent - so we could implement the minimum there in Afterburn, and have coreos-assembler time out if the guest doesn't reply to a sync pretty early on.

That doesn't look like a great fit. In particular, the protocol is unidirectional (host->guest), which means we would have to sit in the initramfs waiting to be polled (instead of actively signaling a guest->host event, like we do on Azure and Packet), and we can't really leave the initramfs until we have been polled. How long we should sit there is somewhat arbitrary, I guess. In the vast majority of cases there won't be anything polling, so we would end up delaying the first boot of most instances on qemu.

Perhaps a more suitable protocol to target would be the ovirt-guest-agent one, which seems to support sending guest->host events. Conveniently, it already defines a system-startup event.
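A minimal guest-side sketch of sending that event, assuming the ovirt-guest-agent conventions (newline-delimited JSON with a __name__ field over a virtio-serial channel); the device path and exact event name here are assumptions taken from the protocol docs, not something tested against oVirt:

```rust
use std::fs::OpenOptions;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // The oVirt guest agent channel is a virtio-serial port; the name
    // below follows ovirt-guest-agent defaults and may vary by config.
    let mut port = OpenOptions::new()
        .write(true)
        .open("/dev/virtio-ports/ovirt-guest-agent.0")?;
    // Guest -> host event announcing startup, framed as one JSON line.
    port.write_all(b"{\"__name__\": \"session-startup\"}\n")?;
    port.flush()
}
```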

cgwalters commented 4 years ago

You're right; once I started working on the code I noticed the inversion of control.

Discussing the oVirt protocol, though, gets into the much bigger topic of whether we want to implement more of the protocol for real as an agent on that platform, and how the platform would behave with what would likely be a subset of the functionality.

I guess as a start we could just respond to the channel on ignition.platform.id=qemu and sidestep that though.

lucab commented 4 years ago

Additional note: on Azure the firstboot check-in also ejects the Virtual CD (paging @darkmuggle for confirmation), so I fear we cannot really check in before Ignition fetching is completed.

cgwalters commented 4 years ago

Additional note: on Azure the firstboot check-in also ejects the Virtual CD (paging @darkmuggle for confirmation), so I fear we cannot really check in before Ignition fetching is completed.

Right, this comment proposes making our systemd units platform-dependent.

(Also, a while ago we discussed changing Ignition on Azure to save the config to /boot or so.)

bgilbert commented 4 years ago

Right, this comment proposes making our systemd units platform-dependent.

We don't need to do it as a generator, though, right? We can just ship some static units with ConditionKernelCommandLine=ignition.platform.id=X.
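A sketch of what such a static unit might look like, one per platform (the unit name and ExecStart helper are hypothetical; ordering before Ignition's fetch stage per the check-in discussion above):

```ini
# afterburn-checkin-qemu.service (hypothetical; shipped statically)
[Unit]
Description=Afterburn first-boot check-in (qemu)
DefaultDependencies=no
# Only runs when this platform is selected on the kernel command line.
ConditionKernelCommandLine=ignition.platform.id=qemu
Before=ignition-fetch.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/afterburn-checkin
```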

cgwalters commented 4 years ago

We don't need to do it as a generator, though, right? We can just ship some static units with ConditionKernelCommandLine=ignition.platform.id=X.

Yeah; the duplication there might get ugly, but OTOH I guess we could generate the units statically, i.e. as part of the build process.

AdamWill commented 4 years ago

@cgwalters pointed me to this ticket, so for the record, as he knows, in the last week I've been working on running openQA tests on Fedora CoreOS. It's not very difficult, and we have it working already; the only question mark for a 'production' deployment is when and on what to trigger the tests.

openQA definitely does do a fairly good job of letting you know if the artifact under test boots successfully in a VM.

For now the work lives on a branch of the openQA test repo and is only deployed on my pet openQA instance, which is not up all the time (it heats up my office...:>). We can do a production deployment quite easily once the triggering questions are sorted out.

cgwalters commented 4 years ago

openQA definitely does do a fairly good job of letting you know if the artifact under test boots successfully in a VM.

To be clear kola already covers this pretty well in general - we just have a few specific gaps, such as the case when a Secure Boot signature validation fails.

AdamWill commented 3 years ago

For the record once more: we did deploy the openQA CoreOS testing to production. The scheduling works by checking once an hour whether any of the named streams has been updated, and scheduling tests for the new build if so. Results show up at https://openqa.fedoraproject.org/group_overview/1?limit_builds=100 . We can write/run more tests if desired; requests can be filed at https://pagure.io/fedora-qa/os-autoinst-distri-fedora/issues .