Open · StanPlatinum opened this issue 2 years ago
Hi Weijie,
Thanks for your interest in SEV-SNP and attestation!
The LAUNCH_MEASURE command only applies to legacy SEV and SEV-ES. With SEV-SNP, the Guest Owner supplies the expected launch measurement during launch (as part of the Identity Block discussed in issue #8) in SNP_LAUNCH_FINISH. The SEV firmware will calculate a measurement of the guest memory state and compare it with the expected value. If the values do not match, then the SNP_LAUNCH_FINISH command will fail and the hypervisor will not be able to start the guest. (The VMRUN instruction will fail.)
That brings us to your question: how does the Guest Owner calculate the expected measurement value? The simplest way is to launch the guest image on the Guest Owner's infrastructure first and generate an attestation report that contains the measurement by the firmware. The Identity Block is optional, so the Guest Owner can launch the VM without an Identity Block to determine the expected measurement, then construct an Identity Block for use when launching the VM on untrusted cloud infrastructure.
Of course, if the Guest Owner doesn't have access to an AMD EPYC machine, then they will have to calculate the expected measurement themselves. I've been thinking about releasing a tool in this repository to do that calculation.
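For anyone attempting that calculation: the SNP launch digest is an iterated SHA-384 over PAGE_INFO structures, one per page added via SNP_LAUNCH_UPDATE. Below is a minimal sketch of the update step; the field layout is from my reading of the SNP firmware ABI spec (and how tools like sev-snp-measure later implemented it), so treat the offsets and reserved fields as assumptions to verify against the spec:

```python
import hashlib
import struct

ZERO_DIGEST = bytes(48)  # the launch digest starts at all-zeros

def update_launch_digest(ld: bytes, page: bytes, page_type: int, gpa: int) -> bytes:
    """One SNP_LAUNCH_UPDATE step: hash a PAGE_INFO struct that chains
    the previous launch digest with this page's contents.
    Layout assumed from the SNP firmware ABI spec -- verify before use."""
    if page_type == 1:  # NORMAL page: CONTENTS = SHA-384 of the page data
        contents = hashlib.sha384(page).digest()
    else:               # e.g. ZERO/SECRETS/CPUID pages: CONTENTS = zeros
        contents = bytes(48)
    # PAGE_INFO: DIGEST_CUR | CONTENTS | LENGTH | PAGE_TYPE | IMI_PAGE | rsvd | GPA
    page_info = ld + contents
    page_info += struct.pack("<HBBIQ", 0x70, page_type, 0, 0, gpa)
    assert len(page_info) == 0x70
    return hashlib.sha384(page_info).digest()

# Chain two pages the way the firmware would during launch.
ld = ZERO_DIGEST
ld = update_launch_digest(ld, b"\x00" * 4096, 1, 0xFFFFC000)
```

The key property is the chaining: each update folds the previous digest in as DIGEST_CUR, so the final value commits to the full ordered sequence of launched pages.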
Sincerely, Jesse
Thanks, Jesse!
Can we use sev-tool to do the calculation, or something related? Or, AFAIK, is the sev-tool only for SEV and SEV-ES, not for SNP?
Best, Weijie
For cloud platforms, it will be up to the providers to provide something like the trusted computing group's reference integrity manifest files or something of that ilk. Reference measurements are still not quite standardized for the virtual platform space, but I'm hoping more folks will try to incorporate what the Trusted Computing Group has already specified for physical platform manufacturers.
For something like OVMF, you're going to get the binary itself loaded into the top of the 4GiB range of memory, but there are a few sections, designated by GUID, that are specially marked for the virtual machine monitor to measure and populate differently. Specifically you'll have "verified" data (unmeasured data that will be verified at run time), secret data (populated by the AMD-SP), and CPUID data (populated by the VMM according to the platform in a potentially measurement-destabilizing way, so instead it's checked for partial correctness when it comes to security-critical features).
I of course can't commit to any timelines, I don't speak for my employer, etc, but I'm hopeful that virtual platform providers will follow suit and provide signed reference measurements of the data they introduce into users' trusted computing base.
I would add that "guest image" will not include your guest operating system, since that is dynamically loaded by the UEFI. You'll need to have some kind of TPM to measure those dynamic components along the way. Cloud providers don't really have confidential TPMs yet, so you'll get AMD attestation of the firmware volume, after which point you'll need to trust that it does the right thing.
There's still plenty to do with this technology to get platform providers out of the TCB.
@deeglaze Thanks for the explanation!
If I understand correctly, we can get the measurement of OVMF + guest kernel (things needed in /boot, e.g., initrd, vmlinuz, etc.) from the attestation report, right? The other parts (such as data in the guest's image) can be measured afterward, probably by a standard tool in the future?
@StanPlatinum you can only get the measurement of OVMF from the attestation report. SEV-SNP doesn't know what the virtual firmware will do; you trust it from then on. To get more trust in the firmware's actions, you can enable UEFI secure boot (trusting your VMM's handling of the db/dbx/kek/pk EFI variables for acceptable EFI module signers) and measured boot with TPM measurements. A VM's TPM is likely going to be an IBM virtual TPM that the VMM provides through unencrypted MMIO.
When you get to the Linux guest context, you can attest to the firmware with sev-guest, and attest to the rest of the boot with /dev/tpm0, assuming you have the vTPM driver.
For a TPM to maintain its security properties, it must be in memory entirely inaccessible to the guest. There are a couple of ways one can handle this.
So since IBM's vTPM software expects to be running in an environment with access to a kernel through libc, we're going to need vTPM implementations to adapt to the more restricted execution environment of pre-boot land. So: trust. Trust will gradually go to zero, but it won't be zero when SEV-SNP guests are first available to play with from an upstream kernel.
Edit: standard tools, yes, maybe. I'm writing code that I hope to get approved to open source. The issue with this new world is that there isn't yet an agreed-upon standard way for platform providers to give users "expected" measurements of the firmware they provide. I'd like to propose that platform providers just use TCG RIMs. Every platform provider will have their own software ID "tag" to describe them as an entity, and that'll go along with a UEFI (unmeasured) TPM event log entry of kind TCG_SP800_155_PlatformId_Event2 to point event log auditors to where they can find the associated reference integrity measurements, which should be signed by the platform provider. I say this isn't yet standard since platform certificates and RIMs are meant for physical platforms that ship from an OEM to a consumer for them to check at their convenience.
Getting a certificate is pretty much "it shipped on a pre-imaged hard drive" or "it's available online at some URI". If you're a VM platform provider, you don't want to have to reimage guests' boot or EFI system partitions to include your virtual firmware certificate, especially if you make regular updates. You don't want it to only be online, since then it's an availability issue to be sure you have access. So I think we're going to see some creativity on highly available delivery methods in cloud spaces until or unless they all come together and try to standardize on a delivery method.
Yeah, the firmware will be a flavor of OVMF, but if you don't emulate SMM due to its bad security history, you'll have some proprietary interface to the VMM in the UEFI that'll be annoying to get open-sourced. Then you have the issue of the source getting compiled by the same compiler that builds it for production, and/or you build "in the open" with a known toolchain whose execution context you also protect (http://slsa.dev).
Lots of conversations here, trying to do right by users, and also trying to not solve the right problems at the wrong time (it needs to work reliably before we start talking about trusting all of the exact bits that cloud users already have been trusting for years).
@deeglaze Many thanks!
It takes some time to read your explanation and I am still doing it.
"you can only get the measurement of OVMF from the attestation report ... and attest to the rest of the boot with /dev/tpm0"
Yet I found a patch set (https://www.mail-archive.com/qemu-devel@nongnu.org/msg851132.html) that enables verifying the hash of kernel/initrd/cmdline. I still do not know how grub can be verified, though.
If I understand correctly, you want to solve a dilemma where the guest owner knows the hash of OVMF but doesn't know if it should be trusted, since the OVMF provided by the cloud doesn't expose the implementation details. And TPM/vTPM could be a good solution to that, right?
Thanks, Weijie
@StanPlatinum I wasn't aware of that patchset. I work on a different hypervisor than Qemu, and haven't done much with the Linux.efi direct boot option from ovmf. This will require the VMM to write the bzImage hash to a measured page at launch to be checked before running the guest. That could work provided you're using direct boot to Linux. And you're wondering where you could be sure that was measured right from the attested measurement. That'll be through the data pointed to by the GUIDs in the patch that introduced sev_add_kernel_loader_hashes. If you're trying to reconstruct the measurement just from the ovmf binary, linux binary, and kernel cmdline, you'll need to do a little bit of OVMF binary parsing to reconstruct some of the pages that are populated after the whole OVMF binary is loaded at the top of the first 4GiB of memory. Qemu does its own parsing to find where to populate those values.
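To illustrate that parsing step, here is a hedged Python sketch of walking OVMF's GUIDed structure table near the end of the image. The footer GUID value and the 32-byte end offset are from my reading of the OVMF reset vector code (as consumed by tools like sev-snp-measure); verify both against the OVMF source before relying on this:

```python
import struct
import uuid

# Footer GUID of OVMF's GUIDed structure table (assumed value -- check
# against OvmfPkg's ResetVector sources).
OVMF_TABLE_FOOTER_GUID = uuid.UUID("96b582de-1fb2-45f7-baea-a366c55a082d")

def parse_ovmf_guid_table(data: bytes) -> dict:
    """Walk the GUIDed entry table that ends 32 bytes from the end of an
    OVMF binary and return {guid_str: entry_bytes}. Each entry ends with
    a little-endian u16 total length followed by its 16-byte GUID."""
    size = len(data)
    if size < 50 or data[size - 48:size - 32] != OVMF_TABLE_FOOTER_GUID.bytes_le:
        return {}
    table_len = struct.unpack("<H", data[size - 50:size - 48])[0]
    # Drop the 18-byte footer entry itself; keep the inner entries.
    table = data[size - 32 - table_len:size - 50]
    entries = {}
    while len(table) >= 18:
        guid = uuid.UUID(bytes_le=table[-16:])
        entry_len = struct.unpack("<H", table[-18:-16])[0]
        if entry_len < 18 or entry_len > len(table):
            break  # malformed entry; stop rather than loop forever
        entries[str(guid)] = table[-entry_len:-18]
        table = table[:-entry_len]
    return entries
```

With the table in hand, you can find the GUID-marked regions (secrets page, CPUID page, kernel-hashes area) whose guest-physical addresses feed into reconstructing the launch measurement.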
You'll also need your initramfs to be overlayed with dm-verity or similar filesystem integrity option. Usually dm-verity gets its root hash from the mounter through fstab, but initrd's hash can be given on the kernel cmdline that OVMF establishes. Since it's part of the cmdline, it's also included on that measured page of hashes.
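As a sketch of what lands on that measured page of hashes: the QEMU patchset computes SHA-256 digests of the cmdline (including its trailing NUL), the initrd, and the kernel, and places them in a GUID-keyed table for OVMF to check. The GUID constants are omitted here (see QEMU's sev_add_kernel_loader_hashes), so this is illustration only:

```python
import hashlib

def kernel_loader_hashes(kernel: bytes, initrd: bytes, cmdline: str) -> dict:
    """Digests that the kernel-hashes feature places in a measured guest
    page, for OVMF to verify each component before running it.
    (Sketch only; the real table is GUID-keyed -- see QEMU's sev.c.)"""
    return {
        # QEMU hashes the cmdline including its NUL terminator.
        "cmdline": hashlib.sha256(cmdline.encode() + b"\x00").digest(),
        "initrd": hashlib.sha256(initrd).digest(),
        "kernel": hashlib.sha256(kernel).digest(),
    }
```

Because this page is itself part of the launch measurement, the attestation report transitively covers kernel, initrd, and cmdline even though none of them are loaded at launch time.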
The next tricky thing is that if you have any kernel modules that get loaded at boot, then you'll need those to be stored on the same filesystem with integrity AND use the LoadPin driver to make extra sure they're not sneakily loading module dependencies from another filesystem that might not be integrity checked.
As for verifying grub this way, that wasn't what I was thinking. I'm more coming from a "verify OVMF with AMD, verify the rest through a trusted dynamic measurement engine" angle. There are a couple of dynamic measurement schemes; the most well-developed is TCG's TPM 2.0 specification paired with TCG's firmware integrity measurement profile (FIM). The FIM specifies that the firmware should register particular measurements with the TPM at different stages of UEFI boot, including UEFI secure boot variables (db, dbx, kek, pk) that carry acceptable and rejected certificates for EFI module signatures (you can build all your modules into the main image to not care much about this), but later the FIM requires that basically every step up to the guest OS boot has to be measured into a particular register on the TPM. That includes OVMF measuring Red Hat's secure boot shim, the shim measuring GRUB, and GRUB measuring the OS and kernel cmdline. All those software components detect a TPM's presence and send measurements to it over MMIO.
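That measured-boot chain reduces to repeated PCR extends: each component hashes the next one and folds the digest into a TPM register, and a verifier later replays the event log to reproduce the final PCR value. A minimal sketch (SHA-256 bank; the component names and the PCR index are illustrative, not what real firmware uses):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM2 PCR extend: new = H(old || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# Illustrative chain: OVMF measures the shim, the shim measures GRUB,
# GRUB measures the kernel and its cmdline.
pcr4 = bytes(32)  # PCRs start at all-zeros after reset
for component in (b"shim.efi", b"grubx64.efi", b"vmlinuz", b"root=/dev/sda2"):
    pcr4 = pcr_extend(pcr4, hashlib.sha256(component).digest())
```

The extend operation is order-sensitive and one-way, which is what lets a verifier trust the final register value as a commitment to the whole boot sequence recorded in the event log.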
So that pathway has a suspension of disbelief and a leap of trust into the TPM, which in a cloud will be virtualized (a software-based virtual device, not a physical TPM) for the convenience of not needing to care specifically which machine you're deployed on. A virtual device won't be measured. You could isolate ibmswtpm2 (BSD licensed) in VMPL0 (use some form of MdePkg and CryptoPkg instead of the libc and OpenSSL it currently needs?) and run UEFI and Linux in VMPL3, but that's a whole can of worms to get Linux to talk to VMPL0 through some new protocol. Brijesh's patches to Linux right now only boot if Linux is run in VMPL0. So more development is needed there.
> @StanPlatinum I wasn't aware of that patchset. I work on a different hypervisor than Qemu, and haven't done much with the Linux.efi direct boot option from ovmf. This will require the VMM to write the bzImage hash to a measured page at launch to be checked before running the guest. That could work provided you're using direct boot to Linux. And you're wondering where you could be sure that was measured right from the attested measurement. That'll be through the data pointed to by the GUIDs in the patch that introduced sev_add_kernel_loader_hashes. If you're trying to reconstruct the measurement just from the ovmf binary, linux binary, and kernel cmdline, you'll need to do a little bit of OVMF binary parsing to reconstruct some of the pages that are populated after the whole OVMF binary is loaded at the top of the first 4GiB of memory. Qemu does its own parsing to find where to populate those values.
Those patches were originally written for SEV-ES, where the boot flow is less flexible than SNP, although I think they should work for SNP as well.
> You'll also need your initramfs to be overlayed with dm-verity or similar filesystem integrity option. Usually dm-verity gets its root hash from the mounter through fstab, but initrd's hash can be given on the kernel cmdline that OVMF establishes. Since it's part of the cmdline, it's also included on that measured page of hashes.
Note that kernel-hashes includes the measurement of the initrd as well, so you don't need to worry too much about adding extra integrity on top of the initrd.
> The next tricky thing is that if you have any kernel modules that get loaded at boot, then you'll need those to be stored on the same filesystem with integrity AND use the LoadPin driver to make extra sure they're not sneakily loading module dependencies from another filesystem that might not be integrity checked.
> As for verifying grub this way, that wasn't what I was thinking. I'm more coming from a "verify OVMF with AMD, verify the rest through a trusted dynamic measurement engine" angle. There are a couple of dynamic measurement schemes; the most well-developed is TCG's TPM 2.0 specification paired with TCG's firmware integrity measurement profile (FIM). The FIM specifies that the firmware should register particular measurements with the TPM at different stages of UEFI boot, including UEFI secure boot variables (db, dbx, kek, pk) that carry acceptable and rejected certificates for EFI module signatures (you can build all your modules into the main image to not care much about this), but later the FIM requires that basically every step up to the guest OS boot has to be measured into a particular register on the TPM. That includes OVMF measuring Red Hat's secure boot shim, the shim measuring GRUB, and GRUB measuring the OS and kernel cmdline. All those software components detect a TPM's presence and send measurements to it over MMIO.
People are also trying a couple of other tricks to do GRUB measurements in different ways:
- IBM has a patchset that builds GRUB into OVMF: https://listman.redhat.com/archives/edk2-devel-archive/2020-November/msg00969.html (again, originally for SEV-ES).
- Microsoft has GRUB patches that do TPM decryption in GRUB itself: https://lists.gnu.org/archive/html/grub-devel/2022-02/msg00006.html
> So that pathway has a suspension of disbelief and a leap of trust into the TPM, which in a cloud will be virtualized (a software-based virtual device, not a physical TPM) for the convenience of not needing to care specifically which machine you're deployed on. A virtual device won't be measured. You could isolate ibmswtpm2 (BSD licensed) in VMPL0 (use some form of MdePkg and CryptoPkg instead of the libc and OpenSSL it currently needs?) and run UEFI and Linux in VMPL3, but that's a whole can of worms to get Linux to talk to VMPL0 through some new protocol. Brijesh's patches to Linux right now only boot if Linux is run in VMPL0. So more development is needed there.
Yes; however, your VMPL0 can in theory emulate a lot of stuff without the Linux running at VMPL3 knowing. I think it can trap vTPM accesses, for example, so that you don't have to rewrite every vTPM access.
I'm not sure how VMPL0 would emulate MMIO when the guest OS owns the page tables.
Thank you all for the discussion, @deeglaze @dagrh .
Some updates here (IBM and QEMU now have support to measure OVMF + kernel + initrd + cmdline on SEV-SNP):
https://github.com/IBM/sev-snp-measure , and https://github.com/AMDESE/AMDSEV/issues/93 .
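For the verification side: the guest owner compares the expected digest (e.g. the value printed by sev-snp-measure) against the MEASUREMENT field of the raw attestation report. By my reading of the SNP firmware ABI spec that field is 48 bytes at offset 0x90, but double-check the offset against the spec version you target. A minimal sketch:

```python
# Field location per my reading of the SNP firmware ABI spec -- verify
# against the ATTESTATION_REPORT table in the spec version you use.
REPORT_MEASUREMENT_OFFSET = 0x90
REPORT_MEASUREMENT_LEN = 48  # SHA-384 digest

def report_measurement(report: bytes) -> bytes:
    """Pull the launch measurement out of a raw SNP attestation report."""
    start = REPORT_MEASUREMENT_OFFSET
    return report[start:start + REPORT_MEASUREMENT_LEN]

def verify_measurement(report: bytes, expected_hex: str) -> bool:
    """Compare against a precomputed value, e.g. from sev-snp-measure."""
    return report_measurement(report) == bytes.fromhex(expected_hex)
```

This only checks the measurement field; a real verifier must also validate the report's signature chain (VCEK up to the AMD root keys) before trusting any field in it.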
But if the guest wants to check the measurement of the guest image and the user application at boot time, there will be a lot of future work to do.
The guest image can be transferred from the guest owner to the secure guest VM after boot via an established secure channel. So I guess the trust (in the guest image and the user applications above it) can be built from then on...
And since SEV-SNP can do runtime attestation (using the SNP_GUEST_REQUEST command, which allows 512 bits of arbitrary data to be included in the report), I wonder if we can use this field to carry the guest image's hash measurement, to be verified by the guest owner.
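One way that could work, sketched below: the guest hashes the delivered image with SHA-512 (64 bytes, exactly the 512-bit REPORT_DATA size), requests a report over that value, and the guest owner recomputes the hash and compares. The function names here are illustrative; extracting report_data from the report and validating the report's signature are elided:

```python
import hashlib

def expected_report_data(guest_image: bytes) -> bytes:
    """SHA-512 of the image: 64 bytes, exactly the REPORT_DATA size."""
    return hashlib.sha512(guest_image).digest()

def image_bound_to_report(report_data: bytes, guest_image: bytes) -> bool:
    """Guest-owner check: did the guest request its report over the hash
    of the image we sent it over the secure channel?"""
    return report_data == expected_report_data(guest_image)
```

Because the firmware signs REPORT_DATA along with the rest of the report, a matching hash binds the delivered image to this particular attested guest.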
FYI, the hash of kernel + kernel command line + initrd is eventually included in the measurement. The SEV-Tool provides a reference to verify it.
Dear Jesse,
Thanks for maintaining this repo! I am still curious about the attestation flow of SEV SNP...
It's pretty clear that the attestation report will be obtained via /dev/sev-guest, after your kind explanation in the previous issue #8 and issue #9. However, it still seems unclear why the measurement of guest image + OVMF was not used in the demo. In the SEV Secure Nested Paging Firmware ABI Specification, you can find descriptions like "The measurement is keyed with the TIK so the guest owner can use the measurement to verify that the guest was properly launched without tampering." (In Sec. 6.5 LAUNCH_MEASURE.) So I wonder: how does the Guest Owner get this measurement, and how does the Guest Owner verify it? Should this procedure be included in this repo in the near future?
Thanks!