lersek closed this issue 8 years ago.
After briefly reviewing
gitk -- ArmPlatformPkg/ArmVirtualizationPkg/PciHostBridgeDxe
gitk -- ArmVirtPkg/PciHostBridgeDxe
I think the only real hurdle is PcdKludgeMapPciMmioAsCached (commit f9a8be423cdd). I have no idea how to keep that hack with the core host bridge driver. :(
Maybe we'll postpone this until after we have a virtio-gpu driver, which should make VGA and PcdKludgeMapPciMmioAsCached unnecessary.
To be honest, I am not as convinced as I was before that KVM is doing the right thing here. This has nothing to do with DMA being coherent or not, since the BARs don't describe memory. So I now think that it is a bug for KVM to back anything that it exposes via PCI MMIO BARs with cacheable memory without keeping it synchronized between the guest and the host.
Well, if it can be fixed in KVM, I won't object! :)
The priority of this issue is increasing. 64-bit MMIO is really useful for virtio-pci devices (the modern ones, i.e., with disable-legacy=on,disable-modern=off). I wonder if we could keep PcdKludgeMapPciMmioAsCached in our new PciHostBridgeLib somehow.
(BTW, I retested VGA on my Mustang just now, with PcdKludgeMapPciMmioAsCached=FALSE, and yes, it does break.)
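For context, putting a virtio-pci device into modern-only mode on the QEMU command line looks roughly like the following; the machine type and device model shown are only an example, not taken from this thread:

```shell
# Illustrative invocation; only the disable-legacy/disable-modern
# properties are the point here.
qemu-system-aarch64 -M virt \
  -device virtio-net-pci,disable-legacy=on,disable-modern=off
```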
It is expected that the kludge PCD is still needed, unfortunately.
I don't think updating the core PciHostBridgeDxe driver to accommodate cached mappings for PCI regions is the way to go. We really need to fix this in KVM instead.
I recently hazarded to call this issue an "ARMv8 architecture bug" internally, expecting to be beaten down with arguments. To my surprise, I saw someone confirm my words, stating that the ARMv8 designers must not have been thinking with their "virt hats on" (I'm paraphrasing).
If that's the case, can we really expect KVM to work around an architectural bug? Shouldn't we request new silicon / chipset releases from hw vendors instead, and/or an architectural update from ARM?
I mean, can a solution be conceived at all for KVM, with acceptable performance?
Generally speaking, you could call it an oversight that the stage 2 mappings can only supersede guest stage 1 mappings in a way that makes them more strict. I think I may have heard a justification at some point that sounded plausible, but I am not able to reproduce it.
However, that does not mean that KVM is not broken. The fact that KVM has a cached mapping of a memory region that it can reasonably expect to be mapped uncached by the guest already violates the (given) architecture, although you could argue that those mappings are not live at the same time. But having KVM use uncached mappings for such regions would address both issues.
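The combining rule under discussion can be sketched as follows. This is an illustrative model only: the type names and the ranking paraphrase the ARMv8 behavior that the effective memory type is the more restrictive of the stage 1 and stage 2 attributes; none of these identifiers come from the ARM ARM or from KVM.

```python
# Rank memory types from most to least restrictive (illustrative names).
STRICTNESS = {
    "Device": 0,      # most restrictive
    "Normal-NC": 1,   # Normal Non-cacheable
    "Normal-WT": 2,   # Normal Write-through
    "Normal-WB": 3,   # Normal Write-back (least restrictive)
}

def combine(stage1: str, stage2: str) -> str:
    """Effective guest memory type: the stricter of stage 1 and stage 2."""
    return min(stage1, stage2, key=STRICTNESS.get)

# The guest maps a BAR as Device (uncached), while KVM's stage 2 mapping of
# the backing RAM is Normal-WB. The effective type is Device, so guest
# accesses bypass the cache that host userspace accesses go through.
print(combine("Device", "Normal-WB"))  # Device
```

This is why stage 2 cannot "loosen" a guest's uncached mapping: the stricter attribute always wins, leaving the host-side cached view and the guest-side uncached view incoherent.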
Okay. Can you champion this on kvm-arm? I wouldn't know where to begin. :)
(I don't mean patches -- I mean convincing others to write patches. :))
Most recent kvm-arm discussion: http://thread.gmane.org/gmane.comp.emulators.kvm.arm.devel/7113
This issue is now tracked by https://tianocore.acgmultimedia.com/show_bug.cgi?id=65
The main customization in ArmVirtPkg/PciHostBridgeDxe is that it emulates IO port accesses with an MMIO range, whose base address is parsed from the DTB. The core PciHostBridgeDxe driver delegates the IO port access implementation to EFI_CPU_IO2_PROTOCOL. @ardbiesheuvel recently implemented ArmPkg/Drivers/ArmPciCpuIo2Dxe (in commit 3dece14502b9), which provides this protocol, backed by the same kind of translation as described above. The base address comes from gArmTokenSpaceGuid.PcdPciIoTranslation.
Therefore:
- an ArmVirtPkg/Library/FdtPciPcdProducerLib plugin library, to locate the IO translation offset in the DTB and store it in the above PCD;
- ArmPciCpuIo2Dxe in the ArmVirtQemu builds, plugging the above library into it;
- a PCI_ROOT_BRIDGE object, populating it with the FDT client code we are currently using in ArmVirtPkg/PciHostBridgeDxe.
Extra benefit: the core PciHostBridgeDxe driver supports 64-bit MMIO; we just have to parse those values from the DTB as well.
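The IO-to-MMIO translation described above amounts to adding the DTB-provided offset to the port number. A minimal sketch, with the caveat that the base constant below is a made-up example value (real code would read it from gArmTokenSpaceGuid.PcdPciIoTranslation, populated from the DTB):

```python
# Hypothetical translation base; illustrative only, not QEMU's actual value.
PCD_PCI_IO_TRANSLATION = 0x3EFF0000

def io_to_mmio(port: int) -> int:
    """Map a legacy IO port number to the MMIO address that backs it."""
    return PCD_PCI_IO_TRANSLATION + port

# An "IO port" read/write then becomes a plain MMIO access at this address.
print(hex(io_to_mmio(0x3F8)))  # 0x3eff03f8
```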