**GabrielMajeri** opened this issue 4 years ago
A possible clue for the current CI failures, though it doesn't explain the earlier failure you observed, where stripping out all test code didn't resolve the problem: https://github.com/rust-osdev/uefi-rs/pull/114#issuecomment-573542870 .
Spent some time today investigating this. I wasn't able to locate the root issue (which is probably deep inside qemu) but my findings should hopefully help anybody willing to look into this further:
Locally, on my Arch Linux system with KVM enabled, the test runner finishes the tests successfully but fails with error code `-11` during shutdown. (I guess this is what @GabrielMajeri experienced.) This is Python's way of telling us that the child process was killed by signal 11, which is `SIGSEGV`. This can also be confirmed by attaching gdb to qemu, which shows that the segfault occurs here:
```
[#0] 0x555555c90fcc → cmp QWORD PTR [rdx+0x4178], rdi
[#1] 0x555555c96331 → bdrv_unref_child()
[#2] 0x555555c96958 → bdrv_unref()
[#3] 0x555555c96958 → bdrv_unref()
[#4] 0x555555ce3a1a → blk_remove_bs()
[#5] 0x555555ce3a65 → blk_remove_all_bs()
[#6] 0x555555c92db3 → bdrv_close_all()
[#7] 0x5555558feb55 → main()
```
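As an aside, the `-11` exit code can be decoded generically: Python's `subprocess` reports death-by-signal as a negative return code. A minimal sketch (the helper name `describe_exit` is mine, not from the test runner):

```python
import signal

def describe_exit(returncode: int) -> str:
    """Interpret a subprocess return code the way Python reports it:
    a negative value -N means the child was killed by signal N."""
    if returncode < 0:
        return f"killed by {signal.Signals(-returncode).name}"
    return f"exited with status {returncode}"

print(describe_exit(-11))  # -11 -> killed by SIGSEGV
```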
This is qemu 4.2.0 from Arch's package repository. A from-source build of qemu 5.0.0-rc0 didn't show this issue so hopefully this resolves itself in a month.
Ubuntu 18.04, which is used by CI, ships an ancient qemu 2.11. I've set up a local VM and can observe the same behavior as in the CI runs: an endless boot loop which eventually fails with a spurious serial port error in an earlier test. The test causing this is `test_switch_bsp_and_who_am_i`
(specifically this call) but I'm unable to tell what the issue is. With a source build of qemu 4.2.0/5.0.0-rc0 it doesn't boot loop but instead fails with:
```
ERROR:/home/vagrant/qemu-5.0.0-rc0/cpus.c:1727:qemu_tcg_cpu_thread_fn: assertion failed: (cpu->halted)
```
The interesting thing here is that with KVM enabled (by adding the user to the `kvm` group), it works with both old and new qemu versions. And sure enough, removing the `accel=kvm` qemu flag on Arch (bare metal and VM) results in the same assertion failure.
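For anyone wanting to reproduce the two configurations, a sketch of the relevant invocations (OVMF firmware paths are illustrative and distro-dependent; the rest of the test runner's flags are omitted):

```shell
# TCG (pure emulation) -- this is the configuration that hits the
# (cpu->halted) assertion / boot loop:
qemu-system-x86_64 -machine q35,accel=tcg -smp 4 \
    -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=OVMF_VARS.fd

# KVM (requires access to /dev/kvm, e.g. membership in the kvm group) --
# works with both old and new qemu versions:
qemu-system-x86_64 -machine q35,accel=kvm -smp 4 \
    -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=OVMF_VARS.fd
```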
So in summary, the problem probably lies somewhere in qemu's TCG, or in its interaction with OVMF's code, which I don't know how to debug. In uefi-rs's CI runs, Ubuntu's old qemu version boot loops instead of panicking. (Unfortunately, GitHub Actions doesn't seem to support nested virtualization, so we can't just use KVM to save us here.)
It'd be interesting to check whether the same test written in C also exhibits the problem. Just to rule out any funny business with uefi-rs.
Following up on the previous comment, I've verified that this behavior is still the case with recent-ish QEMU and OVMF. With OVMF debug output enabled, there's an interesting difference between the KVM case and the TCG case:
```
# KVM:
MaxCpuCountInitialization: CmdData2=0x0
MaxCpuCountInitialization: BootCpuCount=4 mMaxCpuCount=4

# TCG:
MaxCpuCountInitialization: CmdData2=0x0
MaxCpuCountInitialization: QEMU v2.7 reset bug: BootCpuCount=4 Present=0
MaxCpuCountInitialization: BootCpuCount=0 mMaxCpuCount=1
```
That "QEMU v2.7" string is hardcoded here, I'm actually testing on QEMU 6.1.0. It seems OVMF sees conflicting data about the number of CPUs, so it hits that branch and resets to one CPU. I'm not sure if this is actually a bug in QEMU or OVMF, or maybe both. The next step here is probably to ask about it on the edk2-devel mailing list.
For future reference, I did report the issue to edk2-devel: https://edk2.groups.io/g/devel/message/87303
Hopefully the issue doesn't get lost there; unfortunately they don't allow new accounts on their bug tracker for the general public (I asked and they said to use the mailing list instead), so that's the best we can do for now.
Minor update: I was hoping that this issue would be fixed by https://github.com/tianocore/edk2/pull/3935. However, I have tested with `edk2-stable202305` and a local build of the latest qemu git (71934cf6bf878f82dac3fceb7d06d293ec3f6f8f), and still see the same assert:
```
ERROR:../accel/tcg/tcg-accel-ops-mttcg.c:110:mttcg_cpu_thread_fn: assertion failed: (cpu->halted)
```
There is a multiprocessor test which, if enabled, prevents CI from running all the tests correctly. We should investigate how to fix it.
@Bobo1239 has looked into the cause for this: https://github.com/rust-osdev/uefi-rs/issues/103#issuecomment-604728460