We implement support for Software Generated Interrupts (SGIs), which is ARM-speak for IPIs. For the most part we just had to modify a few MMIO registers in the GICv2 driver. It even passes the kernel's `ipi_test`! The only change to the generic kernel code was to `ipi_exec`, to optimize the processing of IPIs sent to the BSP when we are also the BSP. This was needed because we ran into a correctness bug where the kernel would hang because of these lines: https://github.com/twizzler-operating-system/twizzler/blob/774e1a0b013b00a01c73cd60526b15a8a23f709f/src/kernel/src/processor.rs#L461-L468
The issue is that interrupts are disabled when entering `ipi_exec` and are only re-enabled after `arch::send_ipi(target, GENERIC_IPI_VECTOR);` is executed. In the case where we send an SGI to ourselves (the BSP runs the `kernel_test` functions), we spin in a loop waiting for the interrupt state to change but never receive the interrupt. I have no idea how x86_64 gets around this, but it seems to not cause any issues there. The workaround is the optimization described above, and both x86_64 and aarch64 now pass `ipi_test`.