riscv / riscv-aia


How to distinguish APLIC interrupt in MSI mode? #68

Closed X547 closed 8 months ago

X547 commented 8 months ago

The TOPI and CLAIMI registers used to identify the APLIC interrupt to be serviced are available only in direct mode, not in MSI mode. How is software supposed to identify the interrupt in MSI mode? Scan the setip[0..31] bits? Allocate an MSI interrupt vector for each APLIC interrupt? If the latter, is software supposed to preallocate MSI interrupt vectors for all APLIC interrupts (the maximum possible interrupt count as specified in ACPI/FDT)? That looks wasteful: it spends a limited number of MSI interrupts on all APLIC interrupts, most of which may be unused. Also, what is supposed to happen if the APLIC has more interrupts than the IMSIC? Maybe MSI interrupt vectors should be allocated dynamically for the APLIC interrupts actually in use? If so, that can complicate software design and require special RISC-V-only handling that does not fit a generic architecture.
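The "scan setip" option mentioned above can be sketched roughly as follows. This is a hypothetical illustration, not code from any real driver: `aplic_setip` here is an ordinary array standing in for the APLIC's memory-mapped setip registers (32 sources per 32-bit word), and the scan simply returns the lowest-numbered pending source.

```c
#include <stdint.h>

/* Stand-in for the APLIC's memory-mapped setip registers (assumption:
 * in a real driver this would be a volatile pointer into MMIO space). */
#define SETIP_WORDS 32                   /* covers up to 1023 sources */
static uint32_t aplic_setip[SETIP_WORDS];

/* Return the lowest pending APLIC source number, or -1 if none is pending.
 * Uses the GCC/Clang __builtin_ctz intrinsic to find the lowest set bit. */
int aplic_scan_pending(void)
{
    for (int w = 0; w < SETIP_WORDS; w++)
        if (aplic_setip[w])
            return w * 32 + __builtin_ctz(aplic_setip[w]);
    return -1;
}
```

Even in this toy form, the drawback raised in the question is visible: every interrupt delivery costs a linear scan over all setip words, which is why the vector-per-source approaches below are usually preferred.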

jhauser-us commented 8 months ago

The mapping of APLIC interrupts to IMSIC interrupt identities at a RISC-V hart is very similar to how x86 computers map interrupts from their "I/O APIC" to a "local APIC" at an x86 core. The RISC-V APLIC and IMSIC perform roles similar to the x86 I/O APIC and local APIC, respectively. All of the options you listed are possible.

If you have more than one or two RISC-V harts (or you want to support that possibility in the future), I would say your best choice for utilization is to dynamically allocate IMSIC interrupt identities at each hart for the subset of APLIC interrupts that get assigned to that hart. I don't see a reason for these allocations to change more often than about once every 100 ms, and probably less often. But if a particular operating system written previously for x86 systems doesn't support such dynamic allocation, then I suppose you will face the same inefficiencies that the software does on x86 systems.
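The per-hart dynamic allocation suggested above can be sketched with a simple bitmap allocator, one bitmap per hart. This is a minimal illustration under assumed sizes (`NR_HARTS`, `NR_IDS` are made up), not a real driver; identity 0 is skipped because the AIA reserves IMSIC identity 0.

```c
#include <stdint.h>

#define NR_HARTS 4
#define NR_IDS   64                      /* assumed identities per IMSIC */

/* Bit i of id_bitmap[hart] set => identity i is in use on that hart. */
static uint64_t id_bitmap[NR_HARTS];

/* Allocate a free IMSIC identity on the given hart; -1 if exhausted. */
int imsic_id_alloc(unsigned hart)
{
    for (int id = 1; id < NR_IDS; id++)  /* identity 0 is reserved */
        if (!(id_bitmap[hart] & (1ULL << id))) {
            id_bitmap[hart] |= 1ULL << id;
            return id;
        }
    return -1;
}

void imsic_id_free(unsigned hart, int id)
{
    id_bitmap[hart] &= ~(1ULL << id);
}
```

Because allocations change rarely (on the order of the 100 ms figure mentioned above, or less often), a simple linear scan like this is typically fast enough; a real implementation would also need locking and would program the APLIC target registers after each allocation.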

I can see only two reasons for needing new RISC-V-only handling that doesn't fit an existing software architecture. First, when porting from x86-world, you might want to improve the allocation of interrupt identities on your RISC-V system compared to what was previously done for x86. In that case, one might ask why the shared software architecture couldn't be upgraded to improve interrupt handling for both x86 and RISC-V.

Probably a more likely reason to need special RISC-V handling would be because the original software comes from a different type of system, such as an Arm machine, that does not have the same kind of MSI model. If porting Arm-world software is the problem, all I can tell you is that the authors of the RISC-V Advanced Interrupt Architecture intentionally rejected Arm's model for handling MSIs, judging the hardware overly complex for the task.

Before the AIA was ratified, the people who port Linux to RISC-V had a couple of years to complain if the AIA's model didn't fit well with Linux, but I heard no such complaints. As far as I know, it hasn't been a significant obstacle for RISC-V Linux, perhaps due to Linux's origins on x86 machines.

Is there specific software you are trying to port to RISC-V that has an architecture you feel is incompatible with the RISC-V AIA?

avpatel commented 8 months ago

> TOPI, CLAIMI registers used to identify APLIC interrupt to be serviced are available only in direct mode, but not in MSI mode. How is it supposed to identify interrupt in MSI mode? […]

For APLIC MSI mode, the Linux approach is to allocate an IMSIC ID for an APLIC interrupt source only when a client driver calls request_irq(). In other words, the APLIC-interrupt-source-to-IMSIC-ID mapping is on-demand, not pre-allocated. Further, the IMSIC driver allocates IDs for each CPU separately, so with N CPUs and M IDs there are N×M IDs in total to allocate from. The number of APLIC interrupt sources can therefore exceed the number of IDs on one CPU, but it must be less than the total number of IDs across all CPUs.
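The on-demand N×M scheme described above can be sketched as follows. This is a hypothetical illustration with made-up sizes, not the actual Linux IMSIC driver: each CPU has its own identity space, a source is mapped lazily the first time it is requested, and the sketch picks the least-loaded CPU as the target. Identity 0 is reserved, which conveniently lets a zero-initialized entry mean "not yet mapped".

```c
#include <stdint.h>

#define NR_CPUS    2
#define NR_IDS     8     /* per-CPU identities; identity 0 is reserved */
#define NR_SOURCES 32    /* APLIC sources may exceed NR_IDS on one CPU */

struct target { int cpu, id; };          /* id == 0 => not yet mapped */
static struct target src_map[NR_SOURCES];
static int used_ids[NR_CPUS];            /* identities in use per CPU */
static uint8_t id_used[NR_CPUS][NR_IDS];

/* Lazily map an APLIC source at request time; 0 on success, -1 if all
 * NR_CPUS x (NR_IDS - 1) identities are exhausted. */
int aplic_msi_map(int src)
{
    if (src_map[src].id != 0)
        return 0;                        /* already mapped */

    int best = 0;                        /* pick least-loaded CPU */
    for (int cpu = 1; cpu < NR_CPUS; cpu++)
        if (used_ids[cpu] < used_ids[best])
            best = cpu;

    for (int id = 1; id < NR_IDS; id++)
        if (!id_used[best][id]) {
            id_used[best][id] = 1;
            used_ids[best]++;
            src_map[src] = (struct target){ best, id };
            return 0;
        }
    return -1;
}
```

A real driver would additionally write the chosen (CPU, identity) pair into the APLIC's target register for that source so its MSIs land on the right IMSIC.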

Please refer to the latest Linux AIA drivers, which will most likely be merged in Linux 6.10. (Link: https://lore.kernel.org/linux-arm-kernel/20240307140307.646078-1-apatel@ventanamicro.com/)

X547 commented 8 months ago

> (Link: https://lore.kernel.org/linux-arm-kernel/20240307140307.646078-1-apatel@ventanamicro.com/)

Is there a GitHub (or similar) branch for it? Sorry, I am not familiar at all with the Linux development workflow.

UPDATE: https://github.com/avpatel/linux/tree/riscv_aia_v17

X547 commented 8 months ago

> If you have more than one or two RISC-V harts (or you want to support that possibility in the future), I would say your best choice for utilization is to dynamically allocate IMSIC interrupt identities at each hart for the subset of APLIC interrupts that get assigned to that hart.

> Further, the IMSIC driver allocates ID for each CPU separately so for N CPUs and M IDs we have total NxM IDs to allocate from

Thanks for the answers. I will try to use dynamic per-hart IMSIC interrupt allocation in my system. It will need some redesign, as the system was originally designed for x86 and its interrupt controller driver interface has no interrupt-request hook, only enable/disable-interrupt hooks that must be callable with interrupts disabled.
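One way to fit lazy identity allocation into an interface that only has enable/disable hooks is to allocate on first enable. The sketch below is entirely hypothetical (all names are made up, and it is single-hart for brevity); the key constraint from the comment above is that the hook may run with interrupts disabled, so the allocator must not block, which a plain bitmap walk satisfies.

```c
#include <stdint.h>

static uint64_t ids_in_use;              /* bit i set => identity i taken;
                                            identity 0 is reserved */
static int src_to_id[64];                /* APLIC source -> IMSIC id, 0 = none */

/* Enable hook: allocate an IMSIC identity on first enable, then (in a real
 * driver) program the APLIC target registers. Returns -1 if no identity
 * is free. Never blocks, so it is safe with interrupts disabled. */
int aplic_enable_irq(int src)
{
    if (src_to_id[src] == 0) {           /* first enable: allocate lazily */
        for (int id = 1; id < 64; id++)
            if (!(ids_in_use & (1ULL << id))) {
                ids_in_use |= 1ULL << id;
                src_to_id[src] = id;
                break;
            }
        if (src_to_id[src] == 0)
            return -1;
    }
    /* programming of the APLIC sourcecfg/target registers omitted */
    return 0;
}
```

The trade-off versus a dedicated request hook is that enable can now fail for resource reasons, so callers must check its return value.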