Do the considerations above make sense, and if they do, should the barriers be added for the «potential fancy future superscalar Cortex-M implementation, with blackjack and hookers», and should all the perfectly real users suffer from the additional instruction(s)?
I imagine this won't be very popular if it's not actually needed in today's Cortex-M processors.
As far as I understand, an interrupt might fire after the previous interrupt state has been read but before the interrupt disabling takes effect. In that case, any change to the interrupt flag made by the interrupt would be discarded…
The convention is that interrupt handlers can use critical sections themselves, but must always restore the interrupt state. Given that, it's fine if an interrupt fires after reading the interrupt state, because the state won't have changed.
(AFAIK this requirement on user code is not documented anywhere? It probably should be.)
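For illustration, a minimal sketch of that convention, assuming the cortex-m 0.7 API (the handler name and the shared counter are made up for this example):

```rust
use core::cell::RefCell;
use cortex_m::interrupt::{self, Mutex};

// Hypothetical shared state, protected by a critical section.
static COUNTER: Mutex<RefCell<u32>> = Mutex::new(RefCell::new(0));

// Hypothetical interrupt handler: it may open its own critical section, and
// `interrupt::free` restores the previous interrupt state when it returns, so
// the interrupted code observes the same interrupt state as before.
fn on_some_interrupt() {
    interrupt::free(|cs| {
        *COUNTER.borrow(cs).borrow_mut() += 1;
    });
}
```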
Thanks for your response!
Hmm, maybe an appropriate place to document the requirement would be as a reason for `interrupt::disable`'s unsafety (since in interrupt handlers the function should be paired with `enable`), but it's actually marked as safe, so the change would be breaking...
By the beginning of an interrupt, interrupts are guaranteed to be globally enabled, so `interrupt::enable` actually doesn't need such a requirement.
Probably it is just worth mentioning in the description of the disabling function.
Thanks for opening the issue!
For the first issue, whether `cpsie` should always be followed by `isb`, I don't think we need to - as far as I can see, the critical section's guarantees are still upheld, and the only reason you might want `isb` after `cpsie` is to ensure a pending interrupt gets to run before any other code after the critical section. Users who need this behaviour can call `isb` directly, but I don't think there are memory-safety implications of not always including it. It also seems like it's not generally needed on any existing Cortex-M implementation. That said, I don't think there would be a significant impact to adding `isb` after `cpsie` in `interrupt::enable()`, so if there was a good reason to do so I think we could justify it.
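For example, a caller that wants that behaviour could do something like the following (a sketch assuming the cortex-m crate's `asm::isb`; the function name is made up):

```rust
use cortex_m::{asm, interrupt};

fn do_work_then_observe_pending_irq() {
    interrupt::free(|_| {
        // ... work that may leave an interrupt pending ...
    });
    // Optional: flush the pipeline so that a pending interrupt is taken before
    // the code below runs. Not required for the critical section's soundness.
    asm::isb();
    // ... code that wants to see the effects of that interrupt ...
}
```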
For the second issue, I think the troublesome scenario is only when:
So the problem is that the interrupt handler disabled interrupts but the critical section then re-enabled them. Is that an issue? The purpose of this check (in `interrupt::free` and in the critical-section impl) is to ensure that nesting critical sections works OK (i.e. the innermost CS doesn't re-enable interrupts while still inside an outer CS), and I don't think that's broken in this scenario.
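For context, a simplified sketch of the save/restore pattern being referred to (not the exact source):

```rust
use cortex_m::{interrupt, register::primask};

// Simplified sketch of the pattern used by `interrupt::free` and the
// critical-section impl: only the outermost critical section re-enables
// interrupts, so nested critical sections compose correctly.
fn with_interrupts_disabled<R>(f: impl FnOnce() -> R) -> R {
    let was_active = primask::read().is_active(); // MRS PRIMASK
    interrupt::disable();                         // CPSID i
    let r = f();
    if was_active {
        unsafe { interrupt::enable() };           // CPSIE i
    }
    r
}
```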
Do we actually need interrupt handlers to always leave the interrupt state enabled? It's not like they ever need to remember to leave it disabled, since of course they couldn't have run if it was disabled when they started, and similarly we couldn't have been inside a critical section when the interrupt fired.
Thanks for the clarification! Yeah, probably both cases (firing pending interrupts immediately, and not discarding changes to the interrupt enable flag) are the user's own concern, while the core invariants are already maintained.
As a strongly-ordered instruction (a System Control Space interaction), interrupt enabling is guaranteed not to be reordered with other strongly-ordered accesses, but that doesn't apply to normal memory. Application Note 321 on Memory Barrier Instructions says the following for a general ARMv7/6-M device and for a general SCS instruction:

For CPSI{D/E} it says that:
The interesting part here is that the CPSIE executed at the end of the critical section doesn't automatically enforce the barriers:
https://github.com/rust-embedded/cortex-m/blob/1746a63ca16b68514ea23dcca1543aed00165452/src/interrupt.rs#L79C7-L79C7
(a compiler fence is present there, but not the barrier instructions)
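Roughly, the linked function amounts to the following, and the variant discussed here would append the barriers after `cpsie i` (a simplified sketch, not the exact source):

```rust
use core::arch::asm;
use core::sync::atomic::{compiler_fence, Ordering};

// Rough shape of the current `interrupt::enable`: a compiler fence plus
// `cpsie i`, with no hardware barrier afterwards.
pub unsafe fn enable() {
    compiler_fence(Ordering::SeqCst);
    unsafe { asm!("cpsie i", options(nomem, nostack, preserves_flags)) };
}

// What a variant with the AN321-style barriers appended might look like.
pub unsafe fn enable_with_barriers() {
    compiler_fence(Ordering::SeqCst);
    unsafe { asm!("cpsie i", "dsb", "isb", options(nostack, preserves_flags)) };
}
```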
Moreover, ARM recommends inserting barriers to correctly retrieve the data produced by a potentially pending interrupt (Example 4, «Interrupt handling code»):
Probably, an abstract Rust caller would expect to see a consistent state of the memory writes made by the pended interrupt, because the caller's code might rely on it.
Do the considerations above make sense, and if they do, should the barriers be added for the «potential fancy future superscalar Cortex-M implementation, with blackjack and hookers», and should all the perfectly real users suffer from the additional instruction(s)?
Additionally, an obvious question arises when looking at all the implementations of critical sections: the instruction just before disabling interrupts is MRS (which reads a special register into a general-purpose one). As far as I understand, an interrupt might fire after the previous interrupt state has been read but before the interrupt disabling takes effect. In that case, any change to the interrupt flag made by the interrupt would be discarded… Are there any special architectural requirements that guarantee the absence of such behavior? Or is this behavior just considered acceptable «by definition»?
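A sketch of the instruction sequence in question (assumed shape, matching typical implementations), with the window marked:

```rust
use core::arch::asm;

// Assumed critical-section entry sequence; the comment marks the window
// the question is about.
unsafe fn enter_critical_section() -> u32 {
    let prev_primask: u32;
    unsafe {
        asm!(
            "mrs {prev}, PRIMASK", // read the current interrupt-enable state
            // An interrupt may still be taken between these two instructions;
            // any change it makes to PRIMASK is lost once `prev` is restored
            // at the end of the critical section.
            "cpsid i",             // disable interrupts
            prev = out(reg) prev_primask,
            options(nomem, nostack, preserves_flags),
        );
    }
    prev_primask
}
```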