This "issue" is intended to spawn a discussion on hardware abstractions, safe vs. unsafe, and possibly other methods to allow for improving reliability and robustness of embedded applications.
Some initial reflections on the Rust embedded ecosystem, safe vs. unsafe etc.
cortex-m
Writing of the BASEPRI register is marked unsafe. The register access is atomic, so why unsafe? Well, one explanation is that it can be used (as in RTFM) to implement resource protection, and a write could violate the memory safety guaranteed by that protection mechanism.
Writing of the PSP register is marked unsafe. The register access is atomic, so why unsafe? Well, one explanation is that a write can indirectly cause erroneous stack accesses. (A short sketch of both calls follows below.)
The list goes on...
Take-away: we currently use unsafe to protect against indirect effects on code execution OUTSIDE of the Rust memory model. Rust has no notion of resource protection by itself.
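As a concrete illustration (not taken from the cortex-m docs, just a minimal sketch assuming a thumbv7 target and the cortex-m register API discussed above), this is roughly how the two writes surface in application code:

```rust
// Minimal sketch: the values written are arbitrary, chosen only to show the
// call shape. Both writes are single atomic register accesses, yet both are
// `unsafe`, because they can break invariants outside the Rust memory model.
use cortex_m::register::{basepri, psp};

fn reconfigure() {
    unsafe {
        // Lowering BASEPRI can defeat a priority-based critical section,
        // e.g. the resource protection RTFM builds on top of it.
        basepri::write(0x20);

        // Re-pointing the process stack pointer makes every subsequent stack
        // access through PSP refer to a different memory region, even though
        // the write itself is perfectly well-defined.
        psp::write(0x2000_8000);
    }
}
```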
svd2rust/volatile-register
A register write is atomic, yet it is still marked unsafe, right? Why? Notice that here, access through generated fields is marked safe. Is there any difference regarding the Rust aliasing rules?
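To make the contrast concrete, here is a sketch of the access pattern svd2rust generates. The pac crate, the GPIOA peripheral, the ODR register, and the odr0 field are hypothetical names used purely for illustration:

```rust
// Sketch of svd2rust-style access, assuming a hypothetical PAC crate `pac`.
fn set_pin(gpioa: &pac::GPIOA) {
    // Writing the raw register value goes through `bits`, which is `unsafe`,
    // even though the store itself is a single volatile write:
    gpioa.odr.write(|w| unsafe { w.bits(0x0000_0001) });

    // Writing through a generated field is safe, although it ends up as a
    // volatile write to the very same location:
    gpioa.odr.write(|w| w.odr0().set_bit());
}
```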
As seen above, the use of unsafe has no direct bearing on the Rust aliasing rules. So why are we using unsafe then? Well, to prevent unintended use. (One easy way to accomplish that is through an unsafe barrier, but that is perhaps not the best/only way.)
Alternatives:
1) Propose a primitive extension to Rust, like root { ... } to indicate that root access is needed, and have user code marked #![forbid(root_code)], possibly extending https://internals.rust-lang.org/t/disabling-unsafe-by-default/7988.
2) Use the Rust type system. (Similar to type state, with ownership of root access; see the sketch after this list.)
3) Use scoping, to hide dangerous APIs, allowing only "root" crates to access "root" functions. (Not sure how this can be done without 1, but perhaps ...)
4) Deploy post-processing, analysing the generated code and rejecting illegal/dangerous accesses.
5) Something else...
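As a rough illustration of alternative 2, here is a minimal, hypothetical sketch of a zero-sized "root" token. The names (Root, conjure, write_basepri) are made up and the actual register write is elided; the point is only that privileged operations become reachable through ownership of a value rather than through unsafe blocks scattered across application code:

```rust
/// Hypothetical zero-sized capability token: only code that owns a `Root`
/// can reach the privileged register API.
pub struct Root {
    _private: (),
}

impl Root {
    /// Created exactly once, by the trusted runtime/init crate. This is the
    /// single place where `unsafe` remains.
    pub unsafe fn conjure() -> Root {
        Root { _private: () }
    }

    /// A privileged operation, gated by ownership of the token.
    pub fn write_basepri(&mut self, value: u8) {
        // Placeholder for the real register write (volatile store, or the
        // cortex-m register API); elided to keep the sketch self-contained.
        let _ = value;
    }
}

/// Application code receives the token from the runtime and threads it to
/// wherever privileged access is needed; code without the token simply
/// cannot perform the operation.
pub fn init(mut root: Root) {
    root.write_basepri(0x20);
}
```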
With the goal of giving guarantees of safe and reliable operation (even without additional HW support and/or costly SYSCALL APIs), we need to figure out how the Rust embedded ecosystem should be designed. I believe we have a unique opportunity to offer a correct-by-design approach, reaching far beyond what embedded Rust currently offers.
Some Challenges:
Mode changes, wake-up from deep sleep, hard fault recovery etc.
Liveness and robustness, (user injected NMIs, System Reset etc.)
...
Addressing these ambitious goals, while still leveraging the ongoing efforts, HAL development, etc., adds another dimension to the problem. We don't want to end up in a MISRA-like situation, where only a crippled subset of Rust (and Rust developments) can be used for developing robust, reliable, safe, and sound applications.
One approach may be to take a step back and put embedded Rust in scope of the embedded system as a whole. This shifts the paradigm from seeing the obligations of embedded Rust as merely comprising the Rust code per se, to the view that the Rust application co-exists with other processes and communicates with these through hardware I/O, special-purpose processor registers, etc. What are the semantics of embedded Rust in that context, and how can we model such external dependencies? (E.g., a register read that clears a bit in another memory location, like a peripheral. Can we model bit-banding, etc.?)
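As one concrete instance of such an external dependency, here is a sketch of Cortex-M3/M4 peripheral bit-banding: a plain word write to a computed alias address atomically sets or clears a single bit in a completely different memory location, a side effect that is invisible in the Rust types. The addresses follow the standard bit-band mapping; the helper names are made up:

```rust
use core::ptr;

// Standard Cortex-M3/M4 peripheral bit-band mapping.
pub const PERIPH_BASE: u32 = 0x4000_0000; // start of the bit-band region
pub const PERIPH_BB_BASE: u32 = 0x4200_0000; // start of the alias region

/// Compute the bit-band alias address for bit `bit` of the word at `addr`.
pub fn bitband_alias(addr: u32, bit: u32) -> u32 {
    PERIPH_BB_BASE + (addr - PERIPH_BASE) * 32 + bit * 4
}

/// Set one bit of a peripheral register through its bit-band alias.
pub unsafe fn bitband_set(addr: u32, bit: u32) {
    // The hardware turns this single word store into an atomic
    // read-modify-write of one bit at `addr`, a semantics that lives
    // entirely outside what the Rust code itself expresses.
    ptr::write_volatile(bitband_alias(addr, bit) as *mut u32, 1);
}
```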
This "issue" is intended to spawn a discussion on hardware abstractions, safe vs. unsafe, and possibly other methods to allow for improving reliability and robustness of embedded applications.
Some initial reflections on the Rust embedded ecosystem, safe vs. unsafe etc.
cortex-m
Writing of BASE-PRI register is marked unsafe. The register access is atomic. So why unsafe? Well, one explanation is that it can be used (as in RTFM) to implement resource protection, and a write could violate memory safety guaranteed by the protection mechanism.Writing of PSP register is marked unsafe. The register access is atomic. So why unsafe? Well, one explanation is that a write can indirectly cause erroneous stack accesses.
The list goes on...
Take-away, we currently use
unsafe
to protect from indirect effects to code execution OUTSIDE of the Rust memory model. Rust has no notion of resource protection by itself.svd2rust
/volatile-register
Register write is atomic, still it is markedunsafe
, right?. Why? Notice here, access through generated fields is marked safe. Is there any difference regarding the Rust aliasing rules?As seen above, the use of
unsafe
has no direct bearing on the Rust aliasing rules. So why are we using unsafe then? Well to prevent from unintended use. (And one easy way to accomplish that is through anunsafe
barrier, but perhaps not the best/only way.)Alternatives: 1) Propose a primitive extension to Rust, like
root { ... }
to indicate that root access is needed, and have user code maker#![forbid(root_code)]
, possibly extending https://internals.rust-lang.org/t/disabling-unsafe-by-default/7988.2) Use the Rust type system. (Similar to type state, with ownership of root access.)
3) Use scoping, to hide dangerous APIs, allowing only "root" crates to access "root" functions. (Not sure how this can be done without 1, but perhaps ...)
3) Deploy post processing, analysing generated code, and rejecting illegal/dangerous accesses.
4) Something else....
With the goal to give guarantees to safe and reliable operation (even without additional HW support, and/or costly SYSCALL APIs), we need figure out how the Rust embedded ecosystem should be designed. I believe we have a unique opportunity to offer a correct by design approach, reaching far beyond what embedded Rust currently offers.
Some Challenges:
Addressing these ambitious goals, while still leveraging on the ongoing efforts, HAL development etc., adds another dimension to the problem. We don't want to end up in a MISRA like situation, where only a crippled subset of Rust (and Rust developments) can be used for developing robust/reliable and safe and sound applications.
Ane approach may be to take a step back, and put embedded Rust in scope of the embedded system as a whole. This shifts the paradigm of seeing the obligations of embedded Rust merely to comprise the Rust code per-se, into the view that the Rust application co-exists with other processes and communicates with these through hardware I/O, special purpose processor registers etc. What are the semantics of embedded Rust in that context, and how can we model such external dependencies? (E.g., a register read, that clears a bit on another memory location, like a peripheral. Can we model bit-banding, etc.)
What other aspects should we consider?