crect (pronounced correct) is a C++ library for generating a scheduler (at compile time) for Cortex-M series MCUs, which guarantees deadlock-free and data-race-free execution. It utilizes the Nested Vector Interrupt Controller (NVIC) in Cortex-M processors to implement a Stack Resource Policy (SRP) based scheduler. Thanks to the compile-time creation of the scheduler, the run-time resource requirements are minimal:
* Initialization: the `async` queue (the linked list, SysTick and time implementation) is about 400 bytes.
* Compile time: the scheduler, priority ceilings and interrupt masks are generated entirely by the compiler.
* Runtime:
  * `lock`.
  * `claim` has zero overhead, it decomposes into a `lock`.
  * `unique_lock` (will change).
  * `unique_unlock` (will change).
  * `pend` / `clear`.
  * `async`.

In this scheduler, heavy use of C++14 metaprogramming allows, among other things, priority ceilings and interrupt masks to be calculated automatically at compile time, while resource locks are handled through RAII and resource access is handled via a monitor pattern. This minimizes user error without the need for an extra external compile step, as is currently being investigated in the RTFM-core language.
YouTube video from embo++ 2018 describing the inner workings of crect: https://www.youtube.com/watch?v=SBij9W9GfBw
In the `examples` folder a few example projects are set up for the NUCLEO-F411RE board. For example:

* `blinky`, showing the basic crect primitives.
* `unique`, showing `unique_lock` with a data pumping peripheral (for example DMA or a communication interface).

The folder also contains examples of `crect_system_config.hpp` and `crect_user_config.hpp`, providing references until documentation is available.
If there are any questions on the usage, throw me a message.
Tested on Ubuntu Linux using GCC 6.3.1 (arm-none-eabi) and a NUCLEO-F411RE for hardware testing. It currently does not work on Cortex-M0 devices.
Boost Software License - Version 1.0
List of contributors in alphabetical order:
crect is based on the following academic papers:
* Job: a function connected to an ISR, given a priority and a list of resources it may claim.
* Resource: a link to an object which needs protection from data races, accessed through lock or claim.
* Lock: an SRP-based lock on a resource which keeps the system from running any other job that can claim the same resource.
Below follows a small description of how to use crect.
A resource definition is as follows:
using Rled = crect::make_resource<
    CRECT_OBJECT_LINK(led_resource) // Link to some object to be protected
>;
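For reference, `CRECT_OBJECT_LINK` above links to some object to be protected. A minimal sketch of what `led_resource` could look like follows; the `led_driver` type and its members are assumptions for illustration (only `enable()` is used later in this README):

```cpp
// Hypothetical LED driver; crect only needs an object to link the resource to.
struct led_driver
{
  void enable()  { /* drive the LED pin high */ }
  void disable() { /* drive the LED pin low */ }
};

// The object referenced by CRECT_OBJECT_LINK(led_resource) in the definition above.
led_driver led_resource;
```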
Currently two system resources exist:

* The `async_queue` is protected via `crect::Rasync`.
* `crect::clock::system::now()` is protected via `crect::Rsystem_clock` - see claim below for example usage.

Any job using these resources needs to have the corresponding resource in its resource claim in `crect_user_config.hpp`.
A job definition consists of a few parts, as shown below. The job definitions are placed (directly or via include) in `crect_user_config.hpp`.
void job1(void);
using J1 = crect::job<
    1,                        // Priority (0 = low)
    crect::make_isr<job1, 1>, // ISR connection and location
    R1, crect::Rasync         // List of possible resource claims
>;
Each job needs to be added to the `user_job_list<Jobs...>` in `crect_user_config.hpp`.
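As another sketch, a job that uses both system resources from the previous section would list them in its resource claims. The job name, function, and ISR index below are assumptions for illustration:

```cpp
void job2(void); // hypothetical job function

using J2 = crect::job<
    2,                                   // Priority (0 = low)
    crect::make_isr<job2, 2>,            // ISR connection and location (index assumed)
    crect::Rasync, crect::Rsystem_clock  // Claims needed to use async and the system clock
>;
```

Both `J1` and `J2` would then be added to the `user_job_list` in `crect_user_config.hpp`.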
The available ISR definitions are split into peripheral ISRs (I >= 0) and system ISRs (I < 0).
// Peripheral ISR definition (I >= 0)
template <crect::details::isr_function_pointer P, int I>
using make_isr = crect::details::isr<P, crect::details::index<I>>;
// System ISR definition (I < 0)
template <int I>
using make_system_isr = crect::details::isr<nullptr, crect::details::index<I>>;
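For illustration, the two variants could be used as follows. The peripheral connection mirrors the job example above, while the system index -1 (SysTick in the CMSIS exception numbering) is only an assumed example:

```cpp
// Peripheral ISR at index I >= 0, connected to the user job function job1.
using J1_isr      = crect::make_isr<job1, 1>;

// System ISR at index I < 0; no user function is attached (nullptr), the handler
// is provided elsewhere. Index -1 corresponds to SysTick and is used here purely
// as an example.
using systick_isr = crect::make_system_isr<-1>;
```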
A lock keeps the system from running a job which will lock the same resource. The analysis to determine which job can take which resource is done at compile time, which makes the lock very cheap to use, as indicated at the start of this document. Locks should, however, be avoided by the user; use claim wherever possible.
// Lock the resource, remember locks are very cheap -- sprinkle them everywhere!
crect::lock< R1 > lock; // Locks are made in the constructor of the lock
// ...
// Unlock is automatic in the destructor of lock
There is no `unlock`; this is by design.
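As a small sketch of the RAII behaviour, the unlock happens when the lock object goes out of scope. The body of `job1` below is an assumption for illustration; `job1` and `R1` are the job and resource defined earlier:

```cpp
void job1(void)
{
  // ... work that does not need the resource ...

  {
    crect::lock<R1> lock;  // constructor raises the system ceiling (SRP lock)
    // Critical section: no job that can claim R1 will preempt here.
  }                        // destructor releases the lock automatically

  // ... the resource is available to other jobs again ...
}
```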
Even with lock, it is easy to leak a resource; to minimize this chance, `claim` uses a monitor pattern to guard the resource. Hence the resource is only available within the lambda of `claim`:
// Access the LED resource through the claim following a monitor pattern (just as cheap as a lock)
crect::claim<Rled>([](auto &led){
  led.enable();
});
Claiming a resource where a value is returned works just as well (for example when getting the current time, as the system time is a shared resource):
// Resource is handled within the claim, no risk of a data-race.
auto current_time = crect::claim<crect::Rsystem_clock>([](auto &now){
  return now(); // now is a function reference
});
For a full example please see `./examples/blinky`.
A unique resource and its corresponding lock exist to support locking over job boundaries. This allows for producer/consumer patterns, as the unique job can work by reading a queue of actions. For example a DMA, where jobs push to a "DMA transfer queue" and the job which holds the unique DMA resource reads this queue and performs the desired transactions.
A `unique_lock` effectively decomposes into disabling the interrupt vector of the job, while the `unique_unlock` re-enables the interrupt vector.
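A rough sketch of the producer side of this pattern is shown below, using only the `claim` and `pend` primitives described in this README. The resource `Rdma_queue`, the job `Jdma`, and the queue/transfer types are assumptions for illustration; the real code lives in ./examples/unique:

```cpp
#include <cstddef>
#include <cstdint>

// Any job can queue up a DMA transfer request and then pend the job that owns
// the unique DMA resource, which drains the queue when it gets to run.
void request_transfer(const std::uint8_t *buffer, std::size_t length)
{
  crect::claim<Rdma_queue>([&](auto &queue) {
    queue.push(transfer{buffer, length});  // hypothetical transfer/queue types
  });

  crect::pend<Jdma>();
}
```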
For a full example please see `./examples/unique`.
There is no `unlock`; this is by design.
`pend` directly sets a job for execution; it will run as soon as its priority is the highest, while `clear` removes the job from pending execution.
// Compile time constant pend/clear
crect::pend<JobToPend>();
crect::clear<JobToPend>();
// Runtime dependent pend/clear
crect::pend(JobToPend_ISR_ID);
crect::clear(JobToPend_ISR_ID);
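As a sketch of how `pend` is typically used, a short high-priority job can defer heavier work to a lower-priority job. The job names below are assumptions for illustration:

```cpp
// High-priority job (e.g. connected to a peripheral ISR): do the minimum amount
// of work and defer the rest to Jprocess, a hypothetical lower-priority job
// which will run once no higher-priority job is active.
void sampler_job(void)
{
  // ... read the peripheral and store the sample ...

  crect::pend<Jprocess>();
}
```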
`async` defers a job's execution to some specific time.
// Using chrono to relate the system to time, the current max time is somewhere around 1500 years, depending on MCU :)
using namespace std::chrono_literals;
// Async in some specific duration using chrono
crect::async<JobToPend>(100ms);
// Async at some specific point in time
auto time_to_execute = some_duration + crect::claim<crect::Rsystem_clock>([](auto &now){
  return now();
});
crect::async<JobToPend>(time_to_execute);
// Async can be used as pend for runtime dependent execution
crect::async(100ms, JobToPend_ISR_ID);
crect::async(time_to_execute, JobToPend_ISR_ID);
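As a final sketch, a job can reschedule itself with `async` to get periodic behaviour. The job `Jblink` and its connection to `blink_job` are assumptions for illustration:

```cpp
#include <chrono>

using namespace std::chrono_literals;

// Toggle the LED and ask the scheduler to run this job again in 100 ms.
void blink_job(void)
{
  crect::claim<Rled>([](auto &led) {
    led.enable();  // accessor used earlier in this README
  });

  crect::async<Jblink>(100ms);  // Jblink is assumed to be the job bound to blink_job
}
```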