@ranma
it is true. Better not to use this lib in multi-threaded contexts, or at least do not modify task/scheduler state outside of the loop() calls.
I've tried to adapt this lib to use atomic locks (via std::mutex) but then gave up the idea: it requires too much code refactoring to keep compatibility for all platforms. It would be easier to use RTOS threads in such cases and leave this lib for single-context scenarios.
@ranma
You are right. This is not a true thread-safe approach. TS is a general-purpose single-threaded cooperative library that is supposed to be running in a single thread. When used with preemptive frameworks like FreeRTOS, great care needs to be taken to avoid calling TS methods from different threads. iMutex is a quick and dirty way to avoid at least some trivial conflict situations when task control methods (like enable()) are called from a thread other than the one running the actual scheduler.
I always run TS under a separate and dedicated FreeRTOS Task, making sure that all inter-thread commands are passed via a Queue to the TS thread. With this approach you dedicate FreeRTOS Tasks to activities that are truly independent (a sketch of the pattern follows below).
Works great for me so far.
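For illustration, a minimal sketch of this pattern on ESP32/Arduino. The cmd_t payload, the command codes, and tsThread are hypothetical names made up here, not part of TaskScheduler:

#include <TaskScheduler.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/queue.h"

void blinkCb();   // forward declaration of the task callback

Scheduler ts;
Task tBlink(500, TASK_FOREVER, &blinkCb, &ts, false);

// hypothetical command payload other threads send to the TS thread
enum cmd_code_t : uint8_t { CMD_ENABLE, CMD_DISABLE };
struct cmd_t { cmd_code_t cmd; Task* task; };
QueueHandle_t cmdQueue;

void blinkCb() { /* toggle an LED, etc. */ }

// dedicated FreeRTOS task: the ONLY thread that calls TS methods
void tsThread(void*) {
  cmd_t c;
  for (;;) {
    // drain pending cross-thread commands first...
    while ( xQueueReceive(cmdQueue, &c, 0) == pdTRUE ) {
      if ( c.cmd == CMD_ENABLE )  c.task->enable();
      if ( c.cmd == CMD_DISABLE ) c.task->disable();
    }
    ts.execute();    // ...then run one cooperative scheduling pass
    vTaskDelay(1);   // yield so idle/lower-priority tasks get CPU time
  }
}

// any other thread posts a command instead of touching the Task directly
void enableFromAnotherThread(Task* t) {
  cmd_t c = { CMD_ENABLE, t };
  xQueueSend(cmdQueue, &c, portMAX_DELAY);
}

void setup() {
  cmdQueue = xQueueCreate(8, sizeof(cmd_t));
  xTaskCreate(tsThread, "TS", 4096, nullptr, 1, nullptr);
}

void loop() { /* nothing: everything runs in tsThread */ }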
@ranma @vortigont
I just pushed v4.0.0, with a major change in the approach to thread safety, into a separate branch - still testing.
Basically the idea is:
Developers should call Scheduler and Task methods directly only from the thread where the TaskScheduler execute() method runs.
Calls from other threads should be done via the Scheduler::requestAction(...) methods:
https://github.com/arkhipenko/TaskScheduler/blob/f404e9702885af43cb534d50fe42f199fde5b6de/src/TaskScheduler.h#L1242
The target platform should implement an action queue and two methods:
bool _task_enqueue_request(_task_request_t* req);  // puts a _task_request_t object on the action queue
bool _task_dequeue_request(_task_request_t* req);  // retrieves a _task_request_t object from the action queue
This way all internal variables are modified in a single thread - thus eliminating the need for mutex-based locking.
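To illustrate the idea (this is a sketch, not the library's code: the struct fields, action code, and helper names below are made up; the real _task_request_t and requestAction() are in TaskScheduler.h at the link above):

#include <cstdint>

// Hypothetical stand-ins for illustration only.
struct req_sketch { uint8_t action; void* target; };

bool enqueue_sketch(req_sketch* r);   // platform-provided; any thread or ISR
bool dequeue_sketch(req_sketch* r);   // platform-provided; TS thread only

// What a cross-thread caller does instead of calling Task methods directly:
void request_enable_sketch(void* task) {
  req_sketch r = { 1 /* "enable" */, task };
  enqueue_sketch(&r);                 // hand the request off to the TS thread
}

// What the scheduler thread does during each execute() pass:
void drain_sketch() {
  req_sketch r;
  while ( dequeue_sketch(&r) ) {
    // apply r.action to r.target here; since this runs only on the
    // TS thread, all scheduler state is mutated from a single thread
  }
}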
Snippets for ESP32:

#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

#define TS_QUEUE_LEN 16

// statically allocated action queue holding pending _task_request_t objects
QueueHandle_t tsQueue;
uint8_t tsQueueData[TS_QUEUE_LEN * sizeof(_task_request_t)];
StaticQueue_t tsQueueBuffer;
...
tsQueue = xQueueCreateStatic(
    TS_QUEUE_LEN,
    sizeof(_task_request_t),
    tsQueueData,
    &tsQueueBuffer );
...
// puts a request on the action queue; safe from both ISR and task context
bool _task_enqueue_request(_task_request_t* req) {
    if ( xPortInIsrContext() ) {
        BaseType_t xHigherPriorityTaskWokenByPost = pdFALSE;
        BaseType_t rc = xQueueSendFromISR(tsQueue, req, &xHigherPriorityTaskWokenByPost);
        // yield right away if posting unblocked a higher priority task
        if ( xHigherPriorityTaskWokenByPost ) portYIELD_FROM_ISR();
        return rc;
    }
    else {
        // task context: zero timeout, never blocks on a full queue
        return xQueueSend(tsQueue, req, 0);
    }
}

// retrieves a request from the action queue; called by the scheduler thread
bool _task_dequeue_request(_task_request_t* req) {
    if ( xPortInIsrContext() ) {
        BaseType_t xHigherPriorityTaskWokenByPost = pdFALSE;
        BaseType_t rc = xQueueReceiveFromISR(tsQueue, req, &xHigherPriorityTaskWokenByPost);
        if ( xHigherPriorityTaskWokenByPost ) portYIELD_FROM_ISR();
        return rc;
    }
    else {
        // task context: zero timeout, returns false if the queue is empty
        return xQueueReceive(tsQueue, req, 0);
    }
}
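One design note on the snippets above: in the non-ISR branches, both xQueueSend() and xQueueReceive() use a zero tick timeout, so they never block; callers should check the returned bool and decide whether to retry or drop a request when the queue is full (or empty, on the dequeue side). The FromISR variants never block by definition.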
Looks interesting!
Wouldn't it be easier to implement derived Scheduler classes for specific platforms with embedded queues?
These __attribute__((weak)) overrides and #ifdefs make it so tough to craft a proper feature set.
@vortigont
That's an idea. Quite honestly, Scheduler was never intended to be derived from, but why not?
The only two platforms I really see the need to support would be FreeRTOS (that should cover ESP32 and STM32) and Zephyr (for nRF chips) - anything else?
I guess I can make the requestAction() methods virtual for further overrides.
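For the sake of discussion, a subclass with an embedded queue could look roughly like the sketch below. The enqueueRequest()/dequeueRequest() member hooks are hypothetical and assume the base class exposed such virtual overridables; today the extension points are the weak free functions shown above:

#include <TaskScheduler.h>
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

// Hypothetical sketch: a FreeRTOS-specific Scheduler owning its own queue.
class FreeRtosScheduler : public Scheduler {
public:
  FreeRtosScheduler() { _q = xQueueCreate(16, sizeof(_task_request_t)); }

protected:
  bool enqueueRequest(_task_request_t* req) {       // would be 'override'
    return xQueueSend(_q, req, 0) == pdTRUE;        // non-blocking post
  }
  bool dequeueRequest(_task_request_t* req) {       // would be 'override'
    return xQueueReceive(_q, req, 0) == pdTRUE;     // non-blocking fetch
  }

private:
  QueueHandle_t _q;                                 // embedded action queue
};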
The iMutex uint8_t is modified without using atomics or taking a lock, so it can very much end up in a bad state if accessed from multiple CPU cores. (It may also be problematic with multiple preemptively scheduled RTOS threads, in particular when an IRQ triggers a thread switch, e.g. via taskYIELD_FROM_ISR in FreeRTOS.)
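As a point of comparison, a minimal sketch of a race-free counter (this is not the library's code, just an illustration of the lost-update hazard described above):

#include <atomic>
#include <cstdint>

// A plain "iMutex++ / iMutex--" on a uint8_t compiles to separate
// load/modify/store steps, so two cores (or a preempting thread) can
// interleave them and lose an update. An atomic RMW avoids that:
std::atomic<uint8_t> iMutexAtomic{0};

void enterGuard() { iMutexAtomic.fetch_add(1); }  // single indivisible RMW
void leaveGuard() { iMutexAtomic.fetch_sub(1); }

// Note: this only makes the counter itself race-free; it still provides
// no blocking or mutual exclusion, matching the "quick and dirty" caveat.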