Closed: Prof-Florian-Kuenzner closed this issue 2 years ago.
Hi Florian. Thanks for taking the time to write a detailed issue.
With respect, I don't think you're the target audience for this library in its default settings. I'm trying to keep the defaults so that a new user can get a sensible outcome, and so that an advanced user can modify what they need to suit their specific application.
When I started seeing people coming in from Ralph's tutorial, and then in discussion he mentioned that he assumed that delay() was redefined to vTaskDelay() by the library, as is done in the ESP32 implementation, I went around the Internet and found all the YouTube tutorials were using delay() incorrectly to blink LEDs.
I could stand on a mountain and shout at everyone "You're all doing it wrong. You aren't really using FreeRTOS at all, but rather a single-priority round-robin scheduler". Or, I could just fix it for the vast majority, by defining delay() to the sensible default.
I'm very interested in the application in your video, but I can't see any code repository to learn how you're doing it.
If you're relying on millisecond accuracy (I believe you are), then it would actually be better to use the Arduino millis() function (or, with even tighter 4 microsecond granularity, the micros() function) for your application. This would reduce the overhead imposed by the Arduino delay() function and would probably get you greater accuracy by omitting its unnecessary decision loops and long comparisons, which in 8-bit assembly do take time.
Alternatively, you could just comment out my definition of delay() and continue without further change.
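As a rough illustration of that suggestion, here is a minimal sketch of a millis()-based busy wait (waitBusyMillis is a hypothetical helper, not something from the library or the Arduino core):

// Busy wait built on millis(); it keeps the CPU, so it only makes sense in
// the highest-priority task or before the Scheduler is started.
static void waitBusyMillis( uint32_t ms )
{
    uint32_t start = millis();
    while ( (uint32_t)( millis() - start ) < ms )
    {
        // spin: no vTaskDelay(), so no task change is introduced here
    }
}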
I will add some notes in the readme about using delay() and millis() for timing, to make it clear.
I can feel your pain, seeing all these wrong usages of delay(). I mean, this is where the difference lies between quickly hacking something together, which is wonderfully possible within the Arduino framework, and working with real-time systems, where one has to go deeper into the details.
The video with the ball drop example is part of my lecture Embedded Real-Time Systems, and the sources are not publicly available right now. The students have to write the code within one day, to get the example working. We practice real-time programming methods and discuss different behaviours, as well as software and hardware limitations of the system.
Yes, in my example I rely on millisecond accuracy, which is impossible with the FreeRTOS SysTick on the Arduino Mega because of the watchdog usage, which gives me a 15 ms SysTick that can easily have an additional jitter of a few ms. In this specific code section, only one task is active, and therefore it is possible to perform a busy wait with enough accuracy. For sure, we could use a hardware timer, but I wanted to demonstrate different possibilities.
For the calculations, we already rely on the millis() function, which is driven by Timer0 behind the scenes. As a workaround, I tested commenting out the redefinition of the delay() function, and #undef delay after importing the headers. Both work as a workaround.
Personally, I don't think it's a good idea to change a function from a different library. But if you think this is the best way, then I would add some preprocessor defines that make it easy to choose between the default busy-wait behaviour from the Arduino framework and the vTaskDelay()-driven function from FreeRTOS. This makes it clearer that the choice is intended. Undefining or commenting out something feels like hacking it rather than configuring it. If you like, I can help to implement and test this.
I mean, this is where the difference lies between quickly hacking something together, which is wonderfully possible within the Arduino framework, and working with real-time systems, where one has to go deeper into the details.
Yes, you've captured exactly the reason why this was written. To enable fast prototyping of a complex system, using existing Arduino Shields and drivers. It started off as preparation for a Hackathon.
I think my preferred strategy would be to call the Arduino delay(ms) if the ms parameter is shorter than a Tick, and otherwise call vTaskDelay() if it is longer. Does that meet your requirement?
I've been trying to think of how to do this with the preprocessor, but I'm coming up with nothing. So, I'll have a go at a mydelay(ms) function that will do the comparison and then select the correct delay type.
extern void mydelay( const uint32_t ms );
#define delay mydelay

// Note: this definition must be compiled where the "#define delay mydelay"
// macro is not in effect, otherwise the inner delay() call would expand to
// mydelay() and recurse.
void mydelay( const uint32_t ms )
{
    if ( ms < portTICK_PERIOD_MS )
        delay( ms );                                             // Arduino busy wait for sub-Tick periods
    else
        vTaskDelay( (TickType_t) ( ms / portTICK_PERIOD_MS ) );  // yield to the Scheduler
}
I should also point out that the code using the short delay() should be wrapped in a critical section, so that it is not interrupted by the Scheduler.
Edit: Wrong. See below.
Either use the avr-libc methods in <util/atomic.h>, or the portENTER_CRITICAL and portEXIT_CRITICAL methods in portable.h. Otherwise the ball will occasionally be dropped.
Hint for any students reading this 😉.
My first idea was to provide something like this:
#define DELAY_BEHAVIOUR_DEFAULT_ARDUINO (1)
#define DELAY_BEHAVIOUR_DEFAULT_FREERTOS (2)

// here, the user can choose which implementation they want
//#define configDELAY_BEHAVIOUR DELAY_BEHAVIOUR_DEFAULT_ARDUINO
//#define configDELAY_BEHAVIOUR DELAY_BEHAVIOUR_DEFAULT_FREERTOS

#ifndef configDELAY_BEHAVIOUR
#define configDELAY_BEHAVIOUR DELAY_BEHAVIOUR_DEFAULT_FREERTOS
#endif

#if configDELAY_BEHAVIOUR == DELAY_BEHAVIOUR_DEFAULT_FREERTOS
#define delay(ms) vTaskDelay( (TickType_t) ( (ms) / portTICK_PERIOD_MS ) )
#endif
Here, it is clear that the intention is to choose which implementation should be used. What do you think about that?
In your example with mydelay() there is another problem: for small time periods (which depend on portTICK_PERIOD_MS) there is a busy wait, while for periods >= portTICK_PERIOD_MS there is vTaskDelay() (which can introduce a task change). From a real-time programming view, this mixed behaviour is not very reliable and should be avoided.
Thanks for the hint with portENTER_CRITICAL/portEXIT_CRITICAL.
My first idea was to provide something like this:
#define DELAY_BEHAVIOUR_DEFAULT_ARDUINO (1) #define DELAY_BEHAVIOUR_DEFAULT_FREERTOS (2)
Yes. I tried that #define method but another issue came up with platform.io. I think (but I'm unsure) that it is to do with vTaskDelay() being undeclared when the macro is defined, so it defaults to void vTaskDelay(uint32_t), the same as the Arduino delay(), while TickType_t is uint16_t. Adding something like this would help, I guess. But not tested.
#if configDELAY_BEHAVIOUR == DELAY_BEHAVIOUR_DEFAULT_FREERTOS
extern void vTaskDelay( const TickType_t xTicksToDelay );   // forward declaration for the macro below
#define delay(ms) vTaskDelay( (TickType_t) ( (ms) / portTICK_PERIOD_MS ) )
#endif
But I'm not totally happy with the above outcome. It doesn't handle the case where both delay types are needed.
When playing with the Blink_AnalogRead sketch there are two different delay lengths: very short (say less than 1 Tick) and hundreds of milliseconds long. Using the preprocessor solution doesn't resolve the problem. Using the current -4 Release does fix the problem. Provided that the very-short-delay AnalogRead Task is run at a lower priority, the higher-priority Blink Task works properly. And, as expected, if the priorities are inverted it fails, because the short-delay Task is busy waiting and the lower-priority Task is never scheduled to run.
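As a loose illustration of that priority arrangement (a sketch under assumptions: the task bodies, stack sizes and priorities are mine, not the shipped Blink_AnalogRead example):

#include <Arduino_FreeRTOS.h>

void TaskBlink( void *pvParameters )         // long delays, higher priority
{
    (void) pvParameters;
    pinMode( LED_BUILTIN, OUTPUT );
    for ( ;; )
    {
        digitalWrite( LED_BUILTIN, !digitalRead( LED_BUILTIN ) );
        vTaskDelay( 500 / portTICK_PERIOD_MS );   // hundreds of ms: yields to the Scheduler
    }
}

void TaskAnalogRead( void *pvParameters )    // sub-Tick delays, lower priority
{
    (void) pvParameters;
    for ( ;; )
    {
        (void) analogRead( A0 );
        delay( 1 );                               // less than one Tick: busy waits with the -4 release
    }
}

void setup( void )
{
    // Higher number means higher priority; the Blink Task must outrank the busy-waiting one.
    xTaskCreate( TaskBlink,      "Blink",      128, NULL, 2, NULL );
    xTaskCreate( TaskAnalogRead, "AnalogRead", 128, NULL, 1, NULL );
    // With this library the Scheduler is started after setup() returns.
}

void loop( void ) { /* not used; the tasks do the work */ }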
The only problem I see with the current solution is that the delay period calculation (a 16-bit division) can be done at compile time when ms is a constant, but with this function solution the delay period must be calculated at run time from the ms parameter. 16-bit divides are fairly slow on 8-bit machines, but I guess since we are talking about the longer FreeRTOS delay it makes little actual difference.
So, I'm not sure what is best. Unless it is a problem, I'm tempted to leave it as is for a while. See if there's some more feedback.
Thanks for the hint with portENTER_CRITICAL/portEXIT_CRITICAL.
Actually, on further thought, this idea to prevent the Scheduler doing an interrupt during delay() is not going to be successful.
The issue is that delay() relies on the increase of micros() to complete, and with interrupts globally disabled there can never be a change in timer0_overflow_count, and therefore delay() will never complete.
The only solution is to use wdt_disable() to stop the WDT from triggering, rather than a general interrupt disable, and then restore the WDT afterwards.
Sorry, I have some busy days. I will respond next week when I have time again.
@feilipu
I could stand on a mountain and shout at everyone "You're all doing it wrong. You aren't really using FreeRTOS at all, but rather a single priority round-robin scheduler". Or, I could just fix it for the vast majority, by defining delay() to the sensible default.
Well, IMHO you should indeed shout when anyone uses delay() with the Arduino_FreeRTOS_Library. So why not redefine delay() to a compile-time error? Or is that too much of a good thing?
Just my 2.3 ¢ (inflation, you know…), Flössie
Hi all!
I think it isn't a good idea to redefine delay() to vTaskDelay(). There are moments where it is necessary to use the default delay() behavior, for example in the academic environment where I currently am. I would like to teach students the difference between using delay() and vTaskDelay().
If some professional needs to use FreeRTOS correctly, they will use the vTaskDelay approach anyway.
Greetings from the Caribbean
@Floessie
I could stand on a mountain and shout at everyone "You're all doing it wrong. You aren't really using FreeRTOS at all, but rather a single priority round-robin scheduler". Or, I could just fix it for the vast majority, by defining delay() to the sensible default.
Well, IMHO you should indeed shout when anyone uses delay() with the Arduino_FreeRTOS_Library. So why not redefine delay() to a compile-time error? Or is that too much of a good thing? Just my 2.3 ¢ (inflation, you know…), Flössie
You are right: usually, one wants to use vTaskDelay(). But here, on the Arduino Mega, the SysTick (which in this case is realised with the watchdog timeout) isn't fast enough, which has the drawback that the real-time requirements can't be satisfied.
Performing a busy wait is, when you know what you are doing, appropriate in this case, because it makes it possible to react fast enough and fulfil the real-time requirements.
@leoNavarro95
Hi all! I think it isn't a good idea to redefine delay() to vTaskDelay(). There are moments where it is necessary to use the default delay() behavior, for example in the academic environment where I currently am. I would like to teach students the difference between using delay() and vTaskDelay(). If some professional needs to use FreeRTOS correctly, they will use the vTaskDelay approach anyway.
Greetings from the Caribbean
I also want to discuss the difference between a busy wait (delay()) and waiting with a possible task change (vTaskDelay()) with my students. I would also recommend a solution where one can choose the behaviour.
@feilipu
Thanks for the hint with portENTER_CRITICAL/portEXIT_CRITICAL.
Actually, on further thought, this idea to prevent the Scheduler doing an interrupt during delay() is not going to be successful. The issue is that delay() relies on the increase of micros() to complete, and with interrupts globally disabled there can never be a change in timer0_overflow_count, and therefore delay() will never complete. The only solution is to use wdt_disable() to stop the WDT from triggering, rather than a general interrupt disable, and then restore the WDT afterwards.
In my case, the SysTick will not perform a task change because of the task priority design. The task that performs the busy wait with delay() has the highest priority and will therefore not be interrupted by any other task. That's why this works in my case.
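A small sketch of that design, under assumptions (the task name, stack size, and the use of configMAX_PRIORITIES are mine, not Florian's code):

#include <Arduino_FreeRTOS.h>

void TaskBallDrop( void *pvParameters );   // does the timing-critical busy wait with delay()

void createTimingTask( void )
{
    // Highest available priority: no other task can preempt the busy wait;
    // only the tick ISR itself still fires (see the jitter note further down).
    xTaskCreate( TaskBallDrop, "BallDrop", 256, NULL, configMAX_PRIORITIES - 1, NULL );
}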
Hi all! I think it isn't a good idea to redefine delay() to vTaskDelay(). There are moments where it is necessary to use the default delay() behavior, for example in the academic environment where I currently am. I would like to teach students the difference between using delay() and vTaskDelay(). If some professional needs to use FreeRTOS correctly, they will use the vTaskDelay approach anyway. Greetings from the Caribbean
I also want to discuss the difference between a busy wait (delay()) and waiting with a possible task change (vTaskDelay()) with my students. I would also recommend a solution where one can choose the behaviour.
Yep, I managed to do that, and also learned about the Tick Interrupt and the Time Slice Algorithm. The thing is that I can't demonstrate the "busy wait" to my students (I tried with a crude for() loop), because the scheduler preempts the execution of that "busy wait". This is due to the Time Slice Algorithm with the watchdog timer interrupt, which prevents a single task from monopolising the CPU. In summary: it is not possible to do a "busy wait" that occupies all the processing time within a task (while the mentioned algorithm is activated, which I recommend not deactivating). EDITED: I refer to a system where all tasks have the same priority.
@leoNavarro95
... EDITED: I refer to a system where all tasks have the same priority
Yes, if you have enabled configUSE_TIME_SLICING, then it is the job of the scheduler to switch tasks after the time slice is over. This is also known as round-robin scheduling.
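For reference, a minimal sketch of how that option is typically set (placing it in FreeRTOSConfig.h is the usual FreeRTOS convention, not something specific to this thread):

/* In FreeRTOSConfig.h: 1 enables round-robin time slicing between
   equal-priority tasks on every tick; 0 switches only when a task blocks
   or a higher-priority task becomes ready. */
#define configUSE_TIME_SLICING    1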
@feilipu can you introduce something like this:
#define DELAY_BEHAVIOUR_DEFAULT_ARDUINO (1)
#define DELAY_BEHAVIOUR_VPORTDELAY (2)

// here, the user can choose which implementation they want
//#define configDELAY_BEHAVIOUR DELAY_BEHAVIOUR_DEFAULT_ARDUINO
//#define configDELAY_BEHAVIOUR DELAY_BEHAVIOUR_VPORTDELAY

#ifndef configDELAY_BEHAVIOUR
#define configDELAY_BEHAVIOUR DELAY_BEHAVIOUR_VPORTDELAY
#endif

#if configDELAY_BEHAVIOUR == DELAY_BEHAVIOUR_VPORTDELAY
#ifndef delay
#define delay vPortDelay
#endif
#endif
This would make things more explicit and give us the possibility to choose which behaviour we want. What do you think about this? Another good name for the macro could be configUSE_VPORT_DELAY.
As an alternative to delay(), I just found _delay_ms() and _delay_us(), as documented in avr-libc. @leoNavarro95, this could also be an alternative for you to teach the busy wait behaviour.
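A minimal sketch of those avr-libc busy waits (my example, assuming the Arduino toolchain, which already defines F_CPU; pulsePin is a hypothetical helper):

#include <util/delay.h>   // avr-libc busy-wait delays; they need F_CPU, which the Arduino core defines

void pulsePin( uint8_t pin )
{
    digitalWrite( pin, HIGH );
    _delay_us( 100 );      // pure busy wait, no interaction with the FreeRTOS tick
    digitalWrite( pin, LOW );
    _delay_ms( 2 );        // for best accuracy the argument should be a compile-time constant
}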
// here, the user can choose which implementation they want
//#define configDELAY_BEHAVIOUR DELAY_BEHAVIOUR_DEFAULT_ARDUINO
//#define configDELAY_BEHAVIOUR DELAY_BEHAVIOUR_VPORTDELAY
OK, I've done a simple solution that I think still achieves the required outcomes.
@Prof-Florian-Kuenzner
The only solution is to use wdt_disable() to stop the WDT from triggering, rather than a general interrupt disable, and then restore the WDT afterwards.
In my case, the SysTick will not perform a task change because of the task priority design. The task that performs the busy wait with delay() has the highest priority and will therefore not be interrupted by any other task. That's why this works in my case.
Actually, I don't think that's going to help in this case. The WDT is still firing, and the interrupt is still being serviced. So if you measure the jitter in your timing function, I'm sure you'll see some events where the WDT interrupted and was handled by vPortYieldFromTick(), but didn't initiate a task swap. Of course it will be only a few microseconds of jitter, so this may not matter in your application.
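One way to check that, as a rough sketch (my example; measureJitter is a hypothetical helper, and the nominal 1 ms busy wait uses the stock Arduino delayMicroseconds()):

// Repeats a nominal 1 ms busy wait and reports the worst deviation seen,
// measured with micros() (4 us granularity on the ATmega2560).
void measureJitter( void )
{
    uint32_t worst = 0;
    for ( uint8_t i = 0; i < 100; i++ )
    {
        uint32_t start = micros();
        delayMicroseconds( 1000 );
        uint32_t elapsed = micros() - start;
        uint32_t deviation = ( elapsed > 1000 ) ? ( elapsed - 1000 ) : ( 1000 - elapsed );
        if ( deviation > worst )
        {
            worst = deviation;
        }
    }
    Serial.print( F("worst-case deviation (us): ") );
    Serial.println( worst );
}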
@feilipu
OK, I've done a simple solution that I think still achieves the required outcomes.
This is exactly what I need. Thanks a lot. I'm very impressed with your fast reaction time and your friendliness to your users.
Describe the issue
Since Release 10.4.6-3, the Arduino delay() function doesn't do a busy wait anymore. This broke the real-time behaviour on my Arduino Mega, because it now uses vTaskDelay() under the hood, and on the Arduino Mega there is no real SysTick (here, the watchdog timeout with a resolution of 15 ms is used), which doesn't allow fast task changes by the FreeRTOS scheduler. Furthermore, because of the crazy oscillator, it is also not very accurate (the watchdog oscillator frequency depends on the voltage and the temperature; see page 392 of the ATmega2560 datasheet). But in combination with ISRs and the direct call of taskYIELD() if required, this works well (here, if you are interested in what we do: https://www.youtube.com/watch?v=loU-Y_Ek8XI).
Expected behavior
The delay() function comes from the Arduino framework and not from FreeRTOS. Therefore, it shouldn't be redefined. Or, when necessary, there should be an option where it is possible to decide whether to use the new behaviour or the well-known "busy wait" behaviour.
Additional context
I have already read https://github.com/feilipu/Arduino_FreeRTOS_Library/issues/113 and https://github.com/RalphBacon/244-RTOS-for-Arduino/issues/1, but I think redefining a well-understood function from a non-sleeping busy wait to a sleeping, blocking behaviour that breaks the real-time behaviour (where I don't expect changes) should be done very carefully.
Maybe we can discuss that?