c4ev3 / EV3-API

EV3-API for Programming the LEGO Mindstorms EV3 in C
http://c4ev3.github.io
GNU General Public License v2.0

Rewrite timers module #35

Closed JakubVanek closed 4 years ago

JakubVanek commented 4 years ago

Test for the SetTimerCallback() machinery:

#include <ev3.h>
#include <stdlib.h>
#include <stdio.h>

void call100(int arg) {
    fprintf(stdout, " 100 ms called @ %lu ms\n", TimerMS(0) - 50UL);
    fflush(stdout);
}

void call250(int arg) {
    fprintf(stdout, " 250 ms called @ %lu ms\n", TimerMS(0) - 50UL);
    fflush(stdout);
}

void call1000(int arg) {
    fprintf(stdout, "1000 ms called @ %lu ms\n", TimerMS(0) - 50UL);
    fflush(stdout);
}

int main() {
    ClearTimerMS(0);
    SetTimerCallback(ti100ms, call100);
    SetTimerCallback(ti1sec,  call1000);
    SetTimerCallback(ti250ms, call250);
    Wait(3000);
}
Result:

 100 ms called @ 0 ms
 250 ms called @ 1 ms
1000 ms called @ 1 ms
 100 ms called @ 100 ms
 100 ms called @ 200 ms
 250 ms called @ 250 ms
 100 ms called @ 300 ms
 100 ms called @ 400 ms
 100 ms called @ 500 ms
 250 ms called @ 500 ms
 100 ms called @ 600 ms
 100 ms called @ 700 ms
 250 ms called @ 750 ms
 100 ms called @ 800 ms
 100 ms called @ 900 ms
 100 ms called @ 1000 ms
 250 ms called @ 1000 ms
1000 ms called @ 1000 ms
 100 ms called @ 1100 ms
 100 ms called @ 1200 ms
 250 ms called @ 1250 ms
 100 ms called @ 1300 ms
 100 ms called @ 1400 ms
 100 ms called @ 1500 ms
 250 ms called @ 1500 ms
 100 ms called @ 1600 ms
 100 ms called @ 1700 ms
 250 ms called @ 1750 ms
 100 ms called @ 1800 ms
 100 ms called @ 1900 ms
 100 ms called @ 2000 ms
 250 ms called @ 2000 ms
1000 ms called @ 2000 ms
 100 ms called @ 2100 ms
 100 ms called @ 2200 ms
 250 ms called @ 2250 ms
 100 ms called @ 2300 ms
 100 ms called @ 2400 ms
 100 ms called @ 2500 ms
 250 ms called @ 2500 ms
 100 ms called @ 2600 ms
 100 ms called @ 2700 ms
 250 ms called @ 2750 ms
 100 ms called @ 2800 ms
 100 ms called @ 2900 ms
JakubVanek commented 4 years ago

The - 50UL correction compensates for the delay caused by the initial expiration period when the timer is reconfigured.

JakubVanek commented 4 years ago

I have split the large commit into multiple smaller ones. The last two commits reformat the whole file but introduce no other changes.

JakubVanek commented 4 years ago

I have added a new feature: specialized locks. I think this will be useful for preventing data races in periodic handlers that manipulate non-trivial shared data (e.g. the button handler). More info is in the commit message.

a3f commented 4 years ago

This pull request will have to wait a bit until I've reviewed the older pull requests.

JakubVanek commented 4 years ago

Hmmm, it turns out that there is a race condition / concurrent data access in SetTimerCallback(). However, it seems that it could be solved with POSIX semaphores, as they also provide sem_trywait().

a3f commented 4 years ago

> Hmmm, it turns out that there is a race condition / concurrent data access in SetTimerCallback(). However, it seems that it could be solved with POSIX semaphores, as they also provide sem_trywait.

Can you pinpoint where it happens?

JakubVanek commented 4 years ago

I have a particular edge case in mind:

I think that masking the signal in the lock function, together with tracking a lock count (which I think is needed for the case when a timer callback itself calls SetTimerCallback()), could solve multiple problems. It would also make skipping the signal handler code redundant: a signal could never arrive in the middle of a critical section; instead it would stay pending until the section is over.

I was thinking about an alternative solution that would use one dispatcher thread for all signals. Alternatively, timer_create() also supports SIGEV_THREAD, which could do the job. The advantage I see in using threads is that one could use real mutexes and avoid spinning on a variable. However, I also think this could be overkill.

JakubVanek commented 4 years ago

Hmmm, though blocking signals in a multithreaded application could be an issue - sigprocmask() is unspecified there, and pthread_sigmask() blocks the signal only for the calling thread. Another thread could still receive the signal and potentially see inconsistent data (if not guarded by a try-lock).

It boils down to whether we want to always be thread-safe or not. I think it would be nice and could save someone time when debugging some obscure issue, though I'm not sure it is worth it.

JakubVanek commented 4 years ago

I have hacked together preliminary support (no docs, no locking, no cleanup) for doing this with threads and I don't regret it. I have ended up using timerfd_*() and epoll() and it feels like the right thing to do :D

In particular, this allows one to elegantly drop the timer loop counter and the modulo arithmetic, and instead let the kernel do its magic.

JakubVanek commented 4 years ago

I think the code is ready now. I haven't tested it on the brick yet, but I hope it will work.

JakubVanek commented 4 years ago

New test run:

#include <API/ev3.h>
#include <stdlib.h>
#include <stdio.h>

void call100(int arg) {
    fprintf(stdout, " 100 ms called @ %lu ms\n", TimerMS(0));
    fflush(stdout);
}

void call250(int arg) {
    fprintf(stdout, " 250 ms called @ %lu ms\n", TimerMS(0));
    fflush(stdout);
}

void call1000(int arg) {
    fprintf(stdout, "1000 ms called @ %lu ms\n", TimerMS(0));
    fflush(stdout);
}

int main() {
    ClearTimerMS(0);
    SetTimerCallback(ti100ms, call100);
    SetTimerCallback(ti1sec,  call1000);
    SetTimerCallback(ti250ms, call250);
    Wait(3000);
    RemoveTimerCallback(ti100ms, call100);
    Wait(1000);
    RemoveTimerCallback(ti250ms, call250);
    Wait(1000);
    RemoveTimerCallback(ti1sec, call1000);
    Wait(1000);
}

Result:

 100 ms called @ 100 ms
 100 ms called @ 200 ms
 250 ms called @ 250 ms
 100 ms called @ 300 ms
 100 ms called @ 400 ms
 250 ms called @ 499 ms
 100 ms called @ 500 ms
 100 ms called @ 600 ms
 100 ms called @ 700 ms
 250 ms called @ 749 ms
 100 ms called @ 800 ms
 100 ms called @ 900 ms
 250 ms called @ 999 ms
 100 ms called @ 1000 ms
1000 ms called @ 1000 ms
 100 ms called @ 1100 ms
 100 ms called @ 1200 ms
 250 ms called @ 1249 ms
 100 ms called @ 1300 ms
 100 ms called @ 1400 ms
 250 ms called @ 1499 ms
 100 ms called @ 1500 ms
 100 ms called @ 1600 ms
 100 ms called @ 1700 ms
 250 ms called @ 1749 ms
 100 ms called @ 1800 ms
 100 ms called @ 1900 ms
 250 ms called @ 1999 ms
 100 ms called @ 2000 ms
1000 ms called @ 2000 ms
 100 ms called @ 2100 ms
 100 ms called @ 2200 ms
 250 ms called @ 2249 ms
 100 ms called @ 2300 ms
 100 ms called @ 2400 ms
 250 ms called @ 2499 ms
 100 ms called @ 2500 ms
 100 ms called @ 2600 ms
 100 ms called @ 2700 ms
 250 ms called @ 2749 ms
 100 ms called @ 2800 ms
 100 ms called @ 2900 ms
 250 ms called @ 2999 ms
1000 ms called @ 3000 ms
 250 ms called @ 3249 ms
 250 ms called @ 3499 ms
 250 ms called @ 3749 ms
 250 ms called @ 3999 ms
1000 ms called @ 4000 ms
1000 ms called @ 5000 ms
a3f commented 4 years ago

Ready for merge? If so, please rebase on top of the array PR.

a3f commented 4 years ago

> Ready for merge? If so, please rebase on top of the array PR.

Ah, it seems it already is. Well, given that you just did an update yesterday, just let me know if it's OK to merge now.

JakubVanek commented 4 years ago

Now it should be ready for final review and merging.