Closed by JakubVanek 4 years ago
The - 50UL correction fixes a delay caused by the initial expiration period when the timer is reconfigured.
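For context, a minimal generic sketch (assuming POSIX timer_settime(); this illustrates where such a correction applies, it is not the PR's exact code): it_value controls only the first expiration and it_interval the steady period, so when a timer is re-armed, an uncorrected it_value silently delays the first tick.

#include <time.h>

/* Re-arm a periodic POSIX timer. first_ms is the time until the FIRST
 * expiration; period_ms is the steady period afterwards. Passing a full
 * period as first_ms is what introduces the extra startup delay that a
 * correction like "- 50UL" compensates for. first_ms must be > 0: an
 * all-zero it_value would disarm the timer instead. */
static void arm_periodic(timer_t timer, long first_ms, long period_ms) {
    struct itimerspec spec;
    spec.it_value.tv_sec     = first_ms / 1000;
    spec.it_value.tv_nsec    = (first_ms % 1000) * 1000000L;
    spec.it_interval.tv_sec  = period_ms / 1000;
    spec.it_interval.tv_nsec = (period_ms % 1000) * 1000000L;
    timer_settime(timer, 0, &spec, NULL);
}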
I have split the large commit into multiple smaller ones. The last two commits reformat the whole file but make no functional changes otherwise.
I have added a new feature: specialized locks. I think this will be useful for preventing data races in periodic handlers that manipulate non-trivial shared data (e.g. a handler for buttons). More info is in the commit message.
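A hypothetical usage sketch of the idea (TimerLock() and TimerUnlock() are invented names for illustration only; the real API is defined in the commit): the main program holds the lock while it updates multi-field shared state, so a periodic handler never observes it half-written.

/* Names are hypothetical; see the commit for the actual API. */
typedef struct { int x; int y; } Position;

static Position target;              /* shared with a periodic handler */

void set_target(int x, int y) {
    TimerLock();                     /* hypothetical: hold off handlers */
    target.x = x;                    /* the handler can never see the   */
    target.y = y;                    /* struct with only x updated      */
    TimerUnlock();                   /* hypothetical: handlers may run  */
}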
This pull request will have to wait a bit, till I've reviewed the older pull requests.
Hmmm, it turns out that there is a race condition / concurrent data access in SetTimerCallback(). However, it seems that it could be solved with POSIX semaphores, as they also provide sem_trywait.
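A minimal sketch of the semaphore idea (illustrative shape, not the PR's code): sem_trywait() and sem_post() are async-signal-safe, so the signal handler can try to take the lock and simply skip one tick instead of blocking against an interrupted SetTimerCallback().

#include <semaphore.h>

static sem_t callback_table_sem;     /* sem_init(&callback_table_sem, 0, 1) at startup */

static void timer_signal_handler(int sig) {
    (void) sig;
    if (sem_trywait(&callback_table_sem) != 0)
        return;                      /* table is being modified: skip this tick */
    /* ... dispatch the registered timer callbacks ... */
    sem_post(&callback_table_sem);
}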
Can you pinpoint where it happens?
I have a particular edge case in mind: a timer signal arrives while the program is inside the SetTimerCallback() function. It has to lock the mutex; otherwise the signal handler could see inconsistent data. (Perhaps the handler could even re-enter SetTimerCallback() itself; but I don't understand the situation clearly.) The handler then interrupts the SetTimerCallback() function and it has to acquire the mutex and not deadlock itself (i.e. the mutex has to support some sort of recursion; to implement this, a lock around the mutex itself would have to be implemented (not sure though)).

I think that masking the signal in the lock function, together with tracking the lock count (I think this is needed for the case when a timer callback calls SetTimerCallback()), could solve multiple problems, as sketched below. It could also make skipping the signal handler code redundant: it could never happen that a signal arrives in the middle of a critical section; rather, the signal would stay pending until the section is over.
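A sketch of that masking idea (assuming a single timer signal such as SIGALRM; the function names are illustrative): the lock function blocks the signal and counts recursion depth, so re-entry from a callback cannot deadlock, and a signal raised inside a critical section is merely delayed until the outermost unlock.

#include <signal.h>

static int lock_depth = 0;
static sigset_t saved_mask;

static void timer_lock(void) {
    sigset_t block, old;
    sigemptyset(&block);
    sigaddset(&block, SIGALRM);
    /* Blocking is idempotent, so this is safe even when re-entered. */
    sigprocmask(SIG_BLOCK, &block, &old);
    if (lock_depth++ == 0)
        saved_mask = old;            /* remember the outermost mask */
}

static void timer_unlock(void) {
    if (--lock_depth == 0)
        sigprocmask(SIG_SETMASK, &saved_mask, NULL);  /* a pending signal fires here */
}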
I was thinking about an alternative solution that would use one dispatcher thread for all signals. Alternatively, timer_create() also has SIGEV_THREAD, which could do the job (see the sketch below). The advantage I see in using threads is that one could use real mutexes and avoid spinning on a variable. However, I also think this could be overkill.
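For reference, the generic shape of the SIGEV_THREAD route (plain POSIX, not this PR's code; link with -lrt on older glibc): the kernel invokes the notify function on a helper thread, so ordinary pthread mutexes work inside the callback.

#include <signal.h>
#include <time.h>

static void on_tick(union sigval sv) {
    (void) sv;                        /* runs on a thread, not in a signal handler */
    /* ... dispatch callbacks; pthread_mutex_lock() is allowed here ... */
}

static timer_t make_thread_timer(long period_ms) {
    struct sigevent sev = {0};
    struct itimerspec spec = {0};
    timer_t t;
    sev.sigev_notify          = SIGEV_THREAD;
    sev.sigev_notify_function = on_tick;
    timer_create(CLOCK_MONOTONIC, &sev, &t);
    spec.it_value.tv_sec     = period_ms / 1000;   /* first expiration */
    spec.it_value.tv_nsec    = (period_ms % 1000) * 1000000L;
    spec.it_interval         = spec.it_value;      /* then periodic    */
    timer_settime(t, 0, &spec, NULL);
    return t;
}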
Hmmm, though blocking signals in a multithreaded application could be an issue: the behavior of sigprocmask() is unspecified there, and pthread_sigmask() blocks the signal only for one thread. Another thread could still receive the signal and potentially see inconsistent data (if not guarded by a try-lock).
It boils down to whether we want to always be thread-safe or not. I think it would be nice, and it could save someone time when debugging some obscure issue, though I'm not sure it is worth it.
I have hacked together preliminary support (no docs, no locking, no cleanup) for doing this with threads, and I don't regret it. I ended up using timerfd_*() and epoll, and it feels like the right thing to do :D
In particular, this allows one to elegantly drop the timer loop counter and the modulo arithmetic, and to let the kernel do its magic instead.
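The general shape of that approach, as a sketch (generic timerfd/epoll code, not the PR's exact implementation): one fd per period, one dispatcher thread, and the uint64_t read from a timerfd already reports how many expirations elapsed, so no tick counter or modulo is needed.

#include <stdint.h>
#include <sys/epoll.h>
#include <sys/timerfd.h>
#include <unistd.h>

static int add_periodic_fd(int epfd, long period_ms) {
    struct itimerspec spec = {0};
    struct epoll_event ev = {0};
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    spec.it_value.tv_sec     = period_ms / 1000;
    spec.it_value.tv_nsec    = (period_ms % 1000) * 1000000L;
    spec.it_interval         = spec.it_value;      /* first tick == period */
    timerfd_settime(fd, 0, &spec, NULL);
    ev.events  = EPOLLIN;                          /* readable on expiration */
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
    return fd;
}

static void dispatcher_loop(int epfd) {            /* body of the single thread */
    struct epoll_event ev;
    while (epoll_wait(epfd, &ev, 1, -1) == 1) {
        uint64_t expirations;                      /* kernel-counted ticks */
        read(ev.data.fd, &expirations, sizeof expirations);
        /* ... invoke the callbacks registered for ev.data.fd ... */
    }
}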
I think the code is ready now. I haven't tested it on the brick yet, but I hope it will work.
New test run:
#include <API/ev3.h>
#include <stdlib.h>
#include <stdio.h>

void call100(int arg) {
    fprintf(stdout, " 100 ms called @ %lu ms\n", TimerMS(0));
    fflush(stdout);
}

void call250(int arg) {
    fprintf(stdout, " 250 ms called @ %lu ms\n", TimerMS(0));
    fflush(stdout);
}

void call1000(int arg) {
    fprintf(stdout, "1000 ms called @ %lu ms\n", TimerMS(0));
    fflush(stdout);
}

int main() {
    ClearTimerMS(0);
    SetTimerCallback(ti100ms, call100);
    SetTimerCallback(ti1sec, call1000);
    SetTimerCallback(ti250ms, call250);
    Wait(3000);
    RemoveTimerCallback(ti100ms, call100);
    Wait(1000);
    RemoveTimerCallback(ti250ms, call250);
    Wait(1000);
    RemoveTimerCallback(ti1sec, call1000);
    Wait(1000);
}
Result:
100 ms called @ 100 ms
100 ms called @ 200 ms
250 ms called @ 250 ms
100 ms called @ 300 ms
100 ms called @ 400 ms
250 ms called @ 499 ms
100 ms called @ 500 ms
100 ms called @ 600 ms
100 ms called @ 700 ms
250 ms called @ 749 ms
100 ms called @ 800 ms
100 ms called @ 900 ms
250 ms called @ 999 ms
100 ms called @ 1000 ms
1000 ms called @ 1000 ms
100 ms called @ 1100 ms
100 ms called @ 1200 ms
250 ms called @ 1249 ms
100 ms called @ 1300 ms
100 ms called @ 1400 ms
250 ms called @ 1499 ms
100 ms called @ 1500 ms
100 ms called @ 1600 ms
100 ms called @ 1700 ms
250 ms called @ 1749 ms
100 ms called @ 1800 ms
100 ms called @ 1900 ms
250 ms called @ 1999 ms
100 ms called @ 2000 ms
1000 ms called @ 2000 ms
100 ms called @ 2100 ms
100 ms called @ 2200 ms
250 ms called @ 2249 ms
100 ms called @ 2300 ms
100 ms called @ 2400 ms
250 ms called @ 2499 ms
100 ms called @ 2500 ms
100 ms called @ 2600 ms
100 ms called @ 2700 ms
250 ms called @ 2749 ms
100 ms called @ 2800 ms
100 ms called @ 2900 ms
250 ms called @ 2999 ms
1000 ms called @ 3000 ms
250 ms called @ 3249 ms
250 ms called @ 3499 ms
250 ms called @ 3749 ms
250 ms called @ 3999 ms
1000 ms called @ 4000 ms
1000 ms called @ 5000 ms
Ready for merge? If so, please rebase on top of the array PR.
Ah, it seems it already is. Well, given that you just did an update yesterday, just let me know if it's OK to merge now.
Now it should be ready for final review and merging.
Test for the SetTimerCallback() machinery: