I tried it, and it seems to work... almost. It occasionally gets into a spot where the `TickRequest` goes away for some reason.
Anyway, it occurred to me that rather than specifying a fixed adjustment, what I should do is use reinforcement learning inside the `TickRequest`. If I start out with an adjustment of zero (or an initial guess of, say, 10 ms) and some learning rate in [0, 1], I can compute the error from the last tick and compute the new adjustment as

adjustment = adjustment*(1 - learning_rate) + error*learning_rate

which will allow it to drift toward the correct adjustment. If we don't want learning, we just set the rate to zero.
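A minimal sketch of what that update might look like, using a hypothetical `TickAdjuster` helper rather than the actual `TickRequest` class (the names `observe`, `wait_time`, `learning_rate`, and `adjustment` are illustrative, not the real API):

```python
from dataclasses import dataclass

@dataclass
class TickAdjuster:
    """Hypothetical helper: learns how much overhead to subtract from the tick interval."""
    learning_rate: float = 0.1   # in [0, 1]; 0 disables learning entirely
    adjustment: float = 0.0      # seconds to subtract from the requested interval

    def observe(self, target_interval: float, actual_interval: float) -> None:
        """Update the adjustment from the error of the last tick."""
        error = actual_interval - target_interval
        # Exponential moving average toward the observed error.
        self.adjustment = (self.adjustment * (1 - self.learning_rate)
                           + error * self.learning_rate)

    def wait_time(self, target_interval: float) -> float:
        """Interval to actually wait before the next tick, never negative."""
        return max(0.0, target_interval - self.adjustment)
```

With `learning_rate = 0` the adjustment never moves from its initial value, so the fixed-calibration behavior falls out as a special case.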
This will handle the case in which tick drift is the reason for the overhead. If the problem is the time between the event being set and its being noticed, we can take that into account as well by having the `ClockThread` tell the `TimerRequest` when it has picked up the tick. We can use that in computing the error.
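A rough sketch of that bookkeeping, with hypothetical names (`tick_set`, `tick_noticed`, `last_pickup_delay`) and `time.monotonic()` timestamps, not the actual `ClockThread` interface:

```python
import time
from typing import Optional

class TickTiming:
    """Hypothetical bookkeeping for the delay between a tick being set and being noticed."""

    def __init__(self) -> None:
        self._tick_set_at: Optional[float] = None
        self.last_pickup_delay: float = 0.0   # seconds between set and noticed

    def tick_set(self) -> None:
        # Called when the tick event is set.
        self._tick_set_at = time.monotonic()

    def tick_noticed(self) -> None:
        # Called by the ClockThread once it actually picks up the tick.
        if self._tick_set_at is not None:
            self.last_pickup_delay = time.monotonic() - self._tick_set_at

# The pickup delay could then be folded into the error fed to the
# learning update above, e.g.
#   error = (actual_interval - target_interval) + timing.last_pickup_delay
```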
In the current code, the system's clock is pulsed by means of a `TickRequest` object held by the `ClockThread`, which is created in `start_clock()`. When working on #44 (specifically, https://github.com/HPInc/HP-Digital-Microfluidics/issues/44) [comment by @EvanKirshenbaum on Jul 30, 2021 at 12:31 PM PDT], I found that there is some overhead that appears to push the clock interval to be about 10-12 ms longer than it should be.
It would probably be a good idea to add a configurable parameter on the clock that allows it to be calibrated closer to the requested rate by subtracting a specified adjustment from the update interval returned by the tick request call.
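As a sketch of how that might look, assuming a tick callback that returns the time to wait until the next tick (the class and the names `on_tick` and `interval_adjustment` are illustrative, not the real `TickRequest` interface):

```python
class CalibratedTickRequest:
    """Hypothetical sketch: compensate for a fixed, measured amount of clock overhead."""

    def __init__(self, interval: float, interval_adjustment: float = 0.010) -> None:
        self.interval = interval                        # requested tick interval (seconds)
        self.interval_adjustment = interval_adjustment  # measured overhead, e.g. ~10-12 ms

    def on_tick(self) -> float:
        # ... per-tick work would go here ...
        # Return the wait until the next tick, shortened by the calibration.
        return max(0.0, self.interval - self.interval_adjustment)
```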
Migrated from internal repository. Originally created by @EvanKirshenbaum on Jul 30, 2021 at 2:32 PM PDT.