Closed tstenner closed 3 years ago
I think separating samples_seen and samples_since_t0 will make the estimation more robust against dropped samples, but that'll take some further tinkering.
Edit - replacing with better plots and updated gist...
Updated: https://gist.github.com/cboulay/006bab4dc971cbd9c7b49e7d32c9cd30
This image was generated using a conda environment containing an lsl.dll built from the previous version:
And this one when using the new lsl.dll:
Don't worry about the constant offset of 0.6 ms: it's just because I'm using t[0] as my offset, even though t[0] itself is subject to postprocessing with no history and is therefore a poor estimate.
So, in terms of fixing my problem, it looks good.
One more time, but only looking at the pre-flush stage so the scale stays comparable for a proper before/after comparison of the postprocessing functionality.
Before change:
After change:
In addition to the above, I read through the code and unit tests, and I'm satisfied.
PR for #117. There's one major change: the number of encountered samples is only tracked while proc_dejitter is active, i.e. after set_options() has enabled it. Previously, samples were counted regardless, so enabling dejittering afterwards re-used this sample counter and therefore assumed that a larger number of timestamps had already been used to update the dejittering parameters.