cboulay opened this issue 2 years ago
For now I am providing a manual timestamp with every sample. At 30 kHz this adds noticeable overhead.

Note to self: I might be able to manually timestamp only the 0th sample in each chunk, tag the rest as deduced, and then set `pushthrough=true` for the last sample in the chunk. I think this more closely mirrors what `push_chunk` actually does.
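Here is a minimal sketch of that per-chunk pattern as a toy sender/receiver pair. This is not the real liblsl API: the function names, the 60-sample chunk size, and the deduction rule `last_timestamp + 1/srate` are all assumptions based on how `push_chunk` appears to behave.

```python
SRATE = 30000.0
CHUNK = 60

def push_chunk_like(samples, t_first):
    """Emit (value, explicit_ts_or_None, pushthrough) tuples: only the
    0th sample carries a real timestamp, the rest are 'deduced' (None),
    and pushthrough is set only on the last sample of the chunk."""
    out = []
    for i, v in enumerate(samples):
        ts = t_first if i == 0 else None  # None stands in for DEDUCED_TIMESTAMP
        out.append((v, ts, i == len(samples) - 1))
    return out

def receive(stream, last_timestamp=0.0):
    """Mimic the inlet side: deduce a missing stamp from the previous one."""
    stamps = []
    for _, ts, _ in stream:
        last_timestamp = ts if ts is not None else last_timestamp + 1.0 / SRATE
        stamps.append(last_timestamp)
    return stamps

# One chunk whose first sample was captured at t = 5.0 s:
wire = push_chunk_like(list(range(CHUNK)), 5.0)
stamps = receive(wire)
# stamps[k] comes out as 5.0 + k / SRATE for every sample in the chunk
```

The point is that only one explicit timestamp per chunk is needed on the sender, which would remove the per-sample clock overhead.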
I encountered a problem that is probably unique to the `outlet_sync` branch, but I'm not sure. I'm using `lsl_push_sample_rawtpn`, and `pushthrough` is only true for every 60th sample (nominal srate 30 kHz). For those samples with `pushthrough=true`, I'm providing a manual timestamp (hardware clock + an updating, smoothed offset). When I record this stream with LabRecorder and inspect the raw timestamps in the xdf, the first 59 timestamps run from 0.033e-3 (1/30000) to 1.966e-3 (59/30000). The remaining samples are all correct. So it seems the first 59 samples in the first 'chunk' don't get timestamped properly.
I think the problem is in the `data_receiver`, where samples with `DEDUCED_TIMESTAMP` get timestamped based on the last timestamp. `last_timestamp` gets initialized to 0.0, so we use inferred timestamps until we get a real timestamp at the end of the chunk.

One thing that's a bit strange is that it's always exactly 59 samples with bad timestamps, which implies that even if the inlet subscribes in the middle of a chunk, the first incomplete chunk is discarded. If that's correct, then I wonder if there's a way to grab the real timestamp from the discarded chunk and use it to set `last_timestamp`. This is just speculation though; I haven't found any incomplete-first-chunk-discarding mechanism in the code yet.
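The failure mode I'm describing can be reproduced with a toy model of that receiver logic. The `1/srate` deduction step and the 0.0 initialization are assumptions from my reading of `data_receiver`, not the actual liblsl code:

```python
SRATE = 30000.0

def deduce_stamps(wire, last_timestamp=0.0):
    """Toy data_receiver: None stands in for DEDUCED_TIMESTAMP and is
    replaced by last_timestamp + 1/srate; a real stamp resets the base."""
    out = []
    for ts in wire:
        last_timestamp = ts if ts is not None else last_timestamp + 1.0 / SRATE
        out.append(last_timestamp)
    return out

# Inlet joins at a chunk boundary: 59 deduced samples arrive before the
# first real timestamp (say t = 100.0 s) at the end of the first chunk.
wire = [None] * 59 + [100.0] + [None] * 59 + [100.0 + 60.0 / SRATE]
stamps = deduce_stamps(wire)

# The first 59 stamps count up from 0.0 in 1/30000 steps (the bug):
print(stamps[0], stamps[58])  # 1/30000 ... 59/30000, matching the xdf
# Everything after the first real stamp is correct:
print(stamps[60])             # 100.0 + 1/30000
```

This reproduces exactly the 1/30000 ... 59/30000 range I see in the recording, which is why I suspect the `last_timestamp = 0.0` initialization rather than anything on the outlet side.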