fair-acc / gr-digitizers

GNU General Public License v3.0

Synchronization of sink data streams #77

Open alexxcons opened 2 years ago

alexxcons commented 2 years ago

Currently, some sink instances receive samples faster than others (seemingly at random), which makes it hard to send data packages containing data from multiple sinks.

To solve that, one could buffer the data, wait until all sinks have received the same minimum number of samples, and send the new data up to the sample that is available on all sinks.
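The buffering idea above can be sketched roughly as follows (a minimal illustration, not gr-digitizers code; the class and method names are hypothetical):

```python
from collections import deque

class MinCountAggregator:
    """Buffers per-sink samples and releases only the prefix every sink already has."""

    def __init__(self, n_sinks):
        self.buffers = [deque() for _ in range(n_sinks)]

    def push(self, sink_idx, samples):
        # Samples arrive at different rates per sink; just buffer them.
        self.buffers[sink_idx].extend(samples)

    def pop_aligned(self):
        # The minimum number of samples that all sinks have received.
        n = min(len(b) for b in self.buffers)
        # Release one aligned package of n samples per sink.
        return [[b.popleft() for _ in range(n)] for b in self.buffers]
```

For example, after `push(0, [1, 2, 3])` and `push(1, [10, 20])`, `pop_aligned()` yields two samples per sink; the third sample of sink 0 stays buffered until sink 1 catches up.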

While that works fine for a single data source, it gets trickier if sinks of multiple data sources should be synchronized.

The problem here is that the sinks (in our use case, Digitizer blocks) don't start simultaneously (we are talking about up to several seconds of delay in the worst case). So at the sinks there will always be a discrepancy in the number of samples acquired. E.g. for Sink1, connected to Source1, sample #1234 will be stamped with 12:00:00 am, whereas for Sink2, connected to Source2, sample #1234 might be stamped with 12:00:03 am.
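The discrepancy is simple to quantify: with equal sample rates, a start offset translates directly into a constant timestamp offset at equal sample indices (illustrative numbers; the sample rate is an assumption):

```python
SAMPLE_RATE = 1000.0  # samples per second (hypothetical)

def timestamp(start_time_s, sample_index, rate=SAMPLE_RATE):
    """Absolute timestamp of a sample, given the sink's acquisition start time."""
    return start_time_s + sample_index / rate

t1 = timestamp(start_time_s=0.0, sample_index=1234)  # Sink1 started at t = 0 s
t2 = timestamp(start_time_s=3.0, sample_index=1234)  # Sink2 started 3 s later
# Same sample index, but the timestamps differ by exactly the start delay (3 s).
```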

Cabad tries to solve this problem by synchronizing via timestamp, not via sample count. However, the library is still WIP and does not yet work reliably for multiple sources.

Another approach could be to synchronize the startup of the signal sources, e.g. drop all samples until all signal sources have started; then a synchronization trigger is applied to stop the dropping. That way, sample counting could be used again at the sink level. Would that be desirable? Disadvantages?
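The drop-until-trigger idea could look roughly like this (a sketch, not a GNU Radio block; all names are hypothetical):

```python
class DropUntilTrigger:
    """Gate placed after each signal source: drops samples until a common
    synchronization trigger arrives, so all sources start counting from the
    same instant."""

    def __init__(self):
        self.armed = False  # becomes True once the sync trigger has arrived

    def trigger(self):
        self.armed = True

    def work(self, samples):
        # Before the trigger: drop everything. After it: pass samples through,
        # so sample indices are aligned across all gated sources.
        return samples if self.armed else []
```

One disadvantage worth noting: the trigger itself must reach all gates with negligible skew, otherwise the residual offset merely shrinks rather than disappears.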

Since this is most likely a general GNU Radio problem, we should report it at https://github.com/gnuradio/gnuradio/issues to aim for a general solution (together with a simple reproducer).

RalphSteinhagen commented 2 years ago

We briefly discussed this also with @marcusmueller (one of the GNU Radio maintainers):

The issue is that we have independent sinks and that the individual preceding blocks (by design) perform their processing asynchronously, which is more performant in terms of throughput. There would be several solutions:

a) one large 'monster' sink (IMHO impractical since we typically have > 30 signals),
b) synchronise the sinks using, for example, a mutex whenever a trigger arrives (this would probably severely affect performance), or
c) the IMO preferred solution: rely on the circular buffers in each sink and, once the external trigger at t_0 arrives, busy-loop until all circular buffers contain the required samples down to t_0 - \delta t (N.B. the number of samples should be identical).
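Option c) can be sketched as a timestamp-based readiness check (a minimal illustration under assumed interfaces: `newest_ts` returning the newest timestamp in a buffer, and a timeout to avoid spinning forever; none of this is gr-digitizers API):

```python
import time

def wait_until_ready(buffers, t0, delta_t, newest_ts, timeout_s=1.0):
    """Busy-loop until every circular buffer holds samples covering t0 - delta_t.

    `buffers` is a list of per-sink circular buffers; `newest_ts(buf)` returns
    the timestamp of the newest sample in a buffer (hypothetical interface).
    Returns True once all buffers are ready, False on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while any(newest_ts(b) < t0 - delta_t for b in buffers):
        if time.monotonic() > deadline:
            return False  # give up rather than spin forever
        # Pure busy-wait here; a real implementation might yield or pause briefly.
    return True
```

Usage: on an external trigger at `t0`, call `wait_until_ready` and only then read the identical number of samples from each buffer.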

We should make sure that the buffers remain as lock-free as possible; otherwise, the synchronisation may cause a severe performance penalty for the whole flow graph.