Closed: joefowler closed this issue 1 year ago.
The RunClientUpdater is probably a model for this. It's a long-running goroutine that runs a ZMQ server (in that case, a PUBLISHER). How to get requests into the core data-processing loops is still unclear to me.
We decided instead to add a new RPC command to that server. Details are TBD, but the command might either put the raw data into the REPLY or store the raw data in a (local, temporary) file whose name is sent in the REPLY field.
Fixed by #323.
ZMQ, especially the new implementation (#281), seems unable to keep up with the data requirements when we turn on auto trigger and send the data to a "setup" program running easy_client.py. Those programs might include:

Although we could consider stepping back to the earlier ZMQ library, which had higher success rates, that wouldn't change the fact that the ZMQ PUB-SUB pattern never guarantees message delivery.
Consider instead adding a new feature to DASTARD: a server that can be asked to send N consecutive samples from all channels. This server would need to be a new goroutine, presumably operating a new ZMQ server socket (a REPLY socket in the REQ-REP pattern). When it gets a request, it could copy the relevant data into a slice of the right size (or a slice of slices). Once the data are fully acquired, the copy could be sent back as a REPLY to the client's REQUEST.
Alternatives include writing the reply to a (temporary?) file or to a named pipe (a Unix FIFO). Advantages of ZMQ would be: