Switch to a much simpler multiprocessing implementation, change keyboard backend, and allow tweaking of the remote process WRT garbage collection/priority.
Rather than shared arrays, we now send a dictionary (with arbitrary data) via a multiprocessing.Pipe. Notes:
The Pipe is unidirectional (data flows only from the remote process to the local one).
We don't lock access to the connections, because a) locking reduces performance, especially above 1 kHz, and b) it isn't really necessary: there's no chance of data corruption, and at high frequencies the read function never gets a chance to finish, i.e. data is always available.
This setup seems to work fine at 2 kHz, but gets a little shaky above that. We haven't tried a fast "real" device yet.
Dicts get us a few things:
a. Completely arbitrary data shapes with little effort (no need to keep track of dimensions).
b. Named data elements (e.g. data['time'], data['rel_wheel']).
The read function is probably a bit slower than before -- it blocks until there's nothing left in the Connection. We may add a maximum number of iterations to limit that behavior, though that will depend on how often the main process polls for input.
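A minimal sketch of that draining read, with a hypothetical max_reads cap to illustrate the "max # of iterations" idea (not the project's actual signature):

```python
def read_all(conn, max_reads=None):
    """Drain every pending dict from a multiprocessing Connection.

    max_reads is a hypothetical cap on iterations: without it, a
    fast-enough writer could keep this loop busy longer than the
    main process would like.
    """
    data = []
    # poll() with no timeout returns immediately, so this loop only
    # runs while messages are already buffered.
    while conn.poll():
        data.append(conn.recv())
        if max_reads is not None and len(data) >= max_reads:
            break
    return data
```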
We're trying pynput as the backend (key release events in the previous version didn't seem quite right).
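For reference, pynput delivers both press and release callbacks from its own listener thread; a sketch of the wiring (the callback names and state tracking here are illustrative, not the project's actual code):

```python
# Track currently-held keys; pynput calls these from its listener thread.
pressed = set()


def on_press(key):
    # pynput passes a Key or KeyCode object; we just record it.
    pressed.add(key)


def on_release(key):
    # Release events arrive reliably, unlike the previous backend.
    pressed.discard(key)


if __name__ == '__main__':
    from pynput import keyboard

    # Context manager starts the listener thread; join() blocks until
    # the listener stops.
    with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
        listener.join()
```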
Allow running the remote process at high priority/low niceness (which may require root on Unix platforms), and optionally disabling garbage collection. Neither seems to make a huge difference, but we haven't tested beyond a few minutes.
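Using only the stdlib, the tweaks could look roughly like this (a sketch; the helper name and defaults are assumptions, not the project's API):

```python
import gc
import os


def tune_remote_process(niceness=-10, disable_gc=True):
    # Hypothetical helper, meant to run inside the remote process.
    # Lowering niceness (raising priority) typically requires root on
    # Unix, so we swallow the PermissionError and carry on.
    if hasattr(os, 'nice'):
        try:
            os.nice(niceness)
        except PermissionError:
            pass  # not privileged; keep the default priority
    if disable_gc:
        # Avoid GC pauses in the tight polling loop.
        gc.disable()
```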