labscript-suite / labscript-devices

A modular and extensible plugin architecture to control experiment hardware using the labscript suite.
http://labscriptsuite.org

AI timing skew during long shots #80

Closed: dihm closed this issue 1 year ago

dihm commented 3 years ago

Given how the NI-DAQmx driver currently works, all of the outputs (and generally other hardware) are hardware-timed via direct outputs from the parent pseudoclocks. This is not true for the analog inputs of the DAQs, which are timed by the DAQ's internal reference oscillator. Synchronization between the two is handled at the end by correlating start times and slicing the AI traces at the appropriate times. This works fine if the reference clocks for the pseudoclock and the DAQ don't drift relative to each other, but that is generally not the case for a longer shot (on the order of 1 second), since the standard clocks for a PulseBlaster and a DAQ both have accuracies on the order of 50 ppm. A description of this problem can be found here.
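
For a sense of scale (my own back-of-the-envelope numbers, not from any datasheet): two free-running ±50 ppm oscillators can disagree by up to 100 ppm, which over a 1 s shot is up to 100 µs of relative skew, i.e. about ten sample periods at an assumed 100 kS/s AI rate:

```python
# Back-of-the-envelope worst-case skew between two free-running clocks.
# The 50 ppm figure comes from the discussion above; the AI rate is assumed.
pseudoclock_ppm = 50e-6   # pseudoclock oscillator accuracy
daq_ppm = 50e-6           # DAQ onboard oscillator accuracy
shot_duration = 1.0       # seconds

worst_case_skew = (pseudoclock_ppm + daq_ppm) * shot_duration   # seconds
ai_rate = 100e3                                                 # S/s, assumed
print(f"{worst_case_skew*1e6:.0f} µs skew, "
      f"{worst_case_skew*ai_rate:.0f} samples at {ai_rate:.0f} S/s")
# -> 100 µs skew, 10 samples at 100000 S/s
```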

Thinking about it, I wonder if there is a fairly simple solution that could be implemented within current labscript: provide a common (or at least phase-stable) external timebase reference to both the DAQ and the pseudoclock. If the ultimate timebase for the pseudoclock (which hardware-times the outputs) and for the DAQs (which time the AIs) comes from a common source, these long-term drifts should drop out (leaving only the usual fixed skew from different cable path lengths and the like).

DAQs have this ability, but naturally there are three different ways to do it: master timebase synchronization, reference timebase synchronization, and sample timebase synchronization. So before I spend time digging into how to properly introspect which method is supported for any given device and actually updating the NI-DAQmx device code, does anyone have any experience doing this in an actual experiment? Did it work OK?
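
For reference, here is a rough, untested sketch of what those three options look like in the standalone nidaqmx Python package. The device name, PFI terminal, rates, and which option a given card actually accepts are all placeholders/assumptions on my part, and this is not wired into labscript's NI_DAQmx worker:

```python
# Sketch (not tested): locking a DAQmx AI task to an external 10 MHz reference
# using the nidaqmx Python package. "Dev1", "PFI0", and the rates are
# placeholders; which of the three mechanisms a given card supports depends
# on its family (X-series, M-series, E-series, ...).
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(
        rate=100e3, sample_mode=AcquisitionType.FINITE, samps_per_chan=100000
    )

    # Option 1: reference-clock sync; the onboard oscillator is phase-locked
    # to the external 10 MHz reference.
    task.timing.ref_clk_src = "/Dev1/PFI0"
    task.timing.ref_clk_rate = 10e6

    # Option 2: master-timebase sync (older device families):
    # task.timing.master_timebase_src = "/Dev1/PFI0"
    # task.timing.master_timebase_rate = 10e6

    # Option 3: sample-clock-timebase sync, replacing the timebase that the
    # sample clock is divided down from:
    # task.timing.samp_clk_timebase_src = "/Dev1/PFI0"
    # task.timing.samp_clk_timebase_rate = 10e6

    task.start()
    data = task.read(number_of_samples_per_channel=100000)
```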

In any case, this particular issue is a pretty subtle one that can come as a nasty surprise. It would be nice to have at least a documented method that overcomes it, if not a functional labscript configuration that isn't too hard to implement when necessary.

dihm commented 3 years ago

The reason this isn't a more common complaint (I think) is because waits effectively re-sync the two clocks. Each wait starts with a hardware-timed output pulse from the DAQ that is measured by the same DAQ. That AI-referenced time is used to determine where to slice up the AI traces for subsequent times, which effectively resets the drift between the two clocks. So if you put a wait after a long delay, but right before taking the AI data you care about, everything works as expected.
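
A toy Python sketch (purely illustrative, not labscript's actual slicing code) of why re-referencing to the measured wait time helps:

```python
# Toy sketch (illustrative only) of how referencing slice times to a measured
# wait edge removes the drift accumulated since the start of the shot.
ai_rate = 100e3      # nominal AI sample rate [S/s]
drift = 50e-6        # AI clock runs 50 ppm fast relative to the pseudoclock
t_wait = 0.99        # pseudoclock time of a wait just before the data [s]
t_event = 1.00       # pseudoclock time of the AI data we care about [s]

# Sample index at which the event really lands, given the drifted AI clock:
idx_true = round(t_event * ai_rate * (1 + drift))

# Naive slicing from the shot start with the nominal rate misses by ~5 samples:
idx_naive = round(t_event * ai_rate)

# The wait's pulse is measured by the same DAQ, so its index is known in the
# AI clock's frame; slicing relative to it only accumulates drift over the
# short interval (t_event - t_wait):
idx_wait_measured = round(t_wait * ai_rate * (1 + drift))
idx_referenced = idx_wait_measured + round((t_event - t_wait) * ai_rate)

print(idx_true, idx_naive, idx_referenced)  # 100005 100000 100005
```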