Or maybe I'm not understanding correctly (I might have the internal/external terminology mixed up)... @dpshepherd, could you fill in the details?
That works exactly as described. If I am not mistaken, the ASIdiSPIM uses this strategy, and there are plenty of others who do similar things. An external trigger device can autonomously drive cameras and other devices that have been set up to run sequences (stages and state devices, and possibly others, have sequencing capabilities, which more or less means "move to the next programmed position upon receiving a trigger"). What is missing (but not necessarily essential for many experiments) is a generalized concept of how these devices are wired up and what the master trigger device should be doing.
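For illustration, here is a minimal sketch of asking the Core which loaded devices support that kind of sequencing. It assumes a running Micro-Manager instance reached through pycro-manager's Core bridge (which exposes MMCore's Java methods in snake_case); what it prints depends entirely on the loaded configuration.

```python
# Sketch: ask the Core which devices in the current configuration support
# hardware sequencing, i.e. stepping to the next pre-programmed value on each
# trigger. Assumes a running Micro-Manager instance; pycro-manager's Core
# bridge exposes MMCore's Java methods in snake_case.
from pycromanager import Core

core = Core()

# Default stages (labels are empty strings if none is configured)
z_stage = core.get_focus_device()
if z_stage:
    print(z_stage, 'Z sequenceable:', core.is_stage_sequenceable(z_stage))
xy_stage = core.get_xy_stage_device()
if xy_stage:
    print(xy_stage, 'XY sequenceable:', core.is_xy_stage_sequenceable(xy_stage))

# Individual device properties (e.g. the 'State' of a state device) can also
# report themselves as sequenceable
devices = core.get_loaded_devices()
for i in range(devices.size()):
    label = devices.get(i)
    if label == 'Core':
        continue
    props = core.get_device_property_names(label)
    for j in range(props.size()):
        prop = props.get(j)
        if core.is_property_sequenceable(label, prop):
            print(label, '/', prop, 'is sequenceable')
```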
Hi all,
For the specific use case I talked with @henrypinkard about, here is what we are doing:
Doug
I may be missing some background about the goals we want to accomplish. What do we want to tell the acquisition engine to make this happen?
One thing that is clearly missing is an abstraction of camera triggers. I had a plan with @marktsuchida years ago to add a trigger selection (software, internal, external, bulb, possibly others) to the camera interface. That is more of a Core feature that is missing.
The other thing here is that it would be nice if the acquisition engine had some understanding of the trigger devices and how to make them do what you would like them to do. That requires understanding how the user has wired things up around the microscope, as well as the delays that various devices experience. I have never had a solid idea of what that should look like, and it is likely smart to start with simple, dedicated examples to develop a good feeling for the abstraction that will be needed.
@nicost for some context, this came from an issue asking for support for sequence acquisitions through pycro-manager (https://github.com/micro-manager/pycro-manager/issues/56).
I designed this library based on our discussions last year to replicate the sequencing ability in the current Clojure engine, in which, as I understand it, the camera is the master: there is a call to `core.prepareSequenceAcquisition` followed by one to `core.startSequenceAcquisition`. Together these start a process in which the camera snaps images as fast as possible, and the external TTLs it generates can be used to drive XY or Z stage devices, but those devices must respond fast enough to move during camera readout because there is no explicit support for timing. All correct?
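For concreteness, a rough sketch of that camera-as-master flow at the Core level (again via pycro-manager's bridge). It assumes any triggered stages were already armed with position sequences and wired to the camera's TTL output; the exposure and frame count are placeholders, not recommendations.

```python
# Rough sketch of the camera-as-master pattern described above, at the Core
# level (snake_case names via pycro-manager's bridge). Assumes any triggered
# stages have already been armed with position sequences and wired to the
# camera's TTL output; exposure and frame count are placeholders.
import time
from pycromanager import Core

core = Core()
camera = core.get_camera_device()
num_frames = 100

core.set_exposure(10.0)                               # ms
core.prepare_sequence_acquisition(camera)             # allocate buffers, arm camera
core.start_sequence_acquisition(num_frames, 0, True)  # snap as fast as possible

collected = 0
while collected < num_frames:
    if core.get_remaining_image_count() > 0:
        tagged = core.pop_next_tagged_image()         # pixels + metadata tags
        collected += 1
    elif not core.is_sequence_running(camera):
        break                                         # camera finished or was stopped
    else:
        time.sleep(0.005)                             # wait for the next frame

core.stop_sequence_acquisition()
```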
@dpshepherd is talking about an alternative situation, where a second call to another device is needed to start synchronization that occurs externally. One thing I still don't understand about this setup is how the scanning light sheet is synchronized with the camera. Is this just based on the fact that you know how long a full 3D scan will take and can also predict how long acquiring N images with a given exposure will take?
It seems to me the solution here for @dpshepherd's case is to just implement an acquisition hook in pycro-manager that signals to the external device once the camera is ready. I think I will need to add a new option to execute this hook after `core.startSequenceAcquisition` has been called. I think this will provide a fairly easy-to-implement, general solution to this type of setup that still allows one to use the other nice things about this acquisition engine and the projects that depend on it.
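To make that concrete, here is roughly what it could look like from the user's side. The `post_camera_hook_fn` argument name is only a placeholder for the new option described above, and the serial-port label and command used to arm the external hardware are hypothetical.

```python
# Sketch of the proposed hook from the user's side. The 'post_camera_hook_fn'
# argument name is a placeholder for the new option described above, and the
# serial-port label and command used to arm the external hardware are
# hypothetical.
from pycromanager import Acquisition, Core, multi_d_acquisition_events

def arm_external_trigger(event):
    # Runs after core.startSequenceAcquisition, i.e. once the camera is
    # waiting for its external trigger; tell the synchronization hardware to go.
    core = Core()
    core.set_serial_port_command('COM4', 'START\n', '')  # hypothetical command
    return event

with Acquisition(directory='/data', name='scan',
                 post_camera_hook_fn=arm_external_trigger) as acq:
    acq.acquire(multi_d_acquisition_events(num_time_points=1000))
```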
More generally, I think that better support for synchronization would be a great addition, and I am also not sure what the right abstraction is. The camera thing you describe sounds like a good start. It might also be useful to have a simple device type interface in the Core, "SynchronizationDevice", that would just implement a `startSequence` function, to basically encapsulate the use case of having some external device that handles all synchronization, without having to write the bit of extra code to control it.
For our use, the external device doesn't need to know if the camera is ready or not. We assume that the call from the acquisition engine to start the camera was successful and that the camera is waiting for an external trigger to start acquiring and putting images into the buffer. We don't need anything that the current Clojure engine cannot do, as we are already running the instrument for millions of images using a beanshell script.
Re: synchronization. The hardware executes a constant-speed stage scan, with the scan rate calculated from the frame rate of the camera. The TTL trigger to start the camera, which is in the waiting state, is provided when the scanning stage, moving at constant velocity, crosses the pre-specified start point.
Obviously, this situation could be improved with better synchronization, but it works just fine for us right now.
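As a worked example of that scan-rate calculation (all numbers are made up; the real values depend on the camera and the optics):

```python
# Worked example of matching the stage scan speed to the camera frame rate.
# All numbers are made up; real values depend on the camera and optics.
exposure_ms = 5.0                  # camera exposure per frame
readout_ms = 2.0                   # per-frame readout / overhead
frame_rate_hz = 1000.0 / (exposure_ms + readout_ms)

step_um = 0.4                      # desired image spacing along the scan axis
scan_speed_um_per_s = step_um * frame_rate_hz

scan_length_um = 200.0
num_frames = int(scan_length_um / step_um)

print(f'frame rate: {frame_rate_hz:.1f} Hz')
print(f'stage speed: {scan_speed_um_per_s:.1f} um/s for {num_frames} frames')
```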
Got it, thanks for the explanation. I think this all makes sense to me now.
To clarify, if I'm looking at the correct script, it seems you're not actually using the Clojure acquisition engine right now. Acquisition engines sit at a higher level of abstraction than the Micro-Manager core, and they serve a function that is basically equivalent to what your code in that script is doing (with the slight difference in synchronization strategy we've been discussing). So once I add this in, you should be able to discard most of that beanshell script and express the same thing much more succinctly in Python (which will in turn dispatch the interaction with the core to AcqEngJ).
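As a rough illustration of what that shorter Python version might look like (the path, counts, and z range are placeholders, not your actual experiment):

```python
# Rough illustration of replacing most of the beanshell script with a short
# pycro-manager acquisition. Path, counts, and z range are placeholders; when
# the camera and triggered devices support sequencing, the engine can run them
# as hardware-timed sequences.
from pycromanager import Acquisition, multi_d_acquisition_events

events = multi_d_acquisition_events(
    num_time_points=10,
    time_interval_s=0,             # no software delay between time points
    z_start=0, z_end=50, z_step=0.5,
)

with Acquisition(directory='/data', name='lightsheet_scan') as acq:
    acq.acquire(events)
```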
Got it. It's never been quite clear to me where each part of MM lives in the code base (I've also never looked that hard), so that's my fault for saying we are using the Clojure engine.
Your summary is correct. Excited to try this out!
But it is fun to think about an acquisition engine that would understand these kinds of experimental setups. It could be a different "mode" (a stage-scanning mode triggered by the stage, or something similar), with a few parameters that result in the engine knowing what commands to send to the hardware and how to interpret the incoming images (i.e., what metadata to add).
Agreed. It would be a nice addition, though I think I'd need to see more examples of how people implement this now to understand all the different cases. I also think you might be able to get pretty far with the right device adapter/firmware on a Teensy.
We used a Teensy 3.6 in the manner you describe to control the second version of this light sheet. We put the camera into waiting mode and confocal readout mode, then synchronized galvos, DAQs, cameras, and stages using the Teensy while MM waited for images to be returned. That instrument got disassembled when I moved the lab to ASU, and we haven't fully brought it back online yet.
Right now, we are trying to use a Triggerscope 3B and behind-the-scenes Python code to control a fast structured illumination microscope in MM. We pre-program a digital micromirror device (DMD) with patterns and sequencing order using Python, set up the rest of the experiment using a beanshell script in a similar manner to the stage-scanning light sheet that we have been discussing, then run the entire experiment using the Triggerscope 3B while the camera is acquiring at a set frame rate and providing the master clock. This includes controlling lasers, galvos, the z piezo, and the DMD. This would be much easier if we did not have to use the camera as master within MM, for a lot of reasons. Having more flexibility in the file storage and metadata would also be helpful for these fast multi-dimensional experiments with non-standard axes (here SIM angle and phase).
I think Peter in my group is close to making the code live for the SIM and submitting a preprint.
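On the non-standard axes point, here is a sketch of how those might be expressed as acquisition events with arbitrary axis names; the counts and the commented-out device property are hypothetical.

```python
# Sketch of expressing non-standard axes as acquisition events: each event just
# names its position along arbitrary axes, here SIM 'angle' and 'phase'. Counts
# and the commented-out device property are hypothetical.
events = []
for t in range(100):
    for angle in range(3):
        for phase in range(5):
            events.append({
                'axes': {'time': t, 'angle': angle, 'phase': phase},
                # optionally set devices per event, e.g. a DMD pattern index:
                # 'properties': [['DMD', 'PatternIndex', str(angle * 5 + phase)]],
            })
# these could then be passed to Acquisition(...).acquire(events)
```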
Couldn't you set the camera in external trigger mode and have the Triggerscope be the master clock in this experiment? Just curious.
I think this is a great example of what an ideal acquisition engine should be able to handle. The AE should have some kind of knowledge of the timing and delays of all triggered components, be able to upload a sequence to the main clock (the Triggerscope in this case, and the Triggerscope may be a good device to start experimenting with), put the camera in external trigger mode, have the camera start sending images (triggered by the Triggerscope), and apply the correct metadata to each image for storage and display.
Yes, it is possible to set things up a bit differently. I know that Peter explored some different approaches and found a way to use the camera as master for his project. I don't know all of the details off the top of my head. For the light sheet setup with the Teensy, your suggestion is exactly what we did.
To be honest, I would caution against building around the Triggerscope. We had to make a number of changes to the firmware as supplied, which Peter will also publish when his repo goes public. It's a good idea, but it needs more maturation beyond experiments that trigger a light source and a stage.
Trying to figure out the best way to generically support applications (like @dpshepherd's light sheet) alongside the current paradigm in the Clojure acquisition engine, where the camera is the master and there is no explicit timing.
As I understand it, @dpshepherd's setup sets the camera to respond to an external trigger, calls `core.startSequenceAcquisition`, then sends a custom command to a piece of hardware which handles synchronization externally. I think this should be fairly easy to make generic, but that probably depends on how camera device adapters behave. @nicost, do you know if `core.startSequenceAcquisition` usually works with external triggers like this (if the adapter is implemented correctly)?
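For reference, the Core-level pattern in question looks roughly like this. The trigger-mode property name and value vary between camera adapters, and the serial-port label and start command are hypothetical.

```python
# The pattern in question, at the Core level: camera set to external trigger,
# sequence acquisition started (camera now just waits), then the external
# synchronization hardware is told to go. The trigger-mode property name and
# value differ between camera adapters, and the serial-port label/command are
# hypothetical.
import time
from pycromanager import Core

core = Core()
camera = core.get_camera_device()

core.set_property(camera, 'TriggerMode', 'External')   # adapter-specific
core.prepare_sequence_acquisition(camera)
core.start_sequence_acquisition(10000, 0, True)        # waits for external TTLs

core.set_serial_port_command('COM4', 'START\n', '')    # hypothetical start command

while core.is_sequence_running(camera) or core.get_remaining_image_count() > 0:
    if core.get_remaining_image_count() > 0:
        tagged = core.pop_next_tagged_image()
    else:
        time.sleep(0.005)
```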