pymmcore-plus / pymmcore-plus

Pure python/C++ micro-manager ecosystem
https://pymmcore-plus.github.io/pymmcore-plus/
BSD 3-Clause "New" or "Revised" License

Feature: consider a modular SnapEngine interface #283

Open · tlambert03 opened 11 months ago

tlambert03 commented 11 months ago

What do you guys think about a SnapEngine? We have to trigger the camera and some components for every snap. This means that snapImage does not return unless we check somewhere that it has been called and do the triggering. If we had a SnapEngine that gets called whenever we want to snap, we could put that logic there and have it automatically everywhere you might want to snap (SnapButton, StageWidget, etc.). One step further, the MDAEngine would also call the SnapEngine, so we could do setup for one event in the SnapEngine and sequence setups in the MDAEngine. The SnapEngine would have a default that works as it does now, and a different SnapEngine could be registered the way the MDAEngine is.

Originally posted by @wl-stepp in https://github.com/pymmcore-plus/pymmcore-plus/issues/144#issuecomment-1792250295
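
A minimal sketch of the kind of interface proposed above, assuming a `Protocol` with `setup_snap`/`exec_snap` hooks; every name here is an assumption, and nothing like this exists in pymmcore-plus today:

    # Hypothetical sketch only -- not existing pymmcore-plus API.
    from typing import Protocol

    import numpy as np

    from pymmcore_plus import CMMCorePlus


    class SnapEngine(Protocol):
        def setup_snap(self) -> None:
            """Prepare hardware (DAQ lines, triggers, ...) before the camera fires."""

        def exec_snap(self) -> np.ndarray:
            """Trigger the acquisition and return the resulting frame."""


    class DefaultSnapEngine:
        """Default behavior: exactly what snapping does today."""

        def __init__(self, core: CMMCorePlus) -> None:
            self._core = core

        def setup_snap(self) -> None:
            pass  # nothing to prepare in the plain-camera case

        def exec_snap(self) -> np.ndarray:
            self._core.snapImage()
            return self._core.getImage()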

tlambert03 commented 11 months ago

from https://github.com/pymmcore-plus/pymmcore-plus/issues/144#issuecomment-1792354051

Definitely open to it! I do think that the moment of acquisition is one of the most critical parts, and honestly, the part that someone might not want to use micro-manager for (for example to use acquire or to run the camera as a slave rather than master). So, having a nice abstraction around a modular frame grabber would be great. I'm going to move this to a new issue since I think the imageSnapped emission is a parallel discussion.

tlambert03 commented 11 months ago

Pinging @dpshepherd here as well. I think he might have relatively complicated "snap logic" too (controlling a DAQ or waveforms that must happen in synchrony).

If any of you have thoughts on what you'd need this interface to provide, pain points you'd like solved, or even a pseudo-code interface you'd like to see, I'd love to hear it. I do think this is a high-value topic, and one that potentially leverages the strengths of doing this all in a pure python environment.
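
Registration could then mirror the existing `register_mda_engine` pattern; to be clear, `register_snap_engine` below is purely hypothetical:

    # DefaultSnapEngine is the class sketched above; register_snap_engine
    # does not exist in pymmcore-plus (register_mda_engine does).
    from pymmcore_plus import CMMCorePlus

    core = CMMCorePlus.instance()
    core.register_snap_engine(DefaultSnapEngine(core))  # hypothetical API
    core.snapImage()  # would now route through the registered snap engine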

wl-stepp commented 11 months ago

Here is my stab at a custom MDAEngine for our iSIM: iSIM MDAEngine

For a SnapEngine, I think this part would be relevant:

    # Excerpt from a custom MDAEngine subclass; self.snap_lock is a
    # threading.Lock, self.task an NI-DAQ output task, and self._mmc the
    # CMMCorePlus instance (imports: from threading import Thread; import time).
    def setup_event(self, event: MDAEvent):
        # Prepare DAQ data for this event, peeking at the next one.
        try:
            next_event = next(self.internal_event_iterator) or None
        except StopIteration:
            next_event = None
        self.ni_data = self.device_group.get_data(event, next_event)
        # Hold the lock so exec_event blocks until the snap thread is ready.
        thread = Thread(target=self.snap_and_get, args=(event,))
        self.snap_lock.acquire()
        thread.start()

    def exec_event(self, event: MDAEvent):
        # Wait until snap_and_get releases the lock, then fire the trigger.
        self.snap_lock.acquire()
        time.sleep(self.pre_trigger_delay / 1000)
        # Writing the waveform to the DAQ triggers the actual acquisition.
        self.task.write(self.ni_data)
        return ()

    def snap_and_get(self, event):
        # Let exec_event proceed with the hardware trigger...
        self.snap_lock.release()
        # ...while snapImage blocks until the triggered frame arrives.
        self._mmc.snapImage()
        self._mmc.mda.events.frameReady.emit(
            self._mmc.getImage(fix=False), event, self._mmc.getTags()
        )
        # Release once more so the next setup_event can acquire the lock.
        self.snap_lock.release()

If I had a custom SnapEngine, I would move this logic there. Then, when snapImage() is called on the CMMCorePlus, it would call these two functions. The MDAEngine would just forward the calls and add functionality for setting up sequences. I think for our iSIM we might even be able to use the default MDAEngine then. If so, it would be even easier to combine it with an EDA-powered custom MDAEngine.
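
As a rough sketch of that forwarding idea (the snap-engine hooks are assumptions carried over from the sketch earlier in the thread; MDAEngine, setup_event, and exec_event are real pymmcore-plus API):

    # Hypothetical: an MDAEngine that delegates acquisition to a snap engine.
    from pymmcore_plus import CMMCorePlus
    from pymmcore_plus.mda import MDAEngine
    from useq import MDAEvent


    class SnapAwareMDAEngine(MDAEngine):
        def __init__(self, mmc: CMMCorePlus, snap_engine) -> None:
            super().__init__(mmc)
            self._snap = snap_engine

        def setup_event(self, event: MDAEvent) -> None:
            super().setup_event(event)  # stage moves, channels, etc. as usual
            self._snap.setup_snap()     # per-event hardware prep (e.g. DAQ data)

        def exec_event(self, event: MDAEvent):
            # Replace the default snapImage with the snap engine's trigger logic.
            frame = self._snap.exec_snap()
            self._mmc.mda.events.frameReady.emit(frame, event, self._mmc.getTags())
            return ()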

dpshepherd commented 11 months ago

Hi all-

We have two solutions for this problem right now. We created both before the current MDAEngine concept was fully implemented. Both set up an NI DAQ and perform different types of acquisitions depending on requests from the user.

Solution 1 is here and is hacky "senior person, just get it done" (aka me) code. On this microscope, the camera is the master clock for the hardware sequencing.

Solution 2 is here as written by @ptbrown1729 and is much cleaner. This is a custom implementation of an "MDAEngine" (not generic) for a complicated DAQ driven instrument with multiple cameras and modalities. On this microscope, the DAQ is the master clock for the hardware sequencing.

Peter and I were talking earlier this week about needing to look closely at the MDAEngine and think about rewriting our code to better utilize all of the great work from the pymmcore-plus team.

wl-stepp commented 11 months ago

> Solution 2 is here as written by @ptbrown1729 and is much cleaner. This is a custom implementation of an "MDAEngine" (not generic) for a complicated DAQ driven instrument with multiple cameras and modalities. On this microscope, the DAQ is the master clock for the hardware sequencing.

This is more or less the way we do it with the software running on the iSIM atm: define the DAQ data/task for the full sequence and start it when the acquisition runs. Triggering each MDAEvent seems so much easier, and my first tests didn't show the excessive jitter in frame arrival times that I was a little afraid of. So I'm pretty hopeful about this approach.

dpshepherd commented 11 months ago

> This is more or less the way we do it with the software running on the iSIM atm: define the DAQ data/task for the full sequence and start it when the acquisition runs. Triggering each MDAEvent seems so much easier, and my first tests didn't show the excessive jitter in frame arrival times that I was a little afraid of. So I'm pretty hopeful about this approach.

For us, we need to be fully hardware-sequenced because we observe timing jitter when sending software triggers, mainly due to running at very high speeds in one of the modes (~10k fps). We program the DAQ with all the triggers we need for one "image" or "image stack" and then loop that on the DAQ, using delay timers on the DAQ (not software) as required for non-continuous timelapses or stage moves. The z-piezo is voltage controlled on that setup, but the XY stage is not.
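
A hedged sketch of that loop-the-buffer-on-the-DAQ pattern using the nidaqmx package; the device name, channel, rate, and waveform below are placeholder assumptions, not taken from either setup:

    # Illustrative only: replay one acquisition cycle of trigger samples on
    # the DAQ itself, so per-frame timing never depends on software.
    import numpy as np
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    SAMPLE_RATE = 100_000      # Hz (placeholder)
    waveform = np.zeros(1000)  # one "image" worth of samples
    waveform[:10] = 5.0        # e.g. a camera trigger pulse; zeros act as delay

    with nidaqmx.Task() as task:
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0")  # placeholder channel
        # CONTINUOUS mode with regeneration (the nidaqmx default) loops the
        # buffer on the card indefinitely, with no software in the loop.
        task.timing.cfg_samp_clk_timing(
            SAMPLE_RATE,
            sample_mode=AcquisitionType.CONTINUOUS,
            samps_per_chan=len(waveform),
        )
        task.write(waveform)
        task.start()
        input("looping on the DAQ; press enter to stop ")
        task.stop()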

Edit to add: for the instrument controlled in "solution 1", if we are not running in continuous mode, we do use a software trigger to start each "image". There we can easily switch to triggering individual MDAEvents on the requested time plan for longer non-continuous timelapses.