micro-manager / mmCoreAndDevices

Micro-Manager's device control layer, written in C++

Add more explanation of circular buffer #171

Open ianhi opened 2 years ago

ianhi commented 2 years ago

I just saw @marktsuchida's comment in #168 and am extremely curious.

Also, before anybody gets confused, the current "circular buffer" is not a circular buffer or ring buffer. It is actually a bounded queue. Except in Live mode (where overflow is allowed), in which case it is completely weird.
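To make the terminology concrete, here is a small Python sketch (not Micro-Manager code) contrasting the two structures: a true ring buffer silently evicts the oldest element when full, while a bounded queue rejects the insert and forces the producer to handle the overflow.

```python
from collections import deque

CAPACITY = 4

# A true ring/circular buffer: when full, the newest frame
# silently evicts the oldest one.
ring = deque(maxlen=CAPACITY)
for frame in range(6):
    ring.append(frame)
print(list(ring))  # -> [2, 3, 4, 5]: frames 0 and 1 were dropped

# A bounded queue (what the "circular buffer" actually is):
# when full, the insert fails and the producer sees an overflow.
queue = deque()
dropped = []
for frame in range(6):
    if len(queue) < CAPACITY:
        queue.append(frame)
    else:
        dropped.append(frame)  # overflow: newest frames are rejected
print(list(queue), dropped)  # -> [0, 1, 2, 3] [4, 5]
```

Note the opposite failure modes: the ring buffer loses the *oldest* data, the bounded queue loses the *newest*.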

From https://github.com/micro-manager/mmCoreAndDevices/blob/4edb2c970793fffcbb3c9fa906b710d8feb694d0/MMCore/MMCore.cpp#L2972-L2976

I was treating it as such. I'm particularly curious about this for any implications it may have for a good live mode. E.g. we used to try to display everything in the buffer until we figured out to just call getLastImage: https://github.com/tlambert03/napari-micromanager/pull/40

marktsuchida commented 2 years ago

The only real reference is what MMStudio does, but here is an overview for a single camera. It is also good to observe the behavior in Live vs MDA using the Sequence Buffer Monitor plugin of MMStudio.

You might think that, in Live mode, it would be simpler if the Core just kept the latest image, never allowing the buffer to fill up. You'd be right in the single camera case. But we can't make that isolated change due to how Multi Camera works.

In addition to Multi Camera, we are also constrained by the desire to keep the Live Replay plugin, which is supposed to allow the user to view what just happened. It simply pops all the images left in the sequence buffer after an immediately preceding Live.

We could probably move the clear-on-overflow behavior into the Core and change it so that it only removes the oldest frame to make space. This should be compatible with existing camera adapters (which will just never see an overflow), with Multi Camera, and with Live Replay (and partially improve Live Replay behavior).
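The difference between the current clear-on-overflow behavior and the drop-oldest change proposed above can be sketched like this (plain Python, not the actual Core implementation; the function names are illustrative only):

```python
from collections import deque

CAPACITY = 3

def insert_clear_on_overflow(buf, frame):
    # Current Live behavior (performed by camera adapters): on
    # overflow, the whole buffer is cleared, losing every queued frame.
    if len(buf) >= CAPACITY:
        buf.clear()
    buf.append(frame)

def insert_drop_oldest(buf, frame):
    # Proposed Core-side behavior: on overflow, remove only the
    # oldest frame to make space, so recent history survives.
    if len(buf) >= CAPACITY:
        buf.popleft()
    buf.append(frame)

a, b = deque(), deque()
for frame in range(5):
    insert_clear_on_overflow(a, frame)
    insert_drop_oldest(b, frame)

print(list(a))  # -> [3, 4]: frames 0-2 were wiped at the first overflow
print(list(b))  # -> [2, 3, 4]: only the oldest frames were evicted
```

Under the drop-oldest policy the buffer always holds the most recent frames, which is why it would partially improve Live Replay: a replay after Live would show a contiguous recent window instead of whatever happened to accumulate since the last wipe.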

So what about multiple cameras? The Multi Camera device adapter simply forwards StartSequenceAcquisition() calls to each of the physical cameras. The physical cameras send their images via the normal mechanism, shown above for DemoCamera. So images from different cameras end up in the same sequence buffer, with no guarantee of ordering (in fact, there is no requirement that the two cameras be acquiring at exactly the same frame rate, at least in the case of Live mode).
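The interleaving problem can be simulated with two unsynchronized producers sharing one buffer (a toy model, not the Multi Camera adapter; names are hypothetical):

```python
import threading
from queue import Queue

# One shared sequence buffer, two simulated cameras pushing into it,
# mimicking Multi Camera forwarding StartSequenceAcquisition() to each
# physical camera.
sequence_buffer = Queue()

def camera(label, n_frames):
    # Nothing synchronizes the two producers, so arrival order in the
    # shared buffer is arbitrary and the frame rates need not match.
    for i in range(n_frames):
        sequence_buffer.put((label, i))

threads = [
    threading.Thread(target=camera, args=("Cam-A", 5)),
    threading.Thread(target=camera, args=("Cam-B", 5)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

frames = [sequence_buffer.get() for _ in range(sequence_buffer.qsize())]
# All ten frames arrive, but frames from A and B may interleave in any
# order; a consumer must use per-frame metadata to demultiplex them.
print(frames)
```

This is why "just keep the latest image" is not well-defined with multiple cameras: the latest entry in the buffer belongs to whichever camera happened to deliver last.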

Hopefully this illustrates the sorts of issues we face if we try to make backward-compatible improvements to the API. This is why in #168 I proposed that we should add a completely new interface and provide backward compatibility (if at all) through emulation (many years ago, I even started writing some code in that direction, although my approach at the time was probably too ambitious, especially with pre-modern C++). In addition to everything above, we are also constrained by the MMDevice interface (where backward compatibility is even more crucial), but I think that interface has fewer problems in this particular area.

ianhi commented 2 years ago

Thanks Mark! This is wonderfully detailed and very helpful. One follow-up question: I noticed in MMStudio's SnapLiveManager that the sequence acquisition interval is set to 0:

https://github.com/micro-manager/micro-manager/blob/4a5d51ea76f89eaa6e74d0d772822701661aa76a/mmstudio/src/main/java/org/micromanager/internal/SnapLiveManager.java#L256

Any insight into why it's set that way rather than to the exposure value?

We've been setting it to be the current value of the exposure:

https://github.com/tlambert03/napari-micromanager/blob/157ec63eebb3d370d89f3fecef9025cd33e6333d/micromanager_gui/main_window.py#L222

and then stopping and restarting the acquisition if the exposure changes: https://github.com/tlambert03/napari-micromanager/blob/157ec63eebb3d370d89f3fecef9025cd33e6333d/micromanager_gui/main_window.py#L318-L321

Is that approach worse somehow? Or is it perhaps precluded by the multi-camera support in Micro-Manager?

marktsuchida commented 2 years ago

startContinuousSequenceAcquisition() (which is what Live actually uses, despite what I wrote above) is like startSequenceAcquisition() except that numImages is set to infinity and stopOnOverflow is forced to false.

(Unfortunately this is left to individual device adapters, but the correct thing for them to do is what DemoCamera does: https://github.com/micro-manager/mmCoreAndDevices/blob/4edb2c970793fffcbb3c9fa906b710d8feb694d0/DeviceAdapters/DemoCamera/DemoCamera.cpp#L1048-L1051 (whether LONG_MAX is correct here depends on the camera).)

With all variants of start*SequenceAcquisition(), the intervalMs parameter is ignored (please don't shoot the messenger :). To be more precise, it is ignored by all (I hope, for consistency's sake) camera device adapters. I don't know the early history of this, but I suspect nobody bothered to actually implement the behavior in early camera adapters, and therefore MMStudio never bothered to pass anything other than 0, and therefore nobody has bothered to implement it in camera adapters. (Some cameras that support setting a frame rate independent of exposure do so through a custom property. This makes more sense anyway, other than the lack of standardization.) At this point, we should probably rename the parameter to mark it as unused, ignore what the app passes, and always pass 0 to device adapters.

As for changing the exposure during a live acquisition, yes, it is correct to restart the acquisition. You have to stop the acquisition, set the exposure, then start a new acquisition, because many cameras are incapable of changing the exposure while an acquisition is running. There has been talk about allowing the change to be applied without restarting for cameras that opt in.
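The restart pattern described above can be sketched against a toy camera model (the class and method names here are hypothetical stand-ins, not the MMCore API):

```python
class ToyCamera:
    """Minimal stand-in for a camera that cannot change exposure
    mid-acquisition (hypothetical; not the MMCore interface)."""

    def __init__(self):
        self.exposure_ms = 10.0
        self.acquiring = False

    def start_continuous(self):
        self.acquiring = True

    def stop(self):
        self.acquiring = False

    def set_exposure(self, ms):
        if self.acquiring:
            # Many real cameras reject this while a sequence is running.
            raise RuntimeError("cannot change exposure during acquisition")
        self.exposure_ms = ms

def set_live_exposure(cam, ms):
    # The safe pattern: stop the acquisition, set the exposure,
    # then start a new acquisition.
    was_acquiring = cam.acquiring
    if was_acquiring:
        cam.stop()
    cam.set_exposure(ms)
    if was_acquiring:
        cam.start_continuous()

cam = ToyCamera()
cam.start_continuous()
set_live_exposure(cam, 50.0)
print(cam.exposure_ms, cam.acquiring)  # -> 50.0 True
```

An opt-in mechanism for cameras that *can* change exposure on the fly would let set_live_exposure skip the stop/start pair, avoiding the brief gap in the live stream.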