Attention: Patch coverage is 19.25134% with 755 lines in your changes missing coverage. Please review.
Project coverage is 83.94%. Comparing base (7e1512f) to head (f7cddf5). Report is 1 commit behind head on main.
:exclamation: Current head f7cddf5 differs from pull request most recent head b26e0bf. Please upload reports for the commit b26e0bf to get more accurate results.
@wl-stepp, when you have a moment, it would be great if you could test this out. I apologize for redoing much of this; as I started to integrate it in a couple of projects I found various things I wanted to abstract a little differently and ended up getting carried away (but it was very helpful having your viewer to begin with). The main thing (I think?) that isn't yet re-implemented here is spreading grid images across the canvas, but I'll add that back in shortly. Here are a few bullet points explaining what's different here:
- `StackViewer` now works for any n-dimensional array (numpy, xarray, etc.). I find myself wanting this sort of thing often (and don't want to use napari), and this was very close to being the general-purpose tool.
- `MDAViewer` is the subclass that contains any/all concerns about pymmcore.
- `value` (a mapping of dimension name to index) is the source of truth for the current index. The `StackViewer` itself mostly just acts as an intermediary between the DimsSlider and the canvas: dims changed -> get index value -> slice the data -> send to the canvas.
- slicing goes through an `isel(data, index)` method. This method A) knows how to index into various datastores, whether they be a pymmcore-plus data handler, numpy array, dask array, xarray, etc., and B) is asynchronous to begin with (returns a `Future`), which may be important in the future if it takes a while to slice data.
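For illustration only, here is a minimal sketch of that `isel`-style slicing idea; the dispatch rules, assumed dimension names, and the thread-pool executor are assumptions for the sketch, not the actual implementation:

```python
from concurrent.futures import Future, ThreadPoolExecutor
from typing import Any, Mapping

# a single worker is enough for a sketch; the real viewer may do this differently
_EXECUTOR = ThreadPoolExecutor(max_workers=1)


def isel(data: Any, index: Mapping[str, int]) -> Future:
    """Asynchronously slice `data` at `index`, returning a Future of a 2D plane."""
    return _EXECUTOR.submit(_isel_sync, data, index)


def _isel_sync(data: Any, index: Mapping[str, int]) -> Any:
    # xarray-like objects (and some datastores) know their dimension names
    if hasattr(data, "isel"):
        return data.isel(dict(index))
    # plain numpy/dask arrays: assume the leading axes are named, e.g. (t, c, y, x)
    dims = ("t", "c")  # assumed axis order for the sketch
    slicer = tuple(index.get(d, slice(None)) for d in dims)
    return data[slicer]
```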
I'm not sure what code you have that has begun to use `experimental.StackViewer`, so I can't tell yet whether you'll be able to mostly drop this in, but have a look at `examples/mda_viewer.py` here and let me know if you're ok with these changes (make sure to run `pip install -U superqt[iconify,cmap]`).
Looks exciting!
I have tried some things with the demo config, but will try to find some time to test on the scopes next week. Here are some observations:
```python
import sys
import time
from queue import Queue

from pymmcore_plus import CMMCorePlus
from pymmcore_widgets._stack_viewer_v2._mda_viewer import MDAViewer  # import path as used below
from qtpy import QtWidgets
from useq import MDAEvent

mmcore = CMMCorePlus.instance()
mmcore.loadSystemConfiguration()  # load the demo config (as mentioned above)

q = Queue()  # create the queue
STOP = object()  # any object can serve as the sentinel
q_iterator = iter(q.get, STOP)  # create the queue-backed iterable

app = QtWidgets.QApplication(sys.argv)
canvas = MDAViewer()
mmcore.mda.events.frameReady.connect(canvas._data.frameReady)
mmcore.mda.events.sequenceStarted.connect(canvas._data.sequenceStarted)
mmcore.mda.events.sequenceFinished.connect(canvas._data.sequenceFinished)
canvas.show()
mmcore.run_mda(q_iterator)

q.put(MDAEvent(index={"t": 0, "c": 0}, exposure=20))
time.sleep(1)
q.put(MDAEvent(index={"t": 1, "c": 0}, exposure=20))

q.put(STOP)
sys.exit(app.exec_())
```
Shows the frames, but does not change with the slider. Does not add the second channel in the luts.
A couple of smaller things:
- When in mono, clims should be saved by channel and maybe also the slider range
- When in mono changing channel should change label on slider in luts panel
- config names should be channel names/configs (especially nice for EDAs when not all channels are always there)
- Looks like only one channel is updated in composite mode
In the mda_viewer example, because of the "Color Test Pattern" it seems like only the first channel changes with the slider. Maybe use a lower frequency for the artificial waves instead?
Our users really like that LUTs are persistent per channel in v1. The question is where this behavior should come in, and it might also be relevant for many other widgets. But maybe LutControl could get something like set_from_settings?
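Just to illustrate the idea (the names `LutSettings`, `LutMemory`, and `set_from_settings` are hypothetical, not an existing API), per-channel LUT state could be remembered and pushed back into the control when the channel changes:

```python
# hypothetical sketch of per-channel LUT persistence; none of these names
# exist in pymmcore-widgets today
from dataclasses import dataclass


@dataclass
class LutSettings:
    clims: tuple[float, float] = (0.0, 1.0)
    cmap: str = "gray"


class LutMemory:
    """Remember LUT settings per channel name and restore them on channel change."""

    def __init__(self) -> None:
        self._by_channel: dict[str, LutSettings] = {}

    def save(self, channel: str, clims: tuple[float, float], cmap: str) -> None:
        self._by_channel[channel] = LutSettings(clims, cmap)

    def restore(self, channel: str, lut_control) -> None:
        settings = self._by_channel.get(channel)
        if settings is not None:
            # something like the suggested LutControl.set_from_settings()
            lut_control.set_from_settings(settings)
```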
Overall it's very nice to see all the abstractions; it should make this easy to adapt.
thanks @wl-stepp for taking a look so quickly! very much appreciate it.
I was not immediately able to get event-driven acquisitions to work.
I believe you would need to modify that slightly, as in the `examples/mda_viewer.py` example:
```python
canvas = MDAViewer()
canvas.show()
mmcore.run_mda(q_iterator, output=canvas.data)
```
Passing the datastore held by the canvas into the `run_mda()` method is a bit more explicit (and doesn't require the additional connections). It also means you could easily share the same datastore for other purposes:
```python
datastore = ...
canvas = MDAViewer(datastore)
canvas.show()
mmcore.run_mda(q_iterator, output=datastore)
```
> - When in mono, clims should be saved by channel and maybe also the slider range
> - When in mono changing channel should change label on slider in luts panel
> - config names should be channel names/configs (especially nice for EDAs when not all channels are always there)
💯 ... I liked all of these things about your version too. Will port them over.
> Looks like only one channel is updated in composite mode
yeah, I only recently changed that from "Noise" to the color test pattern, because Noise was slow in some cases. But you're absolutely right, it looks very static. I like your suggestion much better.
regarding the Queue application, I just tried it out, and this does show images:
```python
import sys
from queue import Queue

from pymmcore_plus import CMMCorePlus
from qtpy import QtWidgets
from useq import MDAEvent

from pymmcore_widgets._stack_viewer_v2._mda_viewer import MDAViewer

app = QtWidgets.QApplication(sys.argv)
mmcore = CMMCorePlus.instance()
mmcore.loadSystemConfiguration()

canvas = MDAViewer()
canvas.show()

q = Queue()
mmcore.run_mda(iter(q.get, None), output=canvas.data)

q.put(MDAEvent(index={"t": 0, "c": 0}, exposure=10))
q.put(MDAEvent(index={"t": 1, "c": 0}, exposure=10))
q.put(None)

app.exec()
```
however, there is still a problem with `iter(Queue())` in that it doesn't result in an `MDASequence` with known `sizes` (i.e. there's no way to know ahead of time what the dimensions are going to look like). So you do see a burst of images, but then indexing through them doesn't work, because the underlying datastore itself doesn't work. This means queues don't currently work well with the 5D-writer bases (like OMEZarrWriter), which do currently require some degree of regularity in the dimensions.

I do think we could/should make a data handler that makes zero assumptions about the data it is going to see, and simply stores a sequence of 2D camera planes associated with an index or full MDAEvent. I think this viewer would likely work with that type of thing fine.
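To make that concrete, here is a minimal sketch of what such a "zero-assumption" handler could look like; the class name, the exact `frameReady` signature, and the lookup logic are all assumptions for the sketch, not an existing implementation:

```python
# sketch of a handler that assumes nothing about dimensions: it just appends
# (index, frame) pairs as they arrive (names/signatures are illustrative only)
from __future__ import annotations

import numpy as np
import useq


class UnstructuredDataHandler:
    def __init__(self) -> None:
        self._frames: list[tuple[dict, np.ndarray]] = []

    def frameReady(self, frame: np.ndarray, event: useq.MDAEvent, meta: dict) -> None:
        # store the raw 2D plane together with the event index that produced it
        self._frames.append((dict(event.index), frame))

    def isel(self, index: dict) -> np.ndarray | None:
        # return the most recent frame whose stored index matches all requested keys
        for stored_index, frame in reversed(self._frames):
            if all(stored_index.get(k) == v for k, v in index.items()):
                return frame
        return None
```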
Yes, that was my conclusion as well. I think that makes it a separate issue. I want to change the current EDA implementation that is running on the microscope to use the queue, so I will also think about this.
See https://github.com/pymmcore-plus/pymmcore-plus/pull/348, which this PR is actually now using; it works with queues as well (see the mda_viewer_queue example).
closing in favor of #299, which branched off of this
new stack viewer, inspired by @wl-stepp's awesome stack viewer, but generalized and abstracted a bit more: it works for any array type (such as numpy, xarray), offers multiple color modes (grayscale and composite), hopefully has a somewhat more robust indexing and event system, and abstracts out the vispy part, also providing a pygfx backend (with a pure Qt backend in the works as well)
needs some changes in superqt https://github.com/pyapp-kit/superqt/pull/242
depends on https://github.com/pymmcore-plus/pymmcore-plus/pull/348
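Purely to illustrate the backend abstraction mentioned above (the method names here are illustrative, not the actual interface in this PR), the idea is that the viewer talks to a small drawing interface that vispy, pygfx, or a pure-Qt canvas can each implement:

```python
# rough sketch of a swappable canvas backend; method names are illustrative,
# not the actual interface used by the viewer
from typing import Any, Protocol

import numpy as np


class CanvasBackend(Protocol):
    def add_image(self, data: np.ndarray, cmap: str = "gray") -> Any:
        """Create an image layer on the canvas and return a handle to it."""

    def set_data(self, handle: Any, data: np.ndarray) -> None:
        """Replace the pixel data shown by an existing image layer."""

    def set_clims(self, handle: Any, clims: tuple[float, float]) -> None:
        """Update the contrast limits of an image layer."""

    def refresh(self) -> None:
        """Request a redraw of the canvas."""
```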