ischoegl closed this issue 4 years ago
Have you looked at Pycro-Manager's acquisition functionality? It provides many of the same features as the Java APIs, though the underlying code is separate from that running the Micro-Manager GUI. This API was designed to be more concise than calling all of the Java APIs.
Alternatively, since it looks like you're basically just trying to run live mode from a script, you should also take a look at Napari, which would serve as a better alternative to matplotlib.
Thanks for the suggestions, @henrypinkard! I haven't really looked at `Acquisition` objects, as our application isn't a microscopy application per se (although we do use filter wheels, etc.), so much of the 'standard' workflow does not apply. I hadn't seen Napari yet, and it does look interesting.
Regarding my original question: is there any way to snap an image and load the data into `studio.displays().get_active_data_viewer()`? I'm not very familiar with the code base, but once I have some pointers, I should be able to figure out the rest. I'd be happy to work out an example for the Sphinx docs. Also, sorry to ask the question here rather than on the forum, but your examples page suggested reaching out here.
I am also not familiar with how the `studio` APIs work. This might be a better question for the main micro-manager repo. If you can figure out a way to do this with Java code (and `byte[]` arrays), the functionality should map to pycro-manager and numpy arrays.
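For reference, here is a minimal sketch of what that mapping looks like on the Python side, using synthetic data in place of a real snapped image (the `pix`/`tags` names follow pycro-manager's tagged-image convention; the pixel values and dimensions here are fabricated):

```python
import numpy as np

# A snapped image arrives as a flat pixel array (backed by a Java byte[] or
# short[]) plus a dict of tags; reshaping with the 'Height' and 'Width' tags
# recovers the 2-D numpy image. Synthetic stand-ins below.
flat_pixels = np.zeros(480 * 640, dtype=np.uint16)  # stand-in for tagged.pix
tags = {'Height': 480, 'Width': 640}                # stand-in for tagged.tags

image = np.reshape(flat_pixels, (tags['Height'], tags['Width']))
print(image.shape)  # (480, 640)
```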
Though `Acquisition` provides convenience features for standard microscopy workflows, it doesn't require the use of any of them and can be used quite generally to acquire and display images. It might be worth looking into further, because if you run into problems, I can at least advise on solutions.
Thanks for the clarifications @henrypinkard! I'll probably look into the `studio` API some more, as there should be a way.
Regarding your comments about `Acquisition`: interesting to know that it's more generic than I had assumed. We are using MM to control an application that involves high dynamic range imaging based on machine vision cameras (sweeps of exposure times) in combination with optical filters (on a Thorlabs wheel), and are saving to data structures that mimic (and are compatible with) MM's hyperstacks. There are no stages involved. We have interacted with MM using the Python interface in the past (MM1.4, which we're currently porting to MM2.0), and have automated acquisitions using Python scripts (essentially bypassing much of MM's built-in capabilities). I'll have another look at the documentation, but if you think that what we're doing can be handled by `pycromanager` natively, please let me know.
Yes, this should all be fairly easily doable through the acquisition interface. You'll want to create your own customized acquisition events, one for each image you record. You can set a different exposure time for each event, and you can specify custom properties to control the filter wheel, or treat it as a channel in MM. You can specify custom `Axes` with each event in order to identify the different images in your dataset.
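As a rough sketch of what such events could look like (the `'FilterWheel'`/`'Label'` device and property names are placeholders for whatever your hardware uses, and the exposure sweep is just an example):

```python
import numpy as np

# One acquisition event per image: a custom 'exposure' axis to identify each
# image, a per-event exposure time (ms), and a device property setting for a
# filter wheel. Device/property names below are placeholders.
events = []
for idx, exposure in enumerate(np.linspace(100, 1000, 10)):
    events.append({
        'axes': {'exposure': idx},                    # custom axis for this image
        'exposure': float(exposure),                  # per-event exposure time
        'properties': [['FilterWheel', 'Label', 'Filter-1']],
    })
print(len(events))  # 10
```

The events would then be passed to `acq.acquire(...)` inside an `Acquisition` context.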
@henrypinkard ... sorry to follow up here, but I tried to use `Acquisition` and am unable to get it to work. My script is

```python
import pycromanager as pm
import numpy as np
# from pathlib import Path

if __name__ == '__main__':
    # cwd = Path.cwd()
    exposures = np.linspace(100, 1000, 10)
    with pm.Acquisition(directory='.', name='pycrotest') as acq:
        events = []
        for idx, exposure in enumerate(exposures):
            evt = {
                'axes': {'exposure': idx},
                # 'properties' for the manipulation of hardware by specifying
                # an arbitrary list of properties
                'properties':
                    [['SaperaGigE', 'Exposure', str(exposure)]]}
            events.append(evt)
        acq.acquire(events)
```
which throws the following exception:

```
In [1]: %run pycrotest.py
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
~\GitLab\umanager\examples\pycrotest.py in <module>
      6 cwd = Path.cwd()
      7 exposures = np.linspace(100, 1000, 10)
----> 8 with pm.Acquisition(directory='.', name='pycrotest') as acq:
      9 events = []
     10 for idx, exposure in enumerate(exposures):

~\miniconda3\envs\crio\lib\site-packages\pycromanager\acquire.py in __init__(self, directory, name, image_process_fn, pre_hardware_hook_fn, post_hardware_hook_fn, show_display, tile_overlap, magellan_acq_index, process, debug)
    190 y_overlap = tile_overlap
    191
--> 192 self._remote_acq = acq_factory.create_acquisition(directory, name, show_viewer,
    193                    tile_overlap is not None, x_overlap, y_overlap)
    194

~\miniconda3\envs\crio\lib\site-packages\pycromanager\core.py in <lambda>(instance, signatures_list, *args)
    261 params, methods_with_name, method_name_modified = _parse_arg_names(methodSpecs, method_name, convert_camel_case)
    262 return_type = methods_with_name[0]['return-type']
--> 263 fn = lambda instance, *args, signatures_list=tuple(methods_with_name): instance._translate_call(signatures_list, args)
    264 fn.__name__ = method_name_modified
    265 fn.__doc__ = "{}.{}: A dynamically generated Java method.".format(_java_class, method_name_modified)

~\miniconda3\envs\crio\lib\site-packages\pycromanager\core.py in _translate_call(self, method_specs, fn_args)
    341 #args that are none are placeholders to allow for polymorphism and not considered part of the spec
    342 # fn_args = [a for a in fn_args if a is not None]
--> 343 valid_method_spec = _check_method_args(method_specs, fn_args)
    344 #args are good, make call through socket, casting the correct type if needed (e.g. int to float)
    345 message = {'command': 'run-method', 'hash-code': self._hash_code, 'name': valid_method_spec['name'],

~\miniconda3\envs\crio\lib\site-packages\pycromanager\core.py in _check_method_args(method_specs, fn_args)
    467
    468 if valid_method_spec is None:
--> 469 raise Exception('Incorrect arguments. \nExpected {} \nGot {}'.format(
    470     ' or '.join([', '.join(method_spec['arguments']) for method_spec in method_specs]),
    471     ', '.join([str(type(a)) for a in fn_args]) ))

Exception: Incorrect arguments.
Expected or java.lang.String, java.lang.String, boolean
Got <class 'str'>, <class 'str'>, <class 'bool'>, <class 'bool'>, <class 'int'>, <class 'int'>
```
I was following your example, so I'm not quite sure why the exception is raised.
Are you running the latest nightly build of Micro-Manager?
I installed MM at the end of May:

```
MM Studio version: 2.0.0-gamma1 20200524
MMCore version 10.1.0
Device API version 69, Module API version 10
```

I'll upgrade later today and will report back.
@henrypinkard - thank you for the prompt response - I truly appreciate it. I got things to work with the latest nightly install, i.e.

```
MM Studio version: 2.0.0-gamma1 20200812
MMCore version 10.1.0
Device API version 69, Module API version 10
```

However, looking at the tif file generated by the script above, it contains only one image (with the first exposure set correctly - in this case 100 ms), although I was trying to create 10 events/images with varying exposures. I had assumed that the axes correspond to the axes of a HyperStack? I am probably overlooking something obvious?
Set `'exposure'` directly in the event; see here.
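A minimal sketch of that change (the exposure values are just examples; the point is that `'exposure'` is a top-level key of the event, in ms, rather than a device property):

```python
# The exposure time (ms) goes directly in the event, alongside 'axes',
# instead of being set through a camera device property.
exposures = [100.0, 550.0, 1000.0]  # example values
events = [{'axes': {'exposure': idx}, 'exposure': exp}
          for idx, exp in enumerate(exposures)]
print(events[1])  # {'axes': {'exposure': 1}, 'exposure': 550.0}
```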
Thank you for the suggestion. After setting `exposure` directly, I am now getting 2 frames (the first two exposures), i.e. still fewer than the anticipated 10 frames.
In case this is of any relevance, I'm on Windows 10 and am running Python from a conda environment (the Python version is 3.8.3)
So you get only two frames when adding the exposure, but if you delete the `'exposure'` field from the event, you get all 10 (but with a constant exposure)? Is that right?
No. I switched over to MM's demo camera and removed almost everything:

```python
import pycromanager as pm
import numpy as np

if __name__ == '__main__':
    exposures = np.linspace(100, 1000, 10)
    with pm.Acquisition(directory='.', name='pycrotest') as acq:
        events = []
        for idx, exposure in enumerate(exposures):
            evt = {'axes': {'exposure': idx}}
            events.append(evt)
        print(len(events))
        acq.acquire(events)
```
I get 10 events, but only a single frame is generated; renaming to a different channel does not change the outcome.
Thanks for doing that. This was a small bug, unrelated to exposure, that is now fixed. It will be available in the nightly builds a few days after https://github.com/micro-manager/micro-manager/pull/904 merges. In the meantime, you can work around it by adding `'time': 0` to your acquisition events:

```python
{'axes': {'time': 0, 'exposure': idx}}
```
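Applied to the stripped-down script, the workaround amounts to building the events as follows (a sketch; the constant `'time': 0` axis makes every image a frame of the same time point):

```python
# Workaround: give every event a constant 'time' axis in addition to the
# custom 'exposure' axis, so all 10 images land in one dataset.
events = [{'axes': {'time': 0, 'exposure': idx}} for idx in range(10)]
print(len(events))  # 10
```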
It's at least fixed on the demo camera now. Maybe there is still something remaining for your actual system.
@henrypinkard ... thank you for the prompt response! I can confirm that adding `time` fixes things for the demo camera, but it does not work for my actual system (I am still getting only 2 frames).
Hmm, maybe this is a problem with your camera device adapter then. Try running `core.set_exposure` with the actual exposure values produced by the `linspace`. Also, you can look at the CoreLogs (in the main Micro-Manager install directory) for anything suspicious.
I get the same behavior with the GigE camera adapter distributed with MM as with our internal adapters (the latter are necessary because the MM GigE adapter doesn't produce images for our camera due to timeout issues). There aren't any errors in the CoreLogs for the MM-distributed adapter; in all cases, only 2 frames are generated.
PS: The `core.set_exposure` method works without issues (I assume this is separate from `Acquisition`, i.e. it involves a `Bridge`).
The acquisition code internally calls that method, along with `core.snap_image` and `core.get_tagged_image(0)`. Maybe try just calling these three in sequence and see if a difference between the demo camera and your camera arises.
Hm - I'm not sure what context you're referring to - the `core` methods work, but how would I wrap them in the `Acquisition` context?
No acquisition involved. Does this work differently with the two cameras?

```python
for exposure in np.linspace(100, 1000, 10):
    core.set_exposure(e)
    core.snap_image()
    image = core.get_tagged_image(0)
```
Thanks for the clarification - this is what I thought you had in mind, but I wasn't 100% sure. And yes, this works as expected (after replacing `e` with `exposure`).
In that case I'm not sure what to suggest next, aside from remote-controlling your system to run from source and take a look. Would this be possible? You can email me to set up a time to talk: hbp [at] berkeley [dot] edu
Got it - thanks. FYI, the device in question is a Teledyne DALSA Genie Nano GigE machine vision camera.
Turns out it was a bug in the acquisition engine that only comes up with certain cameras. Now fixed and will be in nightly builds after this PR merges (https://github.com/micro-manager/micro-manager/pull/906)
I am currently looking into `pycro-manager` and was wondering whether it is possible to automate image acquisition while showing the images using Micro-Manager's own preview windows (MM is already open, which would make this an alternative to matplotlib). The low-level example works well, and I have started to look into `MMStudio` via the Micro-Manager Java APIs. While I have located `DisplayManager`, `DataManager`, `Image`, `DataStore`, `DisplayController`, etc., I have thus far been unable to open a new (or even interact with an existing) preview window.

I realize that `pycro-manager` is fairly new and some interfaces may not be complete, and it may not be possible to instantiate some objects from Python? There aren't many examples to go from right now, so any pointers would be appreciated! Thanks!

PS: Here's the extent of what I've located