Hi Jonas, thanks a lot for posting this issue!
It's very useful to know there seems to be a problem when installing using the conda yml files. Will have a look at this, thanks! Great that you managed to install the dependencies yourself.
Concerning your fusion run, first of all the configuration file looks good to me. The error you see indicates that during processing, an intermediate file is read in as a generic array rather than an 'ImageArray' (which is a numpy array additionally equipped with some attributes such as origin and spacing).
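For illustration, such a subclass can look roughly like this; this is just a sketch of the idea, not the actual implementation in `mvregfus.image_array`:

```python
import numpy as np

class ImageArray(np.ndarray):
    """Sketch: a numpy array additionally carrying origin/spacing metadata."""

    def __new__(cls, input_array, origin=None, spacing=None):
        obj = np.asarray(input_array).view(cls)
        obj.origin = np.zeros(obj.ndim) if origin is None else np.asarray(origin)
        obj.spacing = np.ones(obj.ndim) if spacing is None else np.asarray(spacing)
        return obj

    def __array_finalize__(self, obj):
        if obj is None:
            return
        # propagate metadata through views and slices; an intermediate file
        # read back in as a plain ndarray has no such attributes, which is
        # the kind of mismatch producing the error above
        self.origin = getattr(obj, 'origin', None)
        self.spacing = getattr(obj, 'spacing', None)
```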
I currently don't have access to my MVRegFus testbench until next week. However, I just went through the code and have a suggestion for you to try in the meantime. Namely, you could modify a line in `mvregfus/mv_graph.py`: comment out line 31 and use the currently commented line 30 instead. It could be that saving intermediate images as zarr files is creating a problem in this configuration, and falling back to the `.imagear.h5` extension could prevent it. I'm hoping this already helps; otherwise I'll have a better look early next week.
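Schematically, the change amounts to something like this (the variable name below is a placeholder; the actual lines 30/31 in `mv_graph.py` differ):

```python
# mvregfus/mv_graph.py, around lines 30/31 (placeholder name):
# intermediate_extension = '.zarr'        # disable the zarr variant...
intermediate_extension = '.imagear.h5'    # ...and enable the h5 fallback instead
```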
Thanks for the swift and helpful response, Marvin!
I have applied the hotfix you suggested and - lo and behold - it runs!
Or at least it ran further than before. I then experienced another error during fusion, see below. After changing from `fusion_method = 'LR'` to `fusion_method = 'weighted_average'`, it ran all the way through with no errors! 🎉
I gotta say you've done a great job with this thing! Installation issues and stability aside, and notwithstanding the somewhat arcane config file, this is a great piece of software! Loosely compared to Fiji's Multiview Reconstruction plugin, which I'm also currently trying out, your MVRegFus...
Anyway, I think more people should know about this piece of software; multi-view registration and fusion still seems to be a major bottleneck for ordinary users (esp. on the ZEISS Z1, where the built-in processing options are pretty much non-functional).
In any case, it would be cool to get the deconvolution working too. Here's the error I got:
Great that we're on to the next step! And that the full processing pipeline ran through at least for the (non-deconvolution) additive fusion.
I'm glad to hear that the code seems useful to you so far. Indeed there are some stability and installation issues. Regarding the configuration file, I also agree that it's suboptimal and not entirely straightforward to get used to. But as you say, the idea is that after setting some basic parameters, the processing is relatively automated, at least for a "classical" Z1/Z7 multi-view dataset. In the near future I'm planning to write a napari plugin that builds upon this code (including some refactoring, clean-up and testing), hoping to make its functionality more accessible.
Yes, as you mention, the fusion doesn't require beads to be present in the sample. For registration, making use of the .czi metadata plus performing a sequential registration works well. And for the reconstruction, a simple Gaussian-shaped PSF is used. There's a default value for its extent along the xy and z dimensions (`LR_sigma_z = 4` and `LR_sigma_xy = 0.5` in the configuration file) which one could vary, and generally this seems to work relatively well. One could argue that precisely measuring the PSF for thick multi-view specimens has only limited utility, as 1) this is typically done outside of the specimen and the PSF deteriorates quickly within the sample, and 2) the PSF also varies along the light sheet. Preibisch et al. also mention in their publication that they don't find precise PSF estimation to play an important role in obtaining sharper reconstructions.
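For illustration, a synthetic PSF of this kind could be built as follows; this is just a sketch, assuming the sigmas are in pixels and picking an arbitrary kernel shape:

```python
import numpy as np

def gaussian_psf(shape=(33, 9, 9), sigma_z=4.0, sigma_xy=0.5):
    """Anisotropic Gaussian PSF on a (z, y, x) grid, normalised to unit sum."""
    z, y, x = (np.arange(s) - s // 2 for s in shape)
    zz, yy, xx = np.meshgrid(z, y, x, indexing='ij')
    psf = np.exp(-zz**2 / (2 * sigma_z**2) - (yy**2 + xx**2) / (2 * sigma_xy**2))
    return psf / psf.sum()
```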
Regarding the error you're experiencing: unfortunately it's not clear to me what is happening (it could also be that the traceback is obfuscated by dask). The problem obviously occurs within cupy. That could mean there's a bug in the code that calls cupy wrongly, or there's a problem downstream of the way cupy is used in the code. So until I can test the code with a fresh install on a Windows machine, a suggestion: uninstall cupy from the mvregfus environment and try again. There'd be no GPU acceleration, but there's a chance it runs through.
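For context, the CPU fallback works because of the usual optional-import pattern, sketched here (not necessarily the exact logic in the code):

```python
try:
    import cupy as xp  # GPU arrays when cupy is installed and working
except ImportError:
    import numpy as xp  # plain numpy on the CPU otherwise
    print('CuPy NOT available, using multiple threads for fusion')

# downstream code then works with xp arrays, so removing cupy
# transparently routes the fusion through the (slower) CPU path
```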
A quick note regarding the 'blending' vs 'dct' settings for `fusion_weights`: 'blending' creates additive weights for each view that simply take care of smoothly blending views into each other at their boundaries. 'dct' also performs blending, but in addition weighs each view according to its relative image quality. In my experience this improves final reconstruction quality (in some cases substantially). On the flip side, using 'dct' also increases processing time significantly.
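For intuition, a simple 1D version of such additive blending weights could look like this (a sketch, not the exact profile used in the code):

```python
import numpy as np

def blending_profile(n, margin=20):
    """1D additive blending weights: linear ramps over `margin` pixels
    at both borders, flat in the middle (assumes n >= 2 * margin)."""
    w = np.ones(n)
    ramp = np.linspace(0.0, 1.0, margin)
    w[:margin] = ramp
    w[-margin:] = ramp[::-1]
    return w

# per-view weights are then normalised so they sum to one wherever views
# overlap, which blends the views smoothly into each other
```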
Thanks for the additional info! Definitely agree that a napari plugin would be a great way of making this tool more accessible. 👍
Regarding the error: running it without cupy does indeed work! It takes very long, though, and while the output looks pretty good, I'd like to try a few optimizations, which is gonna be rough without the GPU speed gains. So I'm hoping you might be able to figure out a solution.
On that note, I should also mention that I've had further conda problems when uninstalling cupy, so maybe this error is related to my installation issues earlier on? I'm using a fresh install of the latest version of Miniconda on this machine, and it seems to have stability issues. Maybe I should try reinstalling everything with an older version of conda?
Side note: when running without cupy, the message `CuPy NOT available, using multiple threads for fusion` is printed many, many times. Probably not intended behavior. 🙃
Hi Marvin, and a belated Happy New Year!
So I've given this a fresh start, hoping to get it to work with cupy, as otherwise it's just too slow. Unfortunately, I ran into the same error again (though the context is a bit different). Giving you a detailed report here in case it helps with resolving this!
Hi Jonas, a belated Happy New Year to you too!
Thanks for the detailed info regarding the installation, run and error traceback. Great that you found a way to install everything in multiple conda calls; your documentation of the steps is super useful.
I booked a session on our Windows workstation for tomorrow evening. I'll troubleshoot running the fusion using cupy and post the result here.
Heyho, so I could reproduce the cupy error you were getting. It's fixed by passing cupy a plain numpy array instead of an instance of `mvregfus.image_array.ImageArray` (which is a numpy subclass) in the deconvolution function. I can't remember that error occurring before, but well, this fixes it.
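The fix boils down to a conversion of this kind (illustrative; `to_gpu` and `view_image` are placeholder names, not the actual code):

```python
import numpy as np
import cupy as cp

def to_gpu(view_image):
    # view_image: an mvregfus.image_array.ImageArray (np.ndarray subclass);
    # np.asarray strips the subclass, so cupy receives a plain ndarray
    return cp.asarray(np.asarray(view_image))
```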
I pushed the fix to the main branch (together with the change we discussed above that switches a line in `mv_graph.py`), so I'd suggest you try again.
Curious to know whether it runs through! And if so, whether you're getting a performance increase by using cupy. Speaking of performance, using 'blending' weights instead of 'dct' weights should gain you some time as well. Btw, if you have `bokeh` (>2.4.2, <3) installed, you can observe the progress in more detail by looking at the dask dashboard in your browser at http://localhost:8787.
Nice! I've tried it and the error is indeed gone!
Unfortunately, it subsequently runs into a `cupy.cuda.memory.OutOfMemoryError` (full output below).
According to the Task Manager, the GPU's `Dedicated GPU memory usage` gradually fills up, and the error occurs soon after it's completely full. Note that `Shared GPU memory usage` remains almost empty (I have no experience with GPUs, so not sure if that's relevant).
I think the problem here is likely that the GPU on this machine just isn't good enough. It only has 8GB of dedicated memory (96GB of shared). If the dedicated memory is what matters, I'm not surprised there is a problem, since my input data is ~33GB in total (3 channels, 3 angles; so ~3.7GB per z-stack).
To be honest, I had just assumed that the GPU would be good enough, since this is a dedicated light sheet processing PC, but 8GB does not sound like much, and some quick googling of the model (NVIDIA Quadro P4000) shows that it's like 6 years old...
I think we have some meatier GPUs available on our cluster. I'll see if I can run it on there one of these days. Will get back to you!
Thanks again for your help in sorting this stuff out! 👍
Ok, good that the error is gone 👌.
In this case we can limit how many threads are trying to access the GPU at the same time. At the top of mv_graph.py
, try uncommenting the second part of the line creating the dask cluster, limiting it to a single thread as such:
client = Client(processes=False, threads_per_worker=1)
It works! 🎉
I could even increase it to `threads_per_worker=3` to make use of most of the GPU's memory without overloading it! It now takes ~1h to get everything done (with deconvolution and dct enabled), down from >30h without cupy. Nice! 🚀
Now I should be able to optimize parameters a bit to improve the quality (it doesn't look quite as good as the ImageJ result yet). I reckon lowering `mv_final_spacing` might be a good start? I'll try some things next week and let you know how it goes.
Might still try it on the cluster at one point, just to see how much faster it could be on a good GPU.
Thanks a bunch for the support! 🥇
Great :)
> Now I should be able to optimize parameters a bit to improve the quality (it doesn't look quite as good as the ImageJ result yet). I reckon lowering `mv_final_spacing` might be a good start? I'll try some things next week and let you know how it goes.
Yes, with `mv_final_spacing` you can lower the pixel spacing of the output. Also, make sure to set `raw_input_binning` to `[1,1,1]` or at least `[2,2,1]`.
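Concretely, that would amount to something like the following in the configuration file (the spacing value below is just an example, not a recommendation):

```python
import numpy as np

raw_input_binning = [2, 2, 1]           # xyz binning of raw input; [1,1,1] = none
mv_final_spacing = np.array([0.5] * 3)  # output pixel spacing in um
```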
Solved after discussion in #10.
Hi Marvin,
I wanted to try this out for some test data I've recently collected, but ran into some trouble.
First, as an aside, installation of the dependencies using conda (freshly installed and updated) on our instrument's Windows 10 Enterprise LTSC processing PC did not work using `conda env create --file MVRegFus/mv_environment.yml` (got stuck at solving environment; same with `mv_environment_windows.yml`). I had to manually make an empty env (`conda create -n mvregfus python=3.7`) and then install the dependencies step by step for it to work. Could be because conda is now on Python 3.9, but note that `conda env create --file MVRegFus/mv_environment.yml python=3.7` also didn't work.

Second, and more importantly, now that the installation is complete, I am trying to run it with the configurations pasted below. Not sure if everything is correct; the input file is a single-timepoint, 3-color (1st channel DAPI) 3D stack with 3 views (each rotated by 120 degrees), acquired using ZEN 3.1 (Blue) version 3.1.0.00002. I'd like to register and fuse views 1 and 2 onto view 0.
Please let me know if there's something obviously wrong in the configurations. See below for the error I get when I run it.
Configurations
```python
##########################
#### parameters to modify
#### approx. in descending order of relevance
##########################

# where elastix can be found (top folder)
elastix_dir = r'D:\SWAP\Jonas\_software\elastix'

# list of files to fuse
filepaths = [r'D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\tp3_e1-1.czi']

# where to save the output
out_dir = os.path.dirname(filepaths[0])  # e.g. same folder as first file in filepaths

# channels to fuse
channels = [0,1,2]
channelss = [channels]*len(filepaths)

# channel to use for registration
reg_channel = 0
reg_channels = [reg_channel] *len(filepaths)

# reference view for final fusion
ref_view = 0
ref_views = [ref_view] *len(filepaths)

# list of pairwise view indices to perform registration on
registration_pairs = [[0,1], [0,2]]
#registration_pairs = None
registration_pairss = [registration_pairs] *len(filepaths)

# optionally, specify the meanings of the indices
# occurring in the list of pairs
# this can be used to fuse illuminations independently
# using view_dict, which is a dictionary containing
# the indices as keys and dictionaries defining the indices as items, such as:
# >>> view_dict[0] = {'view': 0,  # view 0 within the file
#                     'ill': 1,   # illumination 1 within that view
#                     }
# another example:
# >>> view_dict[0] = {'view': 0,    # view 0 within the file
#                     'ill': None,  # like this, both ills of this view are fused using the average of ills
#                     }
# another example:
# >>> view_dict[0] = {'view': 0,  # view 0 within the file
#                     'ill': 2,   # like this, both ills of this view are fused using blending weights
#                     }
# in case of treating ills as independent views:
# - illumination 0 comes from left
# - illumination 1 comes from right
# - rotating in positive direction (in angles)
#   brings left to the front
# so it makes sense to define the registration pairs like this: (view, ill)
# (0,1),(0,0)
# (0,0),(1,1)
# (1,1),(1,0)
# (1,0),(2,1)
# etc.

# four view example:
view_dict = {i: {'view': i, 'ill': 0} for i in [0, 1, 2]}

# if ills of all views should be averaged, set view_dict to None:
#view_dict = None

# how to calculate final fusion volume
# 'sample': takes best quality z plane of every view to define the volume
# 'union': takes the union of all view volumes
final_volume_mode = 'sample'

# whether to perform an affine chromatic correction
# and which channel to use as reference
perform_chromatic_correction = False
ref_channel_chrom = 0

# binning of raw input from views (x,y,z)
# [1,1,1]: no binning
# shapes of views to be registered should not significantly
# exceed ~(400, 400, 400)
raw_input_binning = [5,5,2]

# background level to subtract
background_level = 200

# which binning to use for registration
#mv_registration_bin_factors = np.array([1,1,1])
mv_registration_bin_factors = np.array([4,4,4])

# registration mode for pairwise view registration
# (default is 2)
# -1: only preregistration (translation, no elastix)
#  0: only translation
#  1: translation + rotation
#  2: translation + rotation + affine
pairwise_registration_mode = 2

# final output spacing in um
mv_final_spacing = np.array([1.]*3)

# options for fusion
# fusion_method
# 'weighted_average': weighted average of views using the given weights
# 'LR': Lucy-Richardson multi-view deconvolution
fusion_method = 'LR'
# fusion_method = 'weighted_average'

# fusion weights
# 'blending': uniform weights with blending at the stack borders
# 'dct': weights derived from DCT image quality metric
# fusion_weights = 'dct'
fusion_weights = 'blending'

# options for DCT image quality metric for fusion
# setting None automatically calculates good values
# size of the cubic volume blocks on which to calc quality
dct_size = None
# size of maximum filter kernel
dct_max_kernel = None
# size of gaussian kernel
dct_gaussian_kernel = None

# weight normalisation parameters
# normalise such that approx.
```

Output and errors
```
(mvregfus) D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw>python mvregfus_run.py
http://localhost:8787
D:\SWAP\Jonas\_software\Miniconda3\envs\mvregfus\lib\site-packages\numpy\core\fromnumeric.py:86: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
  return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
These pairs of keys will be registered:
[[0, 1], [0, 2]]
They refer to the keys in 'view_dict':
{0: {'filename': 'D:\\SWAP\\Jonas\\20221013 - LS_Z1 test - '
                 'DAPI-Sox10nRmG-mitfaHCR\\tp3_e1-1\\raw\\tp3_e1-1.czi',
     'ill': 0,
     'view': 0},
 1: {'filename': 'D:\\SWAP\\Jonas\\20221013 - LS_Z1 test - '
                 'DAPI-Sox10nRmG-mitfaHCR\\tp3_e1-1\\raw\\tp3_e1-1.czi',
     'ill': 0,
     'view': 1},
 2: {'filename': 'D:\\SWAP\\Jonas\\20221013 - LS_Z1 test - '
                 'DAPI-Sox10nRmG-mitfaHCR\\tp3_e1-1\\raw\\tp3_e1-1.czi',
     'ill': 0,
     'view': 2}}
INFO: setting registration degree to 2 (trans+rot+aff)
Skipping groupwise registration!
DECORATOR local...
readStackFromMultiviewMultiChannelCzi
producing D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\mv_input_view_000_000_v000_c02.zarr
reading D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\tp3_e1-1.czi view 0 ch 2 ill 0
Binning down raw input by xyz factors [5, 5, 2]
old shape: 640 1920 1920
new shape: 320 384 384
DECORATOR local...
readStackFromMultiviewMultiChannelCzi
producing D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\mv_input_view_000_000_v001_c02.zarr
reading D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\tp3_e1-1.czi view 1 ch 2 ill 0
Binning down raw input by xyz factors [5, 5, 2]
old shape: 473 1920 1920
new shape: 236 384 384
DECORATOR local...
readStackFromMultiviewMultiChannelCzi
producing D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\mv_input_view_000_000_v002_c02.zarr
reading D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\tp3_e1-1.czi view 2 ch 2 ill 0
Binning down raw input by xyz factors [5, 5, 2]
old shape: 477 1920 1920
new shape: 238 384 384
DECORATOR local...
readStackFromMultiviewMultiChannelCzi
producing D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\mv_input_view_000_000_v001_c01.zarr
reading D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\tp3_e1-1.czi view 1 ch 1 ill 0
Binning down raw input by xyz factors [5, 5, 2]
old shape: 473 1920 1920
new shape: 236 384 384
DECORATOR local...
bin_stack
Traceback (most recent call last):
  File "mvregfus_run.py", line 235, in <module>
```

Any ideas? 🙃