PreibischLab / BigStitcher

ImgLib2/BDV implementation of Stitching for large datasets
GNU General Public License v2.0

Prevent downsampling in Z direction while downsampling X and Y in Image Fusion #110

Closed Alsafadi closed 2 years ago

Alsafadi commented 2 years ago

Hi,

I have an image set consisting of 64 tiles (8 x 8) with 320 z-planes (z-planes 10 µm apart). I am trying to fuse the aligned tiles, and to limit memory usage I need to downsample a little bit. The image fusion GUI only offers a single downsampling option.

If I use the option "Preserve original data anisotropy", the software downsamples the z stack by the same ratio as x and y, which is undesirable (a lot of information is lost). However, if I do not select this option, the z stack is somehow over-sampled, which is strange. For the sake of testing, I fused 4 tiles (2 x 2) without the "Preserve original data anisotropy" option and with 3x downsampling. This resulted in 2935 z-planes, approximately 9x more than my dataset has. The strange thing is that each of the resulting planes looks different, so the software somehow computed the planes in between the real ones. Can you please comment on how this works?

Finally, if there is a way to only downsample XY without altering the Z, I would appreciate any tips!

Thanks,

Hani Alsafadi

hoerldavid commented 2 years ago

Hi @Alsafadi ,

Can you also tell us what the x&y pixel sizes of your data are? And please tell us what you see when you click "Info" in the main window of BigStitcher (upper right) - you should see whether the pixel sizes (calibration) are correctly set within BigStitcher as well.

A little explanation: the default behaviour ("Preserve original data anisotropy" OFF) will oversample so that the pixel size in the fused image is the same in every dimension (this is useful when fusing multi-view lightsheet data, where you can actually get isotropic data from images taken from multiple angles). The pixel size in that case will be the smallest of all input pixel sizes (e.g. if you have 200nm x 200nm x 2micron (xyz), the result will have 200nm in all dimensions). If you set "Preserve original data anisotropy" ON, the fused image should have the same pixel sizes as the original images.
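
For concreteness, here is a rough sketch of that arithmetic (plain Python, not BigStitcher code; the voxel sizes are the ones reported further down in this thread), which also accounts for the roughly 9x z-oversampling observed above:

```python
# Rough sketch of the fusion-output arithmetic (plain Python, not BigStitcher code),
# using the voxel sizes reported below in this thread.
xy_size, z_size = 0.3623, 10.0        # input voxel size in µm (x/y, z)
n_z_planes_in = 320
downsampling = 3

# anisotropy preservation OFF: output voxels are isotropic,
# at the smallest input pixel size times the downsampling factor
fused_voxel = min(xy_size, z_size) * downsampling        # ~1.09 µm in x, y and z
n_z_planes_out = n_z_planes_in * z_size / fused_voxel    # ~2944 planes, close to the 2935 observed

# anisotropy preservation ON: output voxels keep the input anisotropy
fused_xy, fused_z = xy_size * downsampling, z_size * downsampling   # z is also downsampled 3x
print(fused_voxel, round(n_z_planes_out), fused_xy, fused_z)
```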

I think you can solve the problem by checking the calibration, but another quick fix to get "custom downsampling" would be to add a transformation that scales your images along a specific dimension (in Multiview mode, right-click your images, select Apply Transformation(s), and then apply an Affine transformation). You actually have to specify a transformation matrix, but for pure scaling you can just adjust the diagonal elements, e.g. to scale in z by 0.5:

[screenshot: Apply Transformation(s) dialog with the affine matrix scaling z by 0.5]
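
Since the screenshot is not preserved here, a sketch of that matrix (assuming the dialog accepts the affine as twelve comma-separated, row-major values; adjust to however your dialog expects it):

```python
# Minimal sketch (not a BigStitcher API call): the 3x4 row-major affine that
# keeps x and y unchanged and scales z by 0.5; the diagonal elements are the
# per-axis scale factors.
affine_scale_z_half = [
    1.0, 0.0, 0.0, 0.0,   # x row
    0.0, 1.0, 0.0, 0.0,   # y row
    0.0, 0.0, 0.5, 0.0,   # z row: 0.5 scales z to half
]
# printed as the comma-separated string one would paste into the dialog
print(", ".join(str(v) for v in affine_scale_z_half))
```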

Hope this helps! Best, David

Alsafadi commented 2 years ago

Hi @hoerldavid,

Thanks a lot for your quick response. My pixel size data are:

Dimensions: 2048 x 2048 x 320 px
Voxel dimensions: 0.3623319685495851 x 0.3623319685495851 x 10.0 µm

Thanks a lot for your explanation. It does make sense for when different angles are applied to the tiles, which is not my case - I only have alignment in the xy plane.

I will try the quick fix you proposed and will report whether it works for my application.

-- I am not sure, but I have a vague recollection that image fusion had an option to downsample z with a different multiplier than xy, but I cannot seem to find it now.

Best regards,

Hani Alsafadi

hoerldavid commented 2 years ago

Hi @Alsafadi ,

So just to understand your goal: You want to downsample in xy, but keep z at the original 10micron spacing?

One potential solution that might also work: do the fusion with 3x downsampling but z-oversampling (anisotropy preservation OFF) in Virtual or Cached mode and display the resulting image in ImageJ. The resulting image would still be oversampled in z, but the planes are calculated on-the-fly. You could then scale the image in Fiji using Image -> Scale with interpolation set to None - that way you would essentially drop the oversampled planes and never actually compute them.
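
A minimal Jython sketch of that last rescaling step (the "Scale..." parameter string is an assumption based on Fiji's Image > Scale... dialog; it may also need explicit width/height/depth values as recorded by the macro recorder):

```python
# Jython sketch (run in Fiji's Script Editor): drop the ~9x z-oversampling of
# the virtually fused image by scaling z back down without interpolation.
from ij import IJ

imp = IJ.getImage()  # the virtually fused, z-oversampled image
# keep x and y as they are, reduce z from ~2935 oversampled planes back to ~320;
# interpolation=None picks existing planes instead of computing new ones
IJ.run(imp, "Scale...", "x=1.0 y=1.0 z=0.109 interpolation=None process create")
```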

Best, David

Alsafadi commented 2 years ago

Hi @hoerldavid, Thanks a lot.

The issue for me was that when downsampling was done with anisotropy preservation ON, I lost z-planes in the fused image, and they were not recoverable. Regardless of the z-step, it is the information in each input plane that generates the image.

In any case, I followed your suggestion and it worked:

  1. I applied an affine transformation that increases the z-scale by a factor of 3.

  2. I did image fusion with anisotropy preservation ON and a downsampling of 3.

  3. This resulted in exactly what I needed: a fused image with the same number of z-planes as the input tiles. This worked like a charm (a small sketch of the arithmetic follows after this list).
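
A small sketch of why this cancels out (plain arithmetic, my reading of the behaviour rather than BigStitcher code):

```python
# Sketch of the workaround arithmetic (plain Python, not BigStitcher code):
# pre-scaling z by 3 and then downsampling by 3 with anisotropy preserved
# cancels out in z, while xy is still downsampled 3x.
xy_size, z_step = 0.3623, 10.0   # original voxel size in µm
pre_scale_z = 3.0                # affine z scale applied in step 1
downsampling = 3.0               # fusion downsampling in step 2

fused_xy = xy_size * downsampling                 # ~1.09 µm: xy downsampled 3x
fused_z = z_step * downsampling / pre_scale_z     # 10.0 µm: z spacing unchanged
print(fused_xy, fused_z)
```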

Thanks a lot for the help,

Hani

PS: having an option not to downsample z during fusion would still be nice, without having to resort to a workaround.

hoerldavid commented 2 years ago

Hi @Alsafadi ,

Glad to hear you could solve the problem with the provided workaround.

But I agree that we should think about providing the option to do different downsampling in xy vs z. The UI is already quite complex, so we would have to think about how to integrate it seamlessly. Do you have thoughts on this @StephanPreibisch ?

Best, David