bigdataviewer / bigdataviewer_fiji

Fiji plugins for starting BigDataViewer and exporting data.
GNU General Public License v3.0

2D image defaults for ExportImagePlusAsN5Plugin #21

Closed: K-Meech closed this issue 3 years ago

K-Meech commented 3 years ago

For some 2D images, the default subsampling factors / chunk sizes are not optimal. E.g. for an 8-bit image of 8704 x 9044 pixels with a voxel size of 10.0320 x 10.0320 x 1 nanometers, the calculated downsampling factors are [[1,1,1], [1,1,2], [1,1,4], [1,1,8], [2,2,16], [4,4,32], [8,8,64], [16,16,128], [32,32,256], [64,64,512]] and the block size of the first layer is [32, 32, 256].

Would it be possible to keep the z downsampling/chunk size at 1 for 2D images? Thanks!

tpietzsch commented 3 years ago

Thanks for reporting, I will fix it ASAP!

tpietzsch commented 3 years ago

With https://github.com/bigdataviewer/bigdataviewer-core/pull/119, this should be fixed. In your example, the downsampling factors are now [[1,1,1], [2,2,2], [4,4,4], [8,8,8], [16,16,16], [32,32,32], [64,64,64]] and the block size of the first layer is [512,512,1].

K-Meech commented 3 years ago

Thanks @tpietzsch! I'm probably missing something - but should the downsampling factor in z also stay at 1? e.g. [[1,1,1], [2,2,1], [4,4,1], [8,8,1], [16,16,1], [32,32,1], [64,64,1]] for 2D?

tpietzsch commented 3 years ago

should the downsampling factor in z also stay at 1

Hmm, that's a tough question.

First: in practice, if you look at the data in 2D only, it doesn't matter. The produced rasterised images are the same. Basically, for a [2,2,2] block, the input image would be repeated in Z, so the 2 in the Z dimension basically just averages two identical values.
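To make that concrete, here is a tiny self-contained sketch (illustration only, not the actual BigDataViewer code): averaging a [2,2,2] block of a border-extended single slice gives exactly the same value as the corresponding [2,2,1] average, because the second z-plane is just a copy of the first.

```java
// Self-contained illustration: for a single-slice image extended in Z by
// repeating the slice (border extension), a [2,2,2] block average equals the
// [2,2,1] block average, because the extra z-plane holds identical values.
public class BorderExtendedAverage {

    public static void main(String[] args) {
        // one 4x4 slice of example 8-bit values
        int[][] slice = {
            {  10,  20,  30,  40 },
            {  50,  60,  70,  80 },
            {  90, 100, 110, 120 },
            { 130, 140, 150, 160 }
        };

        int blockSum = slice[0][0] + slice[0][1] + slice[1][0] + slice[1][1];

        // [2,2,1] average of the top-left block
        double avg221 = blockSum / 4.0;

        // [2,2,2] average: the second z-plane is the border-extended copy of the slice
        double avg222 = (blockSum + blockSum) / 8.0;

        System.out.println("[2,2,1] average: " + avg221); // 35.0
        System.out.println("[2,2,2] average: " + avg222); // 35.0, identical
    }
}
```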

The motivation for not taking 1 comes when we look at the data as thin sheets in 3D, and at how they transform into the global space. Downsampling with [2,2,1] etc. then makes the anisotropy in the data worse. With your 10.0320 x 10.0320 x 1 nm, you have anisotropic voxels with a size ratio of 10:1 in X:Z. In the [2,2,1] downsampled image, you would have voxels sized 20:1, etc. Displaying this fully zoomed out (which would use the [64,64,1] factors), we have pixels that are 640 times larger in X than in Z. This will lead to severe aliasing artefacts.
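For illustration, here is a small back-of-the-envelope sketch (my own arithmetic, using the voxel size from the report above) of how the X:Z voxel-size ratio grows per pyramid level when only X and Y are downsampled:

```java
// Rough arithmetic sketch: X:Z voxel-size ratio per pyramid level for
// [2,2,1]-style factors, starting from 10.0320 x 10.0320 x 1 nm voxels.
public class AnisotropyPerLevel {
    public static void main(String[] args) {
        double voxelX = 10.0320; // nm
        double voxelZ = 1.0;     // nm, stays constant because Z is never downsampled
        for (int level = 0; level <= 6; level++) {
            int factorXY = 1 << level;        // 1, 2, 4, ..., 64
            double sizeX = voxelX * factorXY; // downsampled voxel size in X
            System.out.printf("factors [%d,%d,1] -> X:Z ratio ~ %.0f:1%n",
                    factorXY, factorXY, sizeX / voxelZ);
        }
        // the last line prints roughly 642:1, i.e. the ~640x anisotropy mentioned above
    }
}
```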

I want the ProposeMipmaps algorithm to be as general as possible. For example, the "2D" code would also trigger for higher pyramid levels of thin datasets, e.g., if you have a 1000x1000x4 image, after the second downsampling it becomes "2D" for the purposes of downsampling. That's why I don't want to go with the "always 1" approach.

Now, from the above perspective, it would actually be even better to make the factors [[1,1,10], [2,2,20], ...] etc., because we end up with more isotropic pixels. But then, the first level should always be [1,1,1], and [[1,1,1], [1,1,10], [2,2,20], ...] wouldn't make sense for 2D images.

In summary: it is not super-crucial, because this is only a suggestion, and the user always has the possibility to change it to something more meaningful for their use-case.

tpietzsch commented 3 years ago

AAAAARGH...!

There is one thing I forgot.

Downsampling factors [2,2,2] induce a 0.5 pixel shift in the source-to-global transformation (because the center of the downsampled pixel falls between the 2 source pixels). In combination with 1-pixel thick sources, this messes up linear interpolation, because it makes black out-of-bounds values bleed in.
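For illustration (my own arithmetic, not code from bdv-core): with a downsampling factor f along Z, the center of downsampled voxel i sits at source coordinate f*i + (f-1)/2, i.e. for f=2 everything is shifted by half a source pixel. Sampling the downsampled level at the original slice plane then falls between the data voxel and the black out-of-bounds voxel:

```java
// Illustration of the 0.5-pixel shift from a [2,2,2] downsampling along Z and
// the resulting interpolation weight given to the black out-of-bounds value
// when a 1-voxel-thick source is sampled with linear interpolation.
public class HalfPixelShift {
    public static void main(String[] args) {
        double factorZ = 2.0;

        // center of downsampled voxel 0 in source coordinates:
        // z_src = factorZ * i + (factorZ - 1) / 2
        double centerOfVoxel0 = factorZ * 0 + (factorZ - 1) / 2; // = 0.5

        // The single data slice of the full-resolution image is centered at z_src = 0.
        // Sampling the downsampled level at that plane means sampling at
        double zDown = (0.0 - (factorZ - 1) / 2) / factorZ; // = -0.25

        // Linear interpolation between voxel -1 (out of bounds, black) and voxel 0
        // (the data) gives the black value a weight equal to the distance of the
        // sample from voxel 0:
        double weightBlack = 0.0 - zDown;      // = 0.25
        double weightData = 1.0 - weightBlack; // = 0.75

        System.out.println("center of downsampled voxel 0 (source z): " + centerOfVoxel0);
        System.out.println("sample position in downsampled coords:    " + zDown);
        System.out.println("weight of black out-of-bounds value:      " + weightBlack);
        System.out.println("weight of the data value:                 " + weightData);
    }
}
```

So even at the original slice plane, a quarter of the interpolated value comes from black.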

Here are the two competing artefacts in action:

Downsampled with [2,2,1] etc. has no interpolation-bleeding artefacts, but severe aliasing when viewed from the side. [screenshots]

Downsampled with [2,2,2] etc. has no aliasing artefacts, but black bleeds into the interpolation. [screenshots]

So... hmm... I'm not sure how to solve this dilemma, yet.

constantinpape commented 3 years ago

@tpietzsch could you clarify what happens when you apply downsampling with factor [2, 2, 2] to 2d data? If I understand it correctly, the image would actually not be down-sampled in the z-dimension but upsampled; so if we start off with an image of extent [1, 512, 512] we would have [2, 256, 256] after the sampling.

Is this correct?

tischi commented 3 years ago

Maybe related to @constantinpape's question:

Basically, for a [2,2,2] block, the input image would be repeated in Z, so the 2 in the Z dimension basically just averages two identical values.

Is this the case because your out-of-bounds factory during downsampling is set to extend the border value to infinity?

tpietzsch commented 3 years ago

@tpietzsch could you clarify what happens when you apply downsampling with factor [2, 2, 2] to 2d data? If I understand it correctly, the image would actually not be down-sampled in the z-dimension but upsampled; so if we start off with an image of extent [1, 512, 512] we would have [2, 256, 256] after the sampling.

Is this correct?

No, it would be [1, 256, 256]

Basically, for a [2,2,2] block, the input image would be repeated in Z, so the 2 in the Z dimension basically just averages two identical values.

Is this the case because your out-of-bounds factory during downsampling is set to extend the border value to infinity?

Correct, the input image is extended with border values to infinity. Then, in @constantinpape's example, a [2, 512, 512] block would be used to create the downsampled [1, 256, 256]. The [2, ..., ...] is created by the border extension.
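For anyone who wants to see that border extension in code, here is a minimal imglib2 sketch (dimensions written in x,y,z order; just an illustration of Views.extendBorder, not the actual export code):

```java
import net.imglib2.RandomAccessibleInterval;
import net.imglib2.img.array.ArrayImgs;
import net.imglib2.type.numeric.integer.UnsignedByteType;
import net.imglib2.util.Intervals;
import net.imglib2.view.Views;

// A single-slice 512x512x1 image is extended with border values to infinity,
// and a 512x512x2 block is cut out of the extension; the second z-plane is a
// copy of the only slice.
public class BorderExtensionExample {
    public static void main(String[] args) {
        RandomAccessibleInterval<UnsignedByteType> slice =
                ArrayImgs.unsignedBytes(512, 512, 1);

        RandomAccessibleInterval<UnsignedByteType> block = Views.interval(
                Views.extendBorder(slice),
                new long[] { 0, 0, 0 },
                new long[] { 511, 511, 1 });

        System.out.println(java.util.Arrays.toString(
                Intervals.dimensionsAsLongArray(block))); // [512, 512, 2]
    }
}
```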

tpietzsch commented 3 years ago

@tpietzsch could you clarify what happens when you apply downsampling with factor [2, 2, 2] to 2d data? If I understand it correctly, the image would actually not be down-sampled in the z-dimension but upsampled; so if we start off with an image of extent [1, 512, 512] we would have [2, 256, 256] after the sampling.

Basically: from the point of view of the produced pixel data, downsampling [2,2,2] and [2,2,1] have identical [256,256,1] results here. The difference is in the transform that is applied to the downsampled data when "back-projecting" it into isotropic world space. A voxel from the [2,2,2] version would be back-projected to have twice the Z-size of a voxel from the [2,2,1] version.

tischi commented 3 years ago

A voxel from the [2,2,2] version would be back-projected to have twice the Z-size than a voxel from the [2,2,1] version.

What we typically do is adjust the voxel size in z to whatever makes sense. For example, sometimes we put a really large voxel size in z to make the image essentially infinitely thick.

Downsampled with [2,2,2] etc has no aliasing artefacts but black bleeds into the interpolation

I wonder now how choosing different voxel sizes in z would change the appearance (bleeding of black due to interpolation) of such a 2D slice as viewed from the side.

tinevez commented 3 years ago

Naive question: what if we select a block size of [x, x, 1] each time?

tpietzsch commented 3 years ago

What we typically do is adjust the voxel size in z to what makes sense. For example sometimes we put a really large voxel size in z to have the image essentially infinitely thick.

This is a good idea.

I wonder now how choosing different voxel sizes in z would change the appearance (bleeding of black due to interpolation) of such a 2D slice as viewed from the side.

It will not change the bleeding of black in any way, because the 0.5 pixel shift scales with the voxel size. However, choosing a really large voxel size in z:

1) will nudge the mipmap proposal algorithm to always suggest [x,x,1] downsampling factors (it will downsample only in XY until the downsampled voxels are roughly isotropic, which with a sufficiently large Z size is never). Therefore: no bleeding artefacts. A rough sketch of this heuristic follows below.

2) will also avoid aliasing artefacts, because when viewed from the side the voxels are not so thin.

So this is a nice workaround in practice 👍
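To make point 1 above concrete, here is a rough sketch of that kind of heuristic (my own simplification, not the actual ProposeMipmaps implementation): double the factor only for dimensions whose accumulated voxel size is still thinner than the thickest one.

```java
import java.util.Arrays;

// Simplified mipmap-proposal heuristic: per level, downsample only the dimensions
// whose current voxel size is smaller than the largest one (or all dimensions once
// the voxels are isotropic). With an artificially large Z voxel size, Z never
// catches up, so the proposal stays [x,x,1] on every level.
public class MipmapProposalSketch {
    public static void main(String[] args) {
        double[] size = { 10.0320, 10.0320, 1000.0 }; // nm, artificially thick Z
        int[] factors = { 1, 1, 1 };

        System.out.println(Arrays.toString(factors));
        for (int level = 1; level <= 6; level++) {
            double maxSize = Math.max(size[0], Math.max(size[1], size[2]));
            boolean isotropic =
                    size[0] == maxSize && size[1] == maxSize && size[2] == maxSize;
            for (int d = 0; d < 3; d++) {
                if (isotropic || size[d] < maxSize) {
                    factors[d] *= 2; // downsample this dimension at this level
                    size[d] *= 2;    // its voxels get twice as thick
                }
            }
            System.out.println(Arrays.toString(factors));
        }
        // prints [1, 1, 1], [2, 2, 1], [4, 4, 1], ..., [64, 64, 1]
    }
}
```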

Naive question: and if we select block size of [x x 1] each time?

It avoids the interpolation bleeding, but runs the risk of aliasing artefacts, because the voxels possibly get more and more anisotropic (illustrated in the screenshots). Combined with "setting artificially large voxel size in Z" it works nicely.

tischi commented 3 years ago

Combined with "setting artificially large voxel size in Z" it works nicely.

I am not even sure it is always "artificially". Maybe the visual impression one gets by choosing "just the right voxel size" is a scientifically almost correct one?

For example, for a single slice acquired with a confocal microscope, you would like to see a "fade out" along the z-axis corresponding to the axial size of the PSF of the microscope. I wonder whether the "bleeding of black" from the planes above and below would (more or less) mimic this effect, and thus give a (more or less) correct visual impression.

For ultra-thin section EM however, I don't know. There one would probably really want to see a super-thin plane. But then in practice, it is super annoying to locate that plane in 3D (along the z-axis) to actually see anything, so one would probably go for a larger voxel size in z anyway.