PreibischLab / BigStitcher

ImgLib2/BDV implementation of Stitching for large datasets
GNU General Public License v2.0

Fusion of 2D images is blank #35

Open mpinkert opened 5 years ago

mpinkert commented 5 years ago

I recently tried stitching some 2D grids with BigStitcher and found that they load into the dataset and can be visualized in BigDataViewer, but that any fusion of them comes up blank. Images taken in 3D on the same system show up fine in fusions.

I have not been able to find a fusion setting that shows the 2D grid; I may be missing something, but if not, this could be a bug.

I have uploaded three separate examples of this bug. All of them are low intensity, so viewing them requires setting the brightness & color minimum to 0 in BigDataViewer, or adjusting the window/level of the fused image.

The "Small 2D and 3D Dataset" folder holds a minimum example of this problem. One 2x2 3D grid and one 2x2 2D grid.

hoerldavid commented 5 years ago

Hi @mpinkert,

So I found the reason for this bug: when we fuse 3d images (by default, we treat everything as 3d), we only use intensity values at locations with min_z < z < max_z. For single-slice images, min_z == max_z, and therefore the results are blank.
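To illustrate, here is a minimal sketch of such a strict bounds test (hypothetical names, not the actual multiview-reconstruction code): for a single slice, min_z == max_z, so the strict inequalities reject every pixel.

```java
// Hypothetical sketch of a strict z-bounds test during fusion.
// For a single-slice image, minZ == maxZ, so the strict inequalities
// are false for every z and no intensity value is ever used.
public class ZBoundsSketch {
    static boolean contributes(final long z, final long minZ, final long maxZ) {
        return minZ < z && z < maxZ; // strict on both sides
    }

    public static void main(final String[] args) {
        final long minZ = 5, maxZ = 5; // single slice located at z = 5
        System.out.println(contributes(5, minZ, maxZ)); // false -> blank fusion
    }
}
```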

We had already implemented a special case for 2d images, but it would only kick in if the images are 2d and their z-location is 0. In the case of your images, the z-location from the metadata is not 0, so the special case did not trigger. I wrote a workaround that virtually moves the images to z = 0 if you are trying to fuse just 2d images: https://github.com/PreibischLab/multiview-reconstruction/commit/f2d78a12e504db31236063149c6c63d1ea757c7b. This should fix the problem when fusing only 2d images, but in your mixed 2d/3d dataset, fusing all images together would still leave the 2d slices blank. We will include the fix in the next release.
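Conceptually, the workaround amounts to something like the following sketch, using ImgLib2's AffineTransform3D (the class and method names of the sketch itself are illustrative; the actual change is in the commit linked above):

```java
import net.imglib2.realtransform.AffineTransform3D;

// Hypothetical sketch: if all views are 2d, pre-apply a translation that
// moves each view's z-offset to 0 so the existing 2d special case triggers.
public class VirtuallyMoveToZeroSketch {
    static AffineTransform3D moveToZeroZ(final AffineTransform3D viewTransform) {
        // z-component of the view's translation after registration
        final double zOffset = viewTransform.get(2, 3);
        final AffineTransform3D shifted = viewTransform.copy();
        shifted.translate(0, 0, -zOffset); // virtually move the slice to z = 0
        return shifted;
    }
}
```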

In general, the reason we are not using the 2d slices in a 3d fusion is that all intensity values on the border of an image are the result of interpolation, and we would have to treat them differently from interior pixels during the fusion process (essentially, we would have to "alpha-blend" those pixels instead of just taking a weighted average, as we do at the moment). So fixing this properly would be a major rewrite of the fusion code. What are your thoughts on this, @StephanPreibisch? (On a side note, this probably explains the behavior you described in #38 as well, just as suspected.)
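To make the distinction concrete, here is a hedged per-pixel sketch (hypothetical helper names, not the actual fusion code): the current approach takes a plain weighted average of all contributions, while alpha blending would additionally scale each contribution by a coverage value that is 1 for interior pixels and less than 1 for interpolated border pixels.

```java
// Sketch: two ways of combining n contributing views at one output pixel.
public class BlendSketch {
    // Current approach: plain weighted average of all contributions.
    static double weightedAverage(final double[] values, final double[] weights) {
        double num = 0, den = 0;
        for (int i = 0; i < values.length; i++) {
            num += weights[i] * values[i];
            den += weights[i];
        }
        return den > 0 ? num / den : 0;
    }

    // Alpha blending: each contribution also carries a coverage alpha in [0,1]
    // (1 for interior pixels, < 1 for interpolated border pixels), so partially
    // covered border samples contribute proportionally instead of fully.
    static double alphaBlend(final double[] values, final double[] weights,
                             final double[] alphas) {
        double num = 0, den = 0;
        for (int i = 0; i < values.length; i++) {
            num += alphas[i] * weights[i] * values[i];
            den += alphas[i] * weights[i];
        }
        return den > 0 ? num / den : 0;
    }
}
```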

I have been working on an accelerated fusion for translation-only datasets, in which I actually alpha-blend the border pixels: https://github.com/PreibischLab/BigStitcher/tree/fastfusion. It is still a bit experimental, but we will add it to the main release channel in the near future. Using it, fusion of mixed 2d/3d datasets should work as well, although not for more complicated non-translation registrations (and most interest-point-based registration steps are 3d-only at the moment anyway).
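As a rough illustration of what translation-only registrations make easy (a hypothetical 2d sketch, not the fastfusion code): each tile can be pasted into the fused canvas at its offset while intensity and weight are accumulated, with border samples down-weighted by an alpha, and the result normalized at the end.

```java
// Hypothetical sketch of translation-only fusion on a 2d canvas: paste each
// tile at its integer offset, accumulate alpha-weighted intensity and weight,
// then normalize. Interior pixels get alpha = 1, border pixels alpha < 1.
public class FastFusionSketch {
    static float[] fuse(final int outW, final int outH,
                        final float[][] tiles, final int[] tileW, final int[] tileH,
                        final int[] offX, final int[] offY) {
        final float[] num = new float[outW * outH];
        final float[] den = new float[outW * outH];
        for (int t = 0; t < tiles.length; t++) {
            for (int y = 0; y < tileH[t]; y++) {
                for (int x = 0; x < tileW[t]; x++) {
                    final int fx = x + offX[t], fy = y + offY[t];
                    if (fx < 0 || fy < 0 || fx >= outW || fy >= outH)
                        continue;
                    // down-weight the 1-pixel border (illustrative value)
                    final boolean border =
                        x == 0 || y == 0 || x == tileW[t] - 1 || y == tileH[t] - 1;
                    final float alpha = border ? 0.5f : 1f;
                    num[fy * outW + fx] += alpha * tiles[t][y * tileW[t] + x];
                    den[fy * outW + fx] += alpha;
                }
            }
        }
        final float[] fused = new float[outW * outH];
        for (int i = 0; i < fused.length; i++)
            fused[i] = den[i] > 0 ? num[i] / den[i] : 0;
        return fused;
    }
}
```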

Anyway, I hope that this at least explains the issue, and that the fix we wrote will solve the problem for "normal" 2d datasets.

Best, David

mpinkert commented 5 years ago

Hi @hoerldavid ,

This behavior makes a great deal of sense. I'll make sure to acquire a second z-slice for the application where I was mixing 2D/3D slices, so that the special case doesn't trip it up. Thanks for working hard on this!

Best, Michael