Closed miketaormina closed 3 years ago
Okay, I have a somewhat better idea of what's going on and will leave a note here before closing this issue. This data was collected on a diSPIM in stage-scanning mode, and so has an inherent skew along that axis. I have been compensating for this via the "pixel shift" method when building the hdf5 that BigStitcher sees (because there is no interpolation, this method produces cleaner data to my eye).
The result, however, is that the actual position stacks contain large regions of empty pixels, which end up overlapping adjacent regions that contain real data. I'm not sure of the specifics, but these empty regions are presumably what causes the fusion algorithm to produce this effect; it does not appear in data that has not been de-skewed. In hindsight this is perhaps not surprising. (A sketch of what I mean by the pixel-shift method follows below.)
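For reference, here is a minimal numpy sketch of the kind of pixel-shift deskew I mean (function and parameter names are illustrative, not my actual pipeline); the zero padding it introduces is exactly the empty region described above:

```python
import numpy as np

def pixel_shift_deskew(stack, shift_per_plane):
    """De-skew a stage-scanned stack by shifting each z-plane a whole
    number of pixels along x (no interpolation). The output is zero-padded,
    so the de-skewed stack contains a large empty wedge."""
    nz, ny, nx = stack.shape
    max_shift = int(round(shift_per_plane * (nz - 1)))
    out = np.zeros((nz, ny, nx + max_shift), dtype=stack.dtype)
    for z in range(nz):
        s = int(round(shift_per_plane * z))
        out[z, :, s:s + nx] = stack[z]  # plane copied verbatim; the rest stays zero
    return out
```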
Therefore, for a data-processing pipeline whose raw data require a shear transformation, the deskew should be performed either via a transformation matrix (and therefore with interpolation) or after fusion (if a pixel-shifting method is used); a sketch of the transformation-matrix route follows below. I'm not sure how many people use the shifting method rather than an affine transform, but since many recent acquisition systems produce raw data with an inherent skew, I hope others will find this comment and avoid the confusion that I ran into here.
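For comparison, a rough sketch of the transformation-matrix route, assuming scipy.ndimage.affine_transform and the same sign convention as above (the actual shear factor and axis ordering depend on the instrument). In a BigStitcher workflow the same shear could instead be registered as an affine transform on each view rather than used to resample the stacks beforehand:

```python
import numpy as np
from scipy.ndimage import affine_transform

def affine_deskew(stack, shear, order=1):
    """De-skew with a shear matrix and interpolation instead of integer shifts.
    `matrix` maps output (z, y, x) coordinates back to input coordinates,
    i.e. x_in = x_out - shear * z, matching the pixel-shift version above."""
    nz, ny, nx = stack.shape
    matrix = np.array([[1.0,    0.0, 0.0],
                       [0.0,    1.0, 0.0],
                       [-shear, 0.0, 1.0]])
    out_nx = nx + int(np.ceil(shear * (nz - 1)))
    return affine_transform(stack, matrix,
                            output_shape=(nz, ny, out_nx),
                            order=order, cval=0.0)
```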
I have been using this tool with very good results, but have recently run into an issue when fusing images that are tiled in both x and y. The image below shows the intersection of four tiles, fused with the "smooth blending" option checked in the ImageJ plugin. While the horizontal overlap region shows good blending, the vertical overlap region is brighter than even the sum of the contributing tiles. All tiles are from the same channel and illumination direction and were aligned manually.