Hoping someone can help us understand the pixel spacing in fused TIFF stacks produced by BigStitcher. In short, the pixel spacing written into the TIFF stack seems to be wrong, but perhaps there is a reason it's this way.
Our light-sheet systems save raw datasets as HDF5/XML files, which we then fuse in BigStitcher into TIFF stacks. In this case each raw camera frame is angled, so our volumetric tiles are parallelepipeds. This means the voxel spacing in the raw data differs between dimensions, and the voxels do not lie on an orthogonal grid.
When we open the raw data in BigStitcher, it shows coordinates in “BigStitcher units”. My understanding is that these units are isotropic and, for anisotropic voxel spacing, 1 BigStitcher unit equals the smallest voxel dimension. In other words, if our voxel size in um is <0.376, 0.2417, 0.376>, then the BigStitcher coordinate (1, 1, 1) corresponds to (0.2417 um, 0.2417 um, 0.2417 um). We're dealing with 2048 px frames that are 770 um wide. The frames appear as 3186 BigStitcher units wide, which checks out: 3186 * 0.2417 um = 770 um.
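The unit conversion above can be checked with a quick sketch. The voxel sizes and frame width are the values quoted in this post; this is just the arithmetic described in the text, not anything from BigStitcher's actual code:

```python
# Sanity check: convert a raw camera frame from pixels to BigStitcher units.
voxel_um = (0.376, 0.2417, 0.376)   # raw voxel size in um (assumed x, y, z)
unit_um = min(voxel_um)             # 1 BigStitcher unit = smallest voxel dimension

frame_px = 2048
frame_um = frame_px * 0.376         # frame width in um (~770 um)
frame_units = frame_um / unit_um    # frame width in BigStitcher units

print(round(frame_um, 1))           # ~770.0
print(round(frame_units))           # ~3186
```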
After fusing the dataset to a TIFF stack, BigStitcher resamples the volume onto an isotropic, orthogonal voxel grid. Judging by the pixel counts, the sampling of the fused dataset matches the size of the BigStitcher units: each TIFF in the fused stack is 3186 px wide, suggesting the pixels should be 0.2417 um. However, the spacing reported by Fiji (Image > Properties) is 0.2129 um, about 12% smaller, and the frame width it reports is correspondingly off by 12% (678 um). Where does the pixel size in the fused TIFF stack come from? Is it actually wrong, or is there something else going on? Thanks for any insight!
This also seems similar to this issue from several years back.