> If we drag and drop the different positions separately into napari, we can see that both the scaling and the translation were applied, but now we have to use the slider to switch between positions.
Why do you need to slide the bar, though? It appears that they are loaded as duplicate layers, so blending should work?
I don't need the slider; it appears when you drag and drop the two different positions into napari.
I modified your code a little bit to write the FOVs in the same well:
import numpy as np

from iohub.ngff import open_ome_zarr, TransformationMeta

store_path = "test_translate.zarr"

# Two random uint16 test volumes with TCZYX shape (1, 3, 3, 32, 32)
tczyx_1 = np.random.randint(
    0, np.iinfo(np.uint16).max, size=(1, 3, 3, 32, 32), dtype=np.uint16
)
tczyx_2 = np.random.randint(
    0, np.iinfo(np.uint16).max, size=(1, 3, 3, 32, 32), dtype=np.uint16
)

# TCZYX transform parameters
coords_shift = [1.0, 1.0, 1.0, 10.0, 10.0]
coords_shift2 = [1.0, 1.0, 0.0, -10.0, -10.0]
scale_val = [1.0, 1.0, 1.0, 0.5, 0.5]

translation = TransformationMeta(type="translation", translation=coords_shift)
scaling = TransformationMeta(type="scale", scale=scale_val)
translation2 = TransformationMeta(
    type="translation", translation=coords_shift2
)

with open_ome_zarr(
    store_path,
    layout="hcs",
    mode="w-",
    channel_names=["DAPI", "GFP", "Brightfield"],
) as dataset:
    # Create and write to two positions (FOVs) in the same well
    # This affects the tile arrangement in visualization
    position = dataset.create_position(0, 0, 0)
    position.create_image("0", tczyx_1, transform=[translation])
    position = dataset.create_position(0, 0, 1)
    position.create_image("0", tczyx_2, transform=[translation2, scaling])
    # Print dataset summary
    dataset.print_tree()
And it works as expected, either by dragging in all the FOVs or by opening them from the command line with napari test_translate.zarr/0/0/*.
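If you want the same thing programmatically, here is a minimal sketch (assuming napari and the napari-ome-zarr plugin are installed) that expands the same glob in Python:

# Minimal sketch: open each FOV under the well as its own set of layers.
# The path and plugin name mirror the command line above.
from glob import glob

import napari

viewer = napari.Viewer()
for fov_path in sorted(glob("test_translate.zarr/0/0/*")):
    viewer.open(fov_path, plugin="napari-ome-zarr")
napari.run()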
For multiple channels there will be duplicates, but I guess that's fine. Writing the multiple positions into one well works. I think that for movies and good snapshots, we will just create a separate zarr store that can do some blending between positions.
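As a hedged sketch of what that separate store could look like, the following reads the two FOVs back, blends a 16-pixel overlap with a simple per-pixel maximum, and writes the mosaic as its own single-FOV store; the layout="fov" call and the maximum-blend rule are assumptions here, not anything settled in this thread:

import numpy as np

from iohub.ngff import open_ome_zarr

# Read both FOVs back from the HCS store written above
with open_ome_zarr("test_translate.zarr", mode="r") as dataset:
    fovs = [pos["0"][:] for _, pos in dataset.positions()]

# Place the two (1, 3, 3, 32, 32) FOVs on a shared canvas with a
# half-width horizontal overlap, blending the overlap with np.maximum
t, c, z, y, x = fovs[0].shape
canvas = np.zeros((t, c, z, y, x + x // 2), dtype=fovs[0].dtype)
canvas[..., :x] = fovs[0]
canvas[..., x // 2 :] = np.maximum(canvas[..., x // 2 :], fovs[1])

# Write the blended mosaic as a separate single-FOV store
with open_ome_zarr(
    "blended.zarr",
    layout="fov",
    mode="w-",
    channel_names=["DAPI", "GFP", "Brightfield"],
) as fov:
    fov.create_image("0", canvas)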
What do you think @mattersoflight?
> Writing the multiple positions into one well works.
I think this is a good solution for writing the reconstructed data and getting the analysis started. When this reconstructed data is further analyzed, we will be stitching volumes, projecting data, overlaying channels, etc. in some order. These reduced datasets should be separate zarr stores, potentially not even ome-zarr.
I am trying to use the coordinateTransformation metadata from
iohub.ngff.create_image()
and I am seeing a couple of issues that are possibly related, so I am keeping them in the same issue. The first issue, which may be more of a napari issue, is that we cannot apply individual coordinate transforms to multiple images within a position. After running this code it throws the error below, which means it expects some sort of pyramidal structured dataset.
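The snippet itself did not survive in this thread; judging from the follow-up below (a second image "1" added to the same position), it was presumably something along these lines, with every detail here being a hypothetical reconstruction rather than the verbatim original:

# Hypothetical reconstruction, inferred from the discussion below: two
# images with per-image transforms inside a single position
import numpy as np

from iohub.ngff import open_ome_zarr, TransformationMeta

tczyx_1 = np.random.randint(
    0, np.iinfo(np.uint16).max, size=(1, 3, 3, 32, 32), dtype=np.uint16
)
tczyx_2 = np.random.randint(
    0, np.iinfo(np.uint16).max, size=(1, 3, 3, 32, 32), dtype=np.uint16
)
translation = TransformationMeta(
    type="translation", translation=[1.0, 1.0, 1.0, 10.0, 10.0]
)

with open_ome_zarr(
    "test_translate.zarr",
    layout="hcs",
    mode="w-",
    channel_names=["DAPI", "GFP", "Brightfield"],
) as dataset:
    position = dataset.create_position(0, 0, 0)
    position.create_image("0", tczyx_1, transform=[translation])
    # Adding a second image to the same position triggers the error below
    position.create_image("1", tczyx_2)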
Error:
Now, if we remove the second image
position.create_image("1", tczyx_2)
and open the positions in napari with napari --plugin napari-ome-zarr test_translate.zarr, we see the two (32x32) test images as expected, but without the transformations applied to them. If we drag and drop the different positions separately into napari, we can see that both the scaling and the translation were applied, but now we have to use the slider to switch between positions.