nbathreya opened this issue 1 year ago
Assuming you are talking about HCS stores, this is possible:

```python
position: iohub.ngff.Position
array: np.ndarray

# write
position.zgroup["name_of_array"] = array
# read
array = position.zgroup["name_of_array"]
```
where `NGFFNode.zgroup` is the underlying Zarr group of the NGFF node (plate, position, etc.). Refer to the Zarr docs for API details.
Masks can also be stored as NGFF labels arrays. There is currently no dedicated API for this, but it can be achieved with:

```python
from iohub.ngff_meta import LabelsMeta, LabelsColorMeta, ImageLabelMeta

label_group = position.zgroup.group("labels")
label_group["name_matching_image/0"] = label_array

# initialize label metadata
labels_meta = ...
image_label_meta = ...

label_group.attrs["labels"] = labels_meta
label_group["name_matching_image"].attrs["image-label"] = image_label_meta
```
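For reference, a sketch of the plain-dict metadata those objects serialize to, following the OME-NGFF 0.4 `image-label` spec. The label value and RGBA color below are made-up illustrative values, not anything from this issue:

```python
# attrs on the "labels" group: lists the label image paths it contains
labels_group_attrs = {"labels": ["name_matching_image"]}

# attrs on the label image group itself
image_label_attrs = {
    "image-label": {
        "version": "0.4",
        "colors": [
            # render label value 1 as opaque red (illustrative)
            {"label-value": 1, "rgba": [255, 0, 0, 255]},
        ],
    }
}
```

The metadata classes in `iohub.ngff_meta` wrap these structures so they are validated before being written to `.attrs`.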
If you find this preferable, we can think about implementing it explicitly in iohub.
Thanks for alerting me to this issue @ziw-liu. Yes, it will be useful to have an API to add the label group and link it to the matching image for our work on virtual staining and segmentation of nuclei and membranes. Let's do this during the hackathon and document it.
If the user calculates a max intensity projection on every channel for every FOV, along with perhaps some segmentation results, what would be the best way to store them in the new version of the Zarr store?
The TCZYX dimensions of the raw microscopy data are not the same as those of the max projection and binary mask data.
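To make the dimension mismatch concrete, a small NumPy sketch (shapes are made up for illustration): a max projection along Z collapses that axis, so the result cannot share the raw image's shape unless a singleton Z axis is kept.

```python
import numpy as np

raw = np.zeros((2, 3, 5, 64, 64), dtype=np.uint16)  # TCZYX
mip = raw.max(axis=2)                               # TCYX: Z axis collapsed
mip_5d = mip[:, :, np.newaxis]                      # TC1YX: singleton Z restored
```

Keeping the singleton Z axis gives the projection a valid 5D shape, so it could be stored as its own NGFF image with metadata independent of the raw data's.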